
Manual Skills in the AI-Enabled Landscape: Use It or Lose It?

How to make the most of the information from AI-enabled devices without relying on them too much.


By: Hannah Taggart

Engineer and Regulatory Specialist, Empirical Technologies, an ATS Company


Advances in technology have always aimed to improve human lives, often in the form of devices that assist in daily activities. The automobile backup camera was first introduced in 1956 but was not included in passenger vehicles in the U.S. until the early 2000s. Before then, drivers had to check behind the vehicle manually, throwing an arm behind the passenger seat and turning to scan the area behind the car.

Eventually, as the backup camera became a more standard component, a generation of new drivers was taught to check behind the vehicle manually but could also rely on the backup camera for support. Now, there are drivers who have probably never had to back up a car without the assistance of this technology, a humbling experience for some newer drivers who receive a rental car without a backup camera.

Overreliance on technology may erode skills for tasks that have historically been performed manually. Could “use it or lose it” become a reality in the medical field?

The medical device industry has recognized the importance of artificial intelligence (AI) and how it can be integrated into medical devices. More AI-enabled devices are hitting the market; it’s time to have an important conversation about what the long-term regulatory landscape looks like for these devices and how we prevent medical professionals from becoming too reliant on the technology. 

As more medical products begin to incorporate this type of technology, it is critical to ensure the safety and efficacy of these devices. The U.S. Food and Drug Administration (FDA) is working to keep up with the evolving landscape. 

In the new draft guidance, “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations: Draft Guidance for Industry and Food and Drug Administration Staff,” the FDA aims to set updated expectations for the information included in premarket submissions for AI-enabled devices.

In this draft guidance document, the FDA defines AI-enabled devices as “devices that include one or more AI-enabled device software functions.” Many of the AI-enabled medical devices on the market are trained by providing the software with defined data sets. These data sets typically include images that have been pre-categorized by medical professionals. The software begins to learn from this training data set by identifying patterns, which are then used to classify new images.
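
To make that training process concrete, here is a minimal, hypothetical sketch in Python. The synthetic feature vectors stand in for features extracted from pre-categorized images, and the logistic-regression model is a stand-in choice; none of this reflects any specific cleared device.

```python
# Minimal sketch of the supervised-learning pattern described above:
# a model is fit on pre-labeled feature vectors (standing in for
# pre-categorized images), then used to classify new, unseen cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each row is a feature vector derived from one image,
# labeled 0 (normal) or 1 (abnormal) by medical professionals.
X_train = rng.normal(size=(200, 8))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = LogisticRegression()
model.fit(X_train, y_train)      # "learn" patterns from the training set

X_new = rng.normal(size=(5, 8))  # new images to classify
print(model.predict(X_new))      # predicted categories
```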

Some of the main goals of the draft guidance include ensuring transparency in the software and reducing bias throughout the total product lifecycle. AI bias can negatively impact the safety and efficacy of medical devices, so the FDA has provided more specific guidance on best practices to avoid it during device development. Additionally, the FDA has recommended specific information be included in the labeling of these devices, including the type of learning the software uses, its sensitivity and specificity, and a breakdown of the population represented in the training data.
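
As a rough illustration of the two labeling metrics just mentioned, the sketch below computes sensitivity and specificity from a confusion matrix; the counts are invented for the example.

```python
# Sensitivity and specificity computed from a confusion matrix of
# model predictions vs. confirmed diagnoses. Counts are illustrative.
tp, fn = 90, 10   # abnormal cases: correctly flagged vs. missed
tn, fp = 85, 15   # normal cases: correctly cleared vs. false alarms

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```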

Machine learning algorithms range from locked to adaptive. A locked algorithm always returns the same result for a given input and requires the software developer to modify the program for any updates. An adaptive algorithm has the ability to update itself as it receives additional data.
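
For readers who think in code, here is a hypothetical sketch of that distinction. The class names and the simple gradient-step update are illustrative inventions, not any cleared device's design.

```python
# A locked model's parameters never change in the field; an adaptive
# model may modify its own parameters as new data arrives.
class LockedModel:
    def __init__(self, weights):
        self.weights = weights  # frozen at release

    def predict(self, x):
        return sum(w * xi for w, xi in zip(self.weights, x))
    # Any change to self.weights requires a new software release
    # (and, potentially, a new regulatory submission).

class AdaptiveModel(LockedModel):
    def update(self, x, y, lr=0.01):
        # Self-updates its weights from field data (simple gradient step);
        # the same input may yield different outputs over time.
        error = self.predict(x) - y
        self.weights = [w - lr * error * xi
                        for w, xi in zip(self.weights, x)]
```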

Both types of algorithms present their own advantages and challenges. To date, the FDA has cleared or approved only medical devices with locked algorithms.

Collecting adequate training and validation data is the manufacturer’s responsibility, and the FDA provides best practices for ensuring enough variability is included in the data sets. Manufacturers will need to provide very specific information about training and validation data in a premarket submission as the FDA begins to scrutinize these details. This transparency requirement aims to help medical professionals and patients better understand, and gain confidence in, the algorithm.

One concern surrounding AI-enabled devices is data-drift, which occurs when the performance of a learning model degrades because the data it sees has changed, resulting in a decrease in accuracy. This is a concern for AI-enabled medical devices because the patient population, imaging techniques, or the disease itself can change over time. Manufacturers should include data-drift in their risk analysis throughout the product lifecycle to provide confidence that the algorithm continues to produce the expected outputs.
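
One simple way a manufacturer might watch for drift, sketched below with assumed data and an arbitrary threshold, is to compare the distribution of a model input in the field against its training-era baseline using a two-sample Kolmogorov-Smirnov test.

```python
# Compare the production distribution of one input feature against
# the training baseline; a significant difference suggests drift.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline = rng.normal(loc=0.0, scale=1.0, size=1000)    # training-era inputs
production = rng.normal(loc=0.4, scale=1.0, size=1000)  # shifted field inputs

stat, p_value = ks_2samp(baseline, production)
if p_value < 0.01:  # illustrative alert threshold
    print(f"Possible data-drift detected (KS statistic {stat:.3f})")
```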

Another item addressed in the new FDA draft guidance is the recommendation of performance monitoring for all AI-enabled devices. Performance plans are generally not needed in 510(k) submissions for Class II devices; however, the FDA is encouraging them for all AI-enabled device premarket submissions as part of risk management. This type of monitoring aims to provide evidence that the device produces accurate and reliable results throughout its lifetime on the market. The hope is that this additional recommendation will address safety concerns unique to AI-enabled devices, such as data-drift or biased data skewing the expected output.
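
What such monitoring might look like in practice is sketched below: track agreement between the device's output and confirmed diagnoses over a rolling window and flag when accuracy falls below a predefined threshold. The window size and threshold are illustrative assumptions, not values the FDA specifies.

```python
# Rolling-window performance monitor: records whether each device
# output matched the confirmed diagnosis and alerts on low accuracy.
from collections import deque

class PerformanceMonitor:
    def __init__(self, window=500, threshold=0.90):
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, device_output, confirmed_diagnosis):
        self.results.append(device_output == confirmed_diagnosis)

    def alert(self):
        if len(self.results) < self.results.maxlen:
            return False                     # not enough data yet
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.threshold     # trigger an investigation
```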

Most of the Class II AI-enabled devices the FDA has cleared are for use with imaging diagnostics, but their labeling states that they are not diagnostic devices and that the final diagnosis must be made by the medical professional.

Medical professionals are expected to use the device as an aid to their own diagnosis. Given the human-machine relationship of these devices in the field, the question becomes how manufacturers receive real-world use data; medical professionals do not all keep a log of when their own diagnosis agrees or disagrees with the software.

However, this information could provide critical feedback for improving the AI algorithm in future versions of the device. Manufacturers should begin to consider how best to collect, analyze, and incorporate this real-world feedback into the total lifecycle of their devices.

As AI becomes available in more types of devices and in more diverse settings, achieving positive results will require a team effort. Manufacturers must do their part to ensure these devices are safe and effective both before and after they hit the market. The FDA is working to keep pace with a fast-moving industry and provide the best guidance on how manufacturers should achieve this.

Medical schools, training programs, and medical professionals themselves must do their part to strike a balance: using the information from AI-enabled devices without relying on it too much. Medical professionals must be trained to understand the potential dangers of AI-enabled devices and to stay vigilant when using them, to continue ensuring patient safety.

Everyone needs to do their part to help foster a community in which we can maintain manual skills while still incorporating innovative new technologies, preventing a “use it or lose it” scenario.




Hannah Taggart is a forward-thinking biomedical engineer and regulatory specialist with Empirical Technologies who helps clients navigate the complex regulatory landscape to provide innovative and compliant medical devices for their patients.
