Dollars & $ense

Domesticating the Healthtech AI Beast

This year, discussions are revolving around healthtech AI best practices, oversight, and regulation.


By: Andrew (A.J.) Tibbetts

Shareholder, Intellectual Property & Technology Practice Group, Greenberg Traurig LLP

Decades have passed since the U.S. Food and Drug Administration (FDA) first approved a healthcare-related artificial intelligence (AI) algorithm, and several more since the first National Institutes of Health-sponsored work on AI. Yet healthtech AI now commands the attention of the public and the industry more than ever.

AI developers have long sought to compile high-quality data to “feed the beast” and train reliable algorithms, and to trump one another on accuracy and breadth of use cases. But a multi-pronged effort is now in full swing to tame that beast and ensure AI’s safety and efficacy for patient populations. Developers, investors, purchasers, and users of AI tools in healthcare and life sciences should be familiar with these ongoing efforts and accommodate these policies and frameworks in long-range planning to ensure the continued safe and successful operation of their products and services.

A comprehensive review of proposed guidance (including regulations) might be of limited use given its ever-evolving nature. However, it may be helpful to survey the sources of oversight and the general goals the guidance shares, even though each source might approach those goals from a slightly different angle. Being mindful of the goals is good practice, and monitoring the sources described here (and others) for applicable oversight and required practices might fairly be considered a commercial necessity.

When discussing oversight and guidelines, thoughts often turn to governmental organizations. These entities have their place, but substantial work is occurring in the non-governmental AI space. For example, the Coalition for Health AI (CHAI)—a group of providers, payers, tech companies, government agencies, and others formed to create standards for safe and effective health AI—recently released a nearly 200-page standards guide to provide “actionable” guidance for developing and deploying AI in healthcare. It is intended to align with U.S. and other governmental guidance while outlining best practices. The guide covers the AI lifecycle with respect to five themes: (1) usefulness, usability, and efficacy; (2) fairness and equity; (3) safety and reliability; (4) transparency, intelligibility, and accountability; and (5) security and privacy. In addition to covering these topics in depth, it provides six helpful use cases in various provider and payer contexts showing how to apply the guidance in different circumstances, along with checklists for implementation. CHAI has previously released other healthcare AI guidance and is striving to become a key, reliable voice on best practices for implementing healthcare AI. It is therefore wise to monitor CHAI publications and watch for its recommendations turning into standards, particularly since the group has incorporated government requirements into its recommendations.

Beyond CHAI, the American Medical Association, World Health Organization, and others are rolling out best practice advisories and recommendations that are not tethered to particular governments. These industry-specific advisories may be important to monitor for industry-specific best practices going forward.

While non-governmental oversight may have its perks, it is not the sole voice on this issue. Government organizations have also crafted regulations and requirements for AI systems. The Biden administration’s AI executive order last fall set off a furious scramble at a litany of U.S. government agencies to set out regulations for various domains. Executive Order 14110 called for the U.S. Department of Health and Human Services (HHS) to create, by early 2025, a regulatory framework for responsible and trustworthy AI use in medicine, including minimizing bias risk and encouraging proper human oversight. This past spring, HHS also released (per the executive order) a policy governing AI use by state and local governments in administering public health benefits. But HHS’s AI regulatory reach extends beyond the executive order, as evidenced by its spring 2024 rule that makes providers liable for using discriminatory decision support tools (often driven by AI), and by the “HTI-1” rule from late 2023 that required AI transparency and the publication of important information about models in use. The FDA has also been busy in recent years, publishing multiple rounds of guidance on using software and AI in medicine, including “predetermined change control plans” for devices whose configurations change with ongoing data access (e.g., machine learning) to ensure these products operate safely and effectively through such changes. The National Institute of Standards and Technology (NIST), among other federal agencies, continues to publish best practices and requirements for AI systems, particularly regarding security.


The aforementioned policies only scratch the surface of the federal government’s regulatory proposals and best-practice recommendations. Just as much statutory interest is coming from U.S. states. For example, Washington state’s “My Health My Data” law expands consumer data protections to cover health conclusions drawn from non-health data.

Colorado’s landmark AI Act lays out requirements for developers and deployers of high-risk systems—especially those used in healthcare—including taking steps to reduce discrimination, documenting a system’s intended benefits and uses and the limitations on its use, and making other fundamental facts available to deployers of the developed systems. Other states are proposing to restrict the use of AI to replace traditional clinician (e.g., nursing) decisions; to regulate use of AI without human oversight; to more strictly regulate AI in certain health contexts (e.g., mental health); to clarify liability for use of biased AI; and more.

Jurisdictions outside the United States are also establishing healthcare regulatory frameworks. Under the European Union’s AI Act, artificial intelligence that is used in a Class IIa or higher medical device, or that profiles individuals with respect to their health or health insurance, is considered high risk and subject to regulation. The AI Act also adds multiple new requirements medtech firms must follow on top of those specified in the Medical Device Regulation (MDR) and In Vitro Diagnostic Medical Device Regulation (IVDR). As the AI Act takes effect, companies should monitor the regulations and how they align with existing MDR/IVDR requirements.

With guidance being published from various sources, device makers must continuously monitor these developments to ensure their devices remain marketable.

Understanding the goals of these guidances may help healthcare providers better recognize the priorities they share. Generally speaking, the various proposals focus on patient safety. However, safety is a multi-faceted consideration.

One facet of safety is that healthtech AI tools be effective and reliable for their intended use, generating outputs (diagnoses, recommendations, detected events/deviations, classifications, etc.) that are medically correct and robust. Importantly, medically correct diagnoses must be right for a wide array of patient populations, not just a few. Health equity and reducing or eliminating bias have been widespread goals in healthtech for years, and continue to be in AI regulation—as reflected in HHS’s liability rule.

An important element of bias is the inadvertent reinforcement of existing biases, which occurs when a tool’s training data reflect unequal conditions in a population and thus cause the tool to incorporate the pre-existing bias into its analysis. Accordingly, while data sufficiency for underrepresented populations is an important part of training a reliable and trustworthy system, the precise nature of that data and the kinds of relationships the AI system learns from it must also be well supervised.
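
For developers, one way to supervise this in practice is to evaluate a model separately for each patient subgroup, since aggregate metrics can mask gaps. The minimal Python sketch below illustrates that idea; the model, data frame, and column names are hypothetical, and the metrics are examples rather than any regulator’s required measures.

```python
# Minimal sketch of a per-subgroup performance audit (illustrative only).
# Assumes `model` is a fitted binary classifier with predict_proba(), and
# `df` holds feature columns, a label column, and a demographic column --
# all hypothetical names.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def audit_by_subgroup(model, df, feature_cols, label_col, group_col):
    """Return per-subgroup metrics so gaps hidden by aggregate scores surface."""
    rows = []
    for group, subset in df.groupby(group_col):
        y_true = subset[label_col]
        y_prob = model.predict_proba(subset[feature_cols])[:, 1]
        y_pred = (y_prob >= 0.5).astype(int)
        rows.append({
            "group": group,
            "n": len(subset),                             # flags thin data
            "sensitivity": recall_score(y_true, y_pred),  # missed cases
            "auc": roc_auc_score(y_true, y_prob),
        })
    return pd.DataFrame(rows)

# A large sensitivity gap between subgroups may indicate the training data
# encoded unequal conditions rather than true clinical differences.
```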

Safety also encompasses transparency and interpretability. Until recently, discussions focused on AI model explainability—i.e., the ability to determine how inputs cause a particular output based on the model’s mechanics and analysis. A current target is interpretability: the human ability to understand cause and effect in the model (how different inputs affect both the model and its outputs, and other input-output relationships). A key consideration with interpretability is ensuring a human is positioned to do the interpreting alongside the AI system and the patient. This helps ensure a human can understand the AI system’s work and, when appropriate, change the patient care the AI system directs. There are many examples of AI systems yielding incorrect or imprecise analyses for particular patients, and many examples of clinicians deferring to AI, so ensuring the human is, and feels, empowered to make that change is important.
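
To make the distinction concrete, model-agnostic probes such as permutation importance let a reviewer see which inputs drive a model’s outputs without opening up its internals—one purely illustrative way to support human interpretation. The sketch below uses scikit-learn’s permutation_importance on a hypothetical fitted model and validation set.

```python
# Illustrative interpretability probe using permutation importance.
# `model`, `X_valid` (a DataFrame), and `y_valid` are hypothetical: a
# fitted estimator and a held-out validation set.
from sklearn.inspection import permutation_importance

result = permutation_importance(
    model, X_valid, y_valid,
    n_repeats=10,        # reshuffle each feature 10 times for stability
    scoring="roc_auc",
    random_state=0,
)

# Features whose shuffling degrades the score the most are the ones the
# model leans on. A clinician reviewer can then sanity-check that those
# drivers are medically plausible (e.g., lab values) rather than
# artifacts (e.g., a scanner ID or site code).
ranked = sorted(
    zip(X_valid.columns, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, mean_drop in ranked:
    print(f"{name}: mean score drop {mean_drop:.3f}")
```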

The security and privacy of AI systems are also of paramount importance, especially when the system is processing health information.

In 2022 and 2023, AI technology commanded the attention of the public and the medtech industry, prompting companies to develop deep product expertise. This year, by contrast, discussions are revolving around healthtech AI best practices, oversight, and regulation. Some pundits expect AI enthusiasm to wane, but given that the technology and its attendant risks are likely here to stay in some form, discussions around AI guidance from various sources may well continue into 2025. Developers and deployers of AI systems, and their counsel, should monitor these efforts to ensure their products will be ready for the market and for any scrutiny from regulators or customers’ procurement teams.


Andrew (A.J.) Tibbetts is a shareholder in the Intellectual Property & Technology practice group in Greenberg Traurig’s Boston office. He leverages his prior experience as a software engineer to provide practical IP strategy counseling on matters related to computer- and electronics-implemented technology across various industries, including healthtech, life sciences AI, computational biology, medical records analysis/coding, medical devices, and more. He can be reached at andrew.tibbetts@gtlaw.com.
