Software Solutions

AI in Clinical Software: Extending ISO 14971 with AAMI Guidance

The risk models of ISO 14971:2019 fall short for adaptive, cloud-based, and often autonomous AI systems.


Artificial intelligence (AI) is redefining how clinical software is developed, deployed, and used, enabling more precise diagnostics, optimized workflows, and real-time patient monitoring. With global market projections1 nearing $200 billion by 2030, this transformation is accelerating.

Yet as AI becomes deeply embedded in medical devices and software, especially those hosted in the cloud, it brings new opportunities while raising new risks. Traditional risk frameworks like ISO 14971:2019 were designed for conventional devices, and their models fall short for adaptive, cloud-based, and often autonomous AI systems. Under these conditions, complexity only increases as companies outsource aspects of their AI ecosystem and divide responsibilities among cloud providers, algorithm vendors, and integration partners.

That gap is precisely what new guidance fills. AAMI TIR34971:2023, developed by the Association for the Advancement of Medical Instrumentation (AAMI) with the British Standards Institution (BSI) and recognized by the FDA, provides much-needed clarity.

Crucially, the FDA's 2025 draft guidance, “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations,”2 addresses the management of these evolving AI/ML-enabled medical devices and complements the risk management principles in the AAMI TIR. The TIR itself focuses on AI/ML-specific risks and maps them onto established medical device risk management methods as defined by ISO 14971:2019.

The Emergence of AI-Driven Clinical Platforms 

The pace of AI adoption in the healthcare industry is unparalleled. The worldwide AI-in-healthcare market grew from $6.7 billion in 2020 to $22.4 billion in 2023, with forecasts reaching $208.2 billion by 2030.3

From diagnostics to personalized treatment recommendations and real-time monitoring, AI is revolutionizing how care is practiced. But the broader the adoption, the more complicated the risk picture becomes.

Pharma and healthcare companies are also turning to outside help with AI, engaging third-party developers, cloud infrastructure providers, and systems integrators. This introduces new dimensions of shared liability, diluted accountability, and compliance complexity.

AAMI TIR34971 addresses exactly this scenario, showing how the ISO 14971 lifecycle can accommodate outsourced AI components, multiple vendors and owners, and evolving algorithms. The more intelligent and agentic platforms become, the higher the stakes. A misfiring algorithm or a cloud outage is not merely an IT problem but a clinical risk.

Safeguarding against such risks entails not only updating the industry’s playbook but also adopting guidance tailored to AI.

Challenges in Overseeing AI-Powered Clinical Software 

The potential of AI also comes with hard-to-ignore pain points: models that drift over time, vulnerabilities in cloud infrastructure, opaque black-box algorithms, unclear accountability when partners are involved, and the emergence of self-directed or autonomous AI whose actions can have major clinical impact.

ISO 14971 is heavily geared toward conventional devices and software, and it was not drafted with this level of adaptivity or autonomy in mind. It offers little direction for models that are updated after deployment or for systems whose responsibility chains are fragmented.

This is where AAMI TIR34971 becomes essential. It specifically addresses risks such as data bias and changes in algorithmic performance and effectiveness, helping organizations bridge the gaps in hazard identification, risk control, and post-market monitoring.

Real-World Examples

Medical device makers have already demonstrated the value of incorporating AI into their products. The Guardian4 covered an AI stroke-diagnosis tool that reduced decision time from 140 minutes to 79 while tripling patient recovery rates. In India, deep learning is applied to monitor post-surgical brain swelling, leading to earlier intervention and better results, according to the Times of India.5 At Mass General Brigham,6 an AI copilot cut clinician documentation time by 60%, enabling providers to spend more time with patients and reducing burnout.

These successes showcase the potential of AI and why robust, AI-specific risk control is so important. Without direction such as that in AAMI TIR34971, organizations risk continuing outdated practices that regulators and patients no longer trust.

Implementation Challenges 

Adopting AI technology is not without challenges. Silos of expertise must be bridged by cross-functional teams of clinicians, data scientists, software engineers, and regulatory experts. Black-box models must be enriched with explainability and auditability measures so that every patient-impacting AI decision can be explained, validated, and attributed to its source data.
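To make auditability concrete, the snippet below sketches a per-decision audit record in Python: each output is stamped with a model version, an input fingerprint, and its top feature attributions so it can later be explained and traced. This is a minimal sketch under our own assumptions; the record fields and names (AuditRecord, log_decision, the "sepsis-risk" model) are illustrative, not prescribed by TIR34971 or any regulator.

```python
# Minimal sketch of a per-decision audit record for a clinical AI output.
# All names and fields here are illustrative, not drawn from TIR34971.
import datetime
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class AuditRecord:
    timestamp: str       # when the inference was made (UTC)
    model_id: str        # which model produced the output
    model_version: str   # exact version, so the decision is reproducible
    input_hash: str      # fingerprint of the source data (no PHI stored)
    prediction: float    # the patient-impacting output
    top_features: dict   # feature attributions for explainability review

def log_decision(model_id, model_version, features, prediction, attributions, sink):
    """Append one traceable, explainable decision to an audit sink."""
    record = AuditRecord(
        timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
        model_id=model_id,
        model_version=model_version,
        input_hash=hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        prediction=prediction,
        top_features=attributions,
    )
    sink.write(json.dumps(asdict(record)) + "\n")

# Example: append one decision to a JSON-lines audit file.
with open("ai_decision_audit.jsonl", "a") as sink:
    log_decision(
        model_id="sepsis-risk",  # hypothetical model name
        model_version="2.3.1",
        features={"lactate": 3.1, "heart_rate": 112, "wbc": 14.2},
        prediction=0.82,
        attributions={"lactate": 0.41, "heart_rate": 0.27},
        sink=sink,
    )
```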

Real-time risk monitoring, for instance, requires new tools built into DevOps and AIOps pipelines: dashboards that track algorithm performance, automated alerts for model drift or deviation, and anomaly detection across data inputs.
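One way to automate such a drift alert is sketched below: it computes a Population Stability Index (PSI), a common distribution-shift metric, between a validation-time feature sample and live inputs, then maps the score to alert levels. The thresholds (0.1 and 0.25, conventional PSI rules of thumb) and the alert wording are assumptions for illustration, not values from the AAMI or FDA documents.

```python
# Sketch of a model-drift check that could run inside an AIOps pipeline.
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between baseline and live feature samples."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def check_drift(baseline, live, warn=0.1, alert=0.25):
    """Classify drift severity; in production this would page the risk team."""
    score = psi(baseline, live)
    if score >= alert:
        return score, "ALERT: significant drift, trigger risk review"
    if score >= warn:
        return score, "WARN: moderate drift, monitor closely"
    return score, "OK"

# Example: compare a validation-time distribution against shifted live inputs.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # distribution at validation
live = rng.normal(loc=0.4, scale=1.2, size=5000)      # shifted live population
score, status = check_drift(baseline, live)
print(f"PSI={score:.3f} -> {status}")
```

In practice, a check like this would run on a schedule for each monitored feature and model output, feeding the dashboards and alerts described above.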

Outsourcing complicates all of this further. Third-party AI modules, cloud providers, and integration partners each carry their own risk profiles. Responsibility for risk, maintenance of updates, and response in the event of error or system failure should be clearly set out in contracts.

In the absence of specific contractual arrangements, companies can still be exposed to liability when an AI tool fails to perform as intended, even if the failure is triggered by a third-party vendor or service provider.

The regulatory landscape for adaptive AI is rapidly formalizing, but international adoption still presents challenges. Healthcare organizations must navigate a patchwork of guidelines in Europe, Canada, the U.S., and elsewhere, some still under development. Businesses with global scope face an ever-shifting set of rules, so TIR34971 acts as a necessary unifying foundation until full international standards are developed.

Best Practices for Adoption

Where organizations are prepared to act, there are a handful of practical steps that can make AAMI’s guidance actionable today:

  • Upgrade risk registers to account for AI-specific risks such as data bias, drift, and explainability gaps (a sketch of such an entry follows this list).
  • Build real-world feedback loops into deployed cloud infrastructure so live model behavior continuously informs risk review.
  • Fortify supplier contracts with explicit responsibilities for accountability, data sharing, and remediation.
  • Invest in explainable AI (XAI) to increase trust from clinicians and regulators.
  • Provide human-in-the-loop safeguards for AI outputs with substantial clinical impact.
  • Perform simulation testing to evaluate AI performance on rare or at-risk patient populations.
  • Roll out new versions and make every model update traceable. 
  • Create AI governance committees to oversee risk management, compliance, and ethical use throughout the organization.
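To make the risk-register and traceability items concrete, the sketch below shows one shape an AI-aware register entry might take, extending a conventional severity-times-probability record with model version, drift monitoring, and explainability fields. All field names are illustrative assumptions; neither ISO 14971 nor TIR34971 prescribes a schema.

```python
# Sketch of an AI-specific risk-register entry; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    risk_id: str
    hazard: str                  # e.g., training-data bias, performance drift
    harm: str                    # the clinical harm that could result
    severity: int                # 1 (negligible) .. 5 (catastrophic)
    probability: int             # 1 (improbable) .. 5 (frequent)
    model_version: str           # ties the risk to a specific deployed model
    drift_monitored: bool        # is this risk covered by live monitoring?
    explainability_control: str  # how a human can review the output
    mitigations: list = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        # Simple severity x probability score, as in many ISO 14971 registers.
        return self.severity * self.probability

# Example entry tying a drift hazard to a specific model version.
entry = AIRiskEntry(
    risk_id="AI-014",
    hazard="Performance drift after a model update",
    harm="Delayed escalation of a deteriorating patient",
    severity=4,
    probability=2,
    model_version="2.3.1",
    drift_monitored=True,
    explainability_control="Clinician reviews top feature attributions",
    mitigations=["PSI drift alerting", "human-in-the-loop sign-off"],
)
print(entry.risk_id, "score:", entry.risk_score)
```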

These practices do more than ensure compliance; they also build trust with clinicians who may otherwise be reluctant to rely on AI-driven recommendations.

Looking Ahead

Adoption of AI in clinical software will only accelerate. Over the next five years, we are likely to see further development of international standards for AI-assisted medical devices, including coverage of agentic and fully autonomous AI.

ISO 14971:2019 and AAMI TIR34971 build a bridge, illustrating how organizations can integrate these technologies safely now while preparing for more stringent regulatory requirements in the future. Early adopters of these principles stand to gain a two-fold advantage: AI solutions that are safer and more reliable, and a reputational edge in an increasingly regulated market where AI and machine-learning claims are closely scrutinized by regulators, payers, and patients.

Forward-thinking healthcare organizations will probably also invest in AI observability, predictive risk analytics, and cross-institution data-sharing frameworks to remain compliant and drive better outcomes. 

As AI and cloud technology redefine what clinical software can be, our risk frameworks must catch up. ISO 14971 remains the foundation, but AAMI TIR34971 provides a template for tailoring it to the AI era.

The message is unambiguous: waiting for regulations is not an option. By preemptively incorporating AAMI’s AI-specific recommendations, healthcare organizations can close risk gaps, satisfy regulatory requirements, and maximize the potential of intelligent clinical systems.

Embedding ethical values into AI risk management promotes fairness and transparency and builds patient trust; these will define long-term success as much as technical compliance. In this new era, extending ISO 14971 through AAMI TIR34971 is not optional or merely best practice; it is critical for safe, scalable, and sustainable innovation.

References

  1. bit.ly/mposoftware11251
  2. bit.ly/mposoftware11252
  3. bit.ly/mposoftware11253
  4. bit.ly/mposoftware11254
  5. bit.ly/mposoftware11255
  6. bit.ly/mposoftware11256



Deepak Borole is a technical project manager at Chetu Inc., a global digital intelligence and software solutions provider, where he oversees general healthcare, including medical devices, remote healthcare, and specialty healthcare.
