OEM News

AI in Healthcare Driving Significant Risks for Patients and IP

Security experts warn the healthcare AI revolution may be on unstable ground.

By: Michael Barbella

Managing Editor


Recent Cybernews research into S&P 500 companies reveals significant vulnerabilities in healthcare as companies increasingly incorporate artificial intelligence (AI) technology to advance diagnostics and medical discovery.

The healthcare AI market—valued at $11 billion in 2021—is projected to be worth $187 billion by 2030, according to Statista. Growth on that scale means the industry is likely to see substantial changes in how medical providers, hospitals, pharmaceutical and biotechnology companies, and other healthcare organizations operate.

However, this rapid and often uncontrolled adoption is creating a perfect storm of risks. A recent study revealed that 83% of doctors believe AI will be a net positive, but 70% have serious concerns about using it in the diagnostic process. This concern is justified: A single inaccurate medical algorithm could impact thousands of patients simultaneously, turning an isolated clinical error into a systemic healthcare crisis.

Žilvinas Girėnas, head of product at nexos.ai, says the risk of systemic failure is heightened by two key issues—data bias and the “black box” nature of many AI models.

“If an algorithm is trained on data that does not represent certain populations well, it won’t just be wrong—it will also reinforce health disparities,” Girėnas noted. “Many AI models act as ‘black boxes.’ This means they cannot show clinicians which specific data points, such as a certain lab result or a slight shadow on an X-ray, influenced their conclusion. This puts doctors in a difficult situation. They must either trust an output they cannot verify or miss a potentially life-saving insight.”

Why Healthcare Is Vulnerable to AI Risks

AI’s rapid growth has attracted the attention of security experts who warn the healthcare AI revolution may be on unstable ground. A Cybernews analysis of S&P 500 companies revealed a concerning truth: Among 44 major healthcare and pharmaceutical organizations actively using AI, researchers found 149 potential security flaws, with healthcare being one of the top three most vulnerable sectors.

Unlike other industries where AI failures mean lost revenue or damaged reputation, healthcare’s unique challenge is that every vulnerability carries the potential for real human harm.

The research also reveals that healthcare organizations face a particularly dangerous combination of risks: 28 cases of insecure AI outputs, 24 data leak vulnerabilities, and, most critically, 19 direct threats to patient safety where algorithmic errors could scale across entire hospital systems.

The stakes are just as high for intellectual property. With AI-driven drug discovery deals valued at up to $2.9 billion and development timelines often exceeding 10 years, a single leak of proprietary research through an unsecured AI tool could erase a decade of work and billions in future revenue.

“The biggest AI threat in healthcare isn’t a dramatic cyberattack, but rather the quiet, hidden failures that can grow rapidly,” Girėnas said. “These risks include data poisoning, where a tampered dataset subtly disrupts thousands of future diagnoses, and untested algorithms delivering bad recommendations across a whole hospital network. Leaders must ask themselves who is responsible when AI is wrong rather than what happens if AI is wrong. Currently, there’s a serious gap in accountability for algorithms, and without a system to monitor and manage how these tools are used, organizations are putting their patients and valuable information at serious risk.”

Practical Steps Forward

To resolve the tension between rapid innovation and critical risk, healthcare and pharmaceutical organizations must implement a dedicated AI governance layer. This approach goes beyond simply blocking tools, providing organizations with the visibility and control necessary to unlock AI’s potential safely.

According to Girėnas, the future of safe AI in healthcare depends on three basic principles, illustrated in the brief code sketch that follows this list:

  • Establish an approved list of AI tools. Create a centrally managed “whitelist” of AI models that have been vetted for clinical accuracy and safety. This ensures that clinicians use specialized tools for high-stakes tasks like diagnostics, preventing reliance on unvetted, general-purpose chatbots for patient care.
  • Make data protection automatic. Use technology that automatically finds and removes sensitive information, like patient names, health records, or proprietary research, from any query before it is processed by an AI. Data protection should be treated as a default, not as an afterthought, to ensure compliance and safeguard patient privacy.
  • Ensure every AI action can be traced. Set up a system that logs each AI query and its response, linking each interaction to a specific user and timestamp. This detailed audit trail is essential for reviewing clinical incidents, protecting patient safety, and complying with the strict accountability standards of emerging legal frameworks.
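
In practice, all three principles can converge in a thin governance wrapper around every model call. The following Python sketch is illustrative only: the model names, the call_model stub, and the regex redaction patterns are hypothetical stand-ins, and a production system would rely on a vetted PHI de-identification pipeline and tamper-evident log storage rather than this minimal logic.

```python
import re
import logging
from datetime import datetime, timezone

# Hypothetical allowlist of models vetted for clinical accuracy and safety.
APPROVED_MODELS = {"radiology-assist-v2", "pathology-triage-v1"}

# Illustrative redaction patterns only; a real deployment needs a full
# PHI de-identification pipeline, not a handful of regexes.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[REDACTED-MRN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")


def redact(text: str) -> str:
    """Strip sensitive identifiers before a query leaves the organization."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text


def call_model(model: str, prompt: str) -> str:
    """Stand-in for the vetted model endpoint; replace with the real client."""
    return f"[{model}] response to: {prompt}"


def governed_query(user_id: str, model: str, prompt: str) -> str:
    # Principle 1: refuse any model that is not on the approved list.
    if model not in APPROVED_MODELS:
        raise PermissionError(f"Model '{model}' is not on the approved list")

    # Principle 2: data protection is automatic, not an afterthought.
    safe_prompt = redact(prompt)
    response = call_model(model, safe_prompt)

    # Principle 3: every interaction is traceable to a user and a timestamp.
    audit_log.info(
        "user=%s model=%s time=%s prompt=%r response=%r",
        user_id, model, datetime.now(timezone.utc).isoformat(),
        safe_prompt, response,
    )
    return response


if __name__ == "__main__":
    print(governed_query("dr_smith", "radiology-assist-v2",
                         "Evaluate chest X-ray shadow for patient MRN: 849201"))
```

The point of this design is that the allowlist check, the redaction step, and the audit log all live at a single choke point, so no query can reach a model without passing through all three.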

nexos.ai is an AI infrastructure company providing a centralized platform for enterprises to seamlessly integrate and manage multiple AI models. Founded in 2024 by Tomas Okmanas and Eimantas Sabaliauskas, who also co-founded several bootstrapped global ventures, including the $3 billion cybersecurity unicorn Nord Security and Oxylabs, nexos.ai addresses the enterprise need to efficiently deploy, manage, and optimize AI models within organizations. Originating in the ecosystem of Lithuania-based tech accelerator Tesonet, the company attracted its first €8 million investment earlier this year from Index Ventures, Creandum, Dig Ventures, and various prominent angel investors.
