Regulatory Perspectives

Meeting the New Standard for Safety Data Quality

Manufacturers must evolve from retrospective data collection to a more proactive, intelligence-driven approach.

By: Deepanshu Saini

Director of Program Management, IQVIA

Photo: TStudious/Shutterstock

A fundamental shift is underway in the medtech industry. Driven by a broader push for transparency and accountability, this shift demands that clinical and real-world data provide a clear, defensible picture of safety performance throughout the product lifecycle. This change in mindset moves past the volume of data collected and focuses on the quality of the evidence submitted. 

Today’s medical device manufacturers are tasked with navigating more complex reporting environments as well as expanding data channels that are reshaping how adverse events are detected and documented. To satisfy regulators’ rising expectations, many organizations are adopting artificial intelligence (AI) to strengthen their ability to capture signals, consolidate fragmented inputs, and ensure that submitted safety data supports informed regulatory decisions.

This new environment presents an opportunity to modernize safety processes across development, post-market surveillance, and customer support operations. It also calls for a renewed focus on the data practices that underpin regulatory interactions.

Clear, Complete Regulation

Over the past decade, safety reporting standards have steadily expanded. Published in early 2025, the U.S. Food and Drug Administration's (FDA) draft guidance on the use of AI in drug and biological product submissions articulates the importance of methodological transparency, traceable evidence, and validation of AI systems used to support regulatory decisions. While this guidance is most directly relevant to therapeutics, the same principles align with expectations for device manufacturers.

Incomplete or inconsistent inputs, gaps in trial design, unreported malfunctions, and missing follow-up information all prevent FDA reviewers from understanding the full risk profile of a medical device under scrutiny. Without that comprehensive view, manufacturers risk seeing their submissions stall or, worse, fail.

As new digital features, sensors, and software are added to medical products, the range of potential reportable events grows. These events often surface in unstructured environments, including call center transcripts, support programs, and publicly available online content. Regulators are making it abundantly clear that organizations should prioritize capturing and consolidating all relevant information to meet the standards that define the current safety landscape.

Detecting Subtle Signals Across Disconnected Channels

The channels through which medical device manufacturers receive safety signals extend far beyond traditional clinical documentation. Patient narratives may surface in natural language during support calls or in digital forums, while clinicians may describe device performance issues in jargon that does not align with predefined reporting categories. Because of these differences in how events are described, subtle malfunction patterns can go undetected without systematic analysis of heterogeneous data.

Advanced systems can analyze large volumes of unstructured content and extract medically relevant details that contribute to more complete safety assessments. This includes the detection of medication errors, device performance concerns, or lack-of-efficacy indicators communicated informally by users. AI systems can also distinguish between benign statements and clinically meaningful events by interpreting the contextual cues within a narrative.

Compared with the antiquated approach of reviewing each transcript, document, or recording manually, AI-empowered safety teams can work from structured information that reflects a comprehensive view of each reported experience. With far less risk of overlooking issues, teams can consistently produce more complete submissions.

Improving Safety Record Quality and Transparency

A central element of public trust is ensuring that medical technology reports are both accurate and transparent. When adverse events go undetected or are inconsistently documented, perceptions of a product, patient confidence, and clinical decision-making can all suffer. AI-driven consolidation brings together signals that originate across separate systems, creating a more unified safety record and allowing organizations to gain a clearer understanding of real-world product performance.

From this vantage point, safety teams can utilize their expertise to pinpoint trends earlier and evaluate whether additional testing or corrective actions are needed, as well as engage in more meaningful conversations with regulators. 

Human Expertise Is Key to Safety Judgments

While there is no debate about AI’s ability to process vast amounts of information, concerns remain about how the technology interprets the data it reviews. The most effective response to these concerns is to keep seasoned human experts in the loop. Dedicated safety professionals should be responsible for interpreting ambiguous cases, evaluating clinical nuance, and determining whether a signal meets reporting requirements. AI can identify a potential device malfunction or an anomalous pattern, but only a trained professional can determine its regulatory significance.

To ensure that AI systems operate within their intended scope and capabilities, oversight must be strong and consistent. Because commercially available AI and large language models may not handle device-specific terminology or the latest clinical literature reliably, validation and monitoring are critical components of any implementation. Successful AI adoption programs establish clear audit trails, robust quality controls, and transparency into how outputs are generated.

Advancing Toward a More Proactive Safety Model

Regulatory and safety expectations will continue to rise, which means manufacturers must evolve from retrospective data collection to a more proactive, intelligence-driven approach. By applying AI, organizations can detect safety issues earlier, reduce the risk of incomplete regulatory submissions, and strengthen the rigor of post-market surveillance. These advances support the shared goal of improving patient safety across the lifecycle of medical devices.


Deepanshu Saini is a global pharmacovigilance (PV) leader with more than 12 years of experience driving AI-powered transformation across drug safety and life sciences. As director of Program Management at IQVIA, he leads the delivery of Vigilance Detect, a global practice that helps top pharmaceutical companies modernize safety operations through advanced AI/ML technologies, automation, and digital workflows. He has successfully led client projects in the areas of PV remediation, PV automation, digital data research, and digital governance. He has authored multiple white papers examining the use of online patient data to monitor real-world outcomes and study treatment patterns.
A recognized thought leader on the application of AI and natural language processing in drug safety, Saini has been published in leading journals and industry media, including the Journal of Medical Internet Research, Pharmaphorum, and Drug Discovery & Development. Known for his focus on client impact, transformation, and people development, Saini is a trusted advisor to senior stakeholders and a passionate mentor to cross-functional global teams. He is also a strong advocate for diversity, inclusion, and belonging in the workplace. Saini earned an MBA from the Indian School of Business in Hyderabad, India, and a bachelor of engineering degree from the Birla Institute of Technology and Science (BITS) in Pilani, India.
