The importance of trustworthy AI in clinical trials

In this SelectScience interview, discover why AI in clinical data management must prioritize transparency, traceability, and reproducibility to earn trust in regulated environments

20 Feb 2026
Dr. Simone Sharma, Lead Clinical Product Manager at Revvity Signals


AI capabilities are rapidly advancing, and with them clinical development teams face a critical question: not what AI can do, but what it can do reliably, transparently, and safely.

Over the past few years, Dr. Simone Sharma, Lead Clinical Product Manager at Revvity Signals, has witnessed a shift from experimentation to operational deployment of AI-enabled solutions in clinical research organizations around the globe.

"The expectations have really shifted from experimentation mode to displaying operational readiness," shares Sharma. "Today, organizations expect more from AI, and rightly so, especially if they're going to include these technologies into their workflows. They need AI to be reliable and consistent, and they need to follow the compliance patterns, just like any other regulated system."

While some industries can rely on rapid technology iteration, clinical development operates with zero margin for error. As Sharma puts it, "Moving fast and breaking things isn't a plausible strategy in clinical development. It's actually a liability."

What clinical development teams really prioritize

There are many considerations when evaluating AI-enabled applications for clinical trial data analytics. While it may be tempting to focus on efficiency, Sharma emphasizes that speed alone is an insufficient assessment marker. "In practice, reproducibility and risk reduction tend to ultimately lead to decision-making, with transparency following close behind," she explains. "Clinical teams want confidence that the results they are getting can be reproduced, they can be explained, and they can be defended."

Critically, without reproducibility, risk reduction, and transparency, impressive time savings become meaningless. "Solutions that save time, but then introduce uncertainty, or any risk to the compliance framework are generally not going to be viewed as viable," states Sharma.

The cost of unreliability in clinical study oversight

"Clinical development, by its very nature, operates with very low tolerance for error, because patient safety is at risk," emphasizes Sharma. Any AI output that informs safety assessments, regulatory submissions, or trial decisions affecting patients must provide consistent and reproducible results. Failing to do so could jeopardize the entire study.

"Failure could mean your trial could be stopped midway, and that's after you've spent millions of dollars and a lot of effort," warns Sharma.

The dangers of general-purpose AI

Teams must also consider that loss of control is a primary risk. "Without traceability, clinical teams can't really understand how an output was generated or whether it can be reproduced," Sharma explains.

In the case of conversational AI models, there is a real risk of plausible-sounding but incorrect outputs. "Things sound smart, but there is no real evidence for it," she cautions. "We need to always know what the input is and what the output is going to be. That's why general purpose or conversational AI models won't work."


Signals Clinical is a modern SaaS solution from Revvity Signals, designed to streamline clinical data management and analytics.

This has led Sharma to advocate for purpose-built AI agents in Signals Clinical, Revvity Signals' clinical data review solution, which unifies data review, querying, and monitoring into one intuitive platform for data managers and medical monitors. The clinical trial data analytics environment is designed for studies of all sizes and stages, enabling earlier risk detection, streamlined reviews, and unlimited data insights to improve trial outcomes.

Humans in command, not just in the loop

Trustworthy AI used in clinical trials, such as that deployed in Signals Clinical, must extend beyond the concept of 'human in the loop'. As Sharma explains, "In regulated environments like ours, you want both humans-in-the-loop in terms of monitoring system performance over time, as well as humans in command with clear authority to intervene and override or stop AI-driven processes when needed."

Signals Clinical also meets data lineage requirements. "Every result can be traced back to its inputs," says Sharma. "Clinical data managers always need to make sure they know where the result has come from, what it looked like, has it changed, when it changed, who changed it, how many changes it's gone through."

Data-grounded AI prevents hallucinations

In Signals Clinical, strict guardrails mitigate hallucination risk in AI-assisted capabilities. "When you're going to ask questions, you're asking questions specifically of that data," she explains. "To the point, if you ask something innocuous like, what's the weather in Paris? It will not give you a response because we've just blocked it as the question is irrelevant to the data." This positions AI as an assistive layer rather than a decision-making engine.
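To make the idea concrete, a relevance guardrail of this kind can be sketched as a simple scope check that runs before any question reaches a generative model. This is a hypothetical illustration, not Revvity Signals' actual implementation; the field names and wording are invented for the example.

```python
# Hypothetical relevance guardrail: only questions that reference known study
# data fields are allowed through; everything else is blocked up front rather
# than passed to a generative model.

STUDY_FIELDS = {"subject_id", "visit", "adverse_event", "lab_value", "site"}

def is_in_scope(question: str) -> bool:
    """Return True only if the question mentions at least one known study field."""
    words = {w.strip("?,.").lower() for w in question.split()}
    return any(field in words for field in STUDY_FIELDS)

def answer(question: str) -> str:
    # Off-topic questions never reach the data-grounded query layer.
    if not is_in_scope(question):
        return "Blocked: question is not about the study data."
    return "routed to the data-grounded query layer"

print(answer("What's the weather in Paris?"))   # blocked as irrelevant
print(answer("Show adverse_event counts by site"))  # in scope
```

A real system would use a more robust classifier than keyword matching, but the design point is the same: the guardrail is deterministic and sits in front of the model, so irrelevant prompts are refused by rule rather than by model judgment.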

Rather than generating new content, the AI agent works directly with specific study data to produce deterministic, data-grounded outputs. In short, outputs are produced solely by summarizing, filtering, or transforming clinical data. As Sharma puts it, "We are not in the data-making business."

Finally, Signals Clinical provides a complete audit trail of all data interactions, including when data came in, who touched it, when, and what action was taken. This is essential both for regulatory submissions and for distributed teams working on data across locations and time zones.
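The audit-trail fields described above map naturally onto an append-only log of immutable records. The following is a minimal sketch of that pattern; the record fields and names are assumptions for illustration, not Signals Clinical's actual schema.

```python
# Hypothetical append-only audit trail: each interaction with a data record is
# captured as an immutable entry (who, what, when); entries are only ever
# appended, never edited, so the full history is preserved.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: an entry cannot be altered after creation
class AuditEntry:
    record_id: str   # identifier of the clinical data record touched
    actor: str       # who touched the data
    action: str      # e.g. "ingested", "queried", "edited"
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

audit_log: list[AuditEntry] = []

def log_action(record_id: str, actor: str, action: str) -> None:
    audit_log.append(AuditEntry(record_id, actor, action))

log_action("SUBJ-001/visit2", "data_manager_a", "ingested")
log_action("SUBJ-001/visit2", "medical_monitor_b", "queried")
```

In a regulated deployment the log would live in tamper-evident storage rather than an in-memory list, but the core property is the same: every result can be walked back through who changed what, and when.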

Earning lasting trust with AI

Looking ahead, Sharma believes trusted AI solutions will share key characteristics. "The AI solutions that earn trust will be reliable and transparent, reproducible, and well governed," she states.

User experience also plays a crucial role. Steep learning curves currently prevent teams from embracing new solutions, regardless of their technical capabilities. Trust ultimately comes from consistent, proven performance in real-world regulated environments.

"Trust will be driven less by innovation alone, and more by performance in the real world under regulated use," concludes Sharma. "Trust is the AI that works day in and day out, reducing friction without disrupting how organizations operate."

Dr. Simone Sharma

Simone Sharma, Ph.D., is the Lead Clinical Product Manager at Revvity Signals, where she builds innovative clinical analytics solutions for data access, integration, and advanced analytics in clinical research. Currently, she is pioneering AI innovation in clinical trials, using LLM-powered solutions to transform clinical data review workflows. With nearly a decade of experience in clinical research technology, Sharma previously served as Strategic Lead for Translational Analytics and Senior Data Analytics Application Specialist at Revvity Signals and held genomics application specialist roles at Integromics and UCL. She holds a Ph.D. from UCL and a B.Sc. in Genetics from the University of Manchester.
