Project CANAIRI Correspondence Published
What’s the difference between a traditional silent trial and what CANAIRI is calling ‘translational trials’?
We’ve heard from our consumers that the word ‘silent’ seems secretive and doesn’t encourage trust.
‘Silent’ trials have referred to the prospective, technical validation of an ML model in its intended use setting. They’re ‘silent’ because the model outputs are not affecting care. But when you look at what’s happening in the literature, it seems as though they’re not always ‘silent’ in this sense.
Another point is that CANAIRI is not trying to reinvent what a traditional silent trial does; we're trying to expand what we do during this particular phase of translation. 'Translational trials' are expressly this: an evaluative component of the translation pathway. When the goal is to make something clinically useful, you need to evaluate it in ways that are relevant to that goal. It's not testing for the sake of testing; there is a normative component.
A translational trial is the set of evaluative and normative practices that should be undertaken to facilitate responsible translation of AI tools.
So, typical silent-phase evaluations have focused on:
Testing model performance prospectively: measuring the model's accuracy, true/false positives and negatives, and errors, sometimes with an assessment of bias
Issues of integration (where and how data move from the medical record to the AI model and back), by contrast, are often encountered only at the end of the testing period, and sometimes mean having to go back and change the model.
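To make the prospective performance measurement concrete, here is a minimal sketch of the kind of summary a silent trial might compute once model predictions have been paired with real outcomes. This is an illustrative example, not a CANAIRI artifact: the function names and the binary-label setup are assumptions.

```python
# Hypothetical sketch: summarizing a model's prospective performance
# from paired (true outcome, model prediction) binary labels.
# All names and data are illustrative, not from CANAIRI.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def performance_summary(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from the confusion counts."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,  # true positive rate
        "specificity": tn / (tn + fp) if (tn + fp) else None,  # true negative rate
    }
```

In a real silent trial these numbers would be computed over the model's intended-use population, on prospectively collected data, while the outputs remain hidden from clinicians.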
A translational trial might measure:
The model: testing model performance prospectively by measuring the model's accuracy, true/false positives and negatives, and errors
Bias and equity: measuring model performance and outcomes across many groups of patients
Human-computer interaction: minimizing risks of model misuse by carefully designing how the model predictions appear to clinicians
Ethics: considering the larger problem formulation, choice of design parameters, considerations for vulnerable groups, consent issues, and transparency
Integration and human factors: mapping the data flow and the clinician's workflow to consider where the model predictions will fit, what information is present at the time, and how the prediction would be actioned
Cybersecurity and IT: ensuring data protections, maintaining data integrity, and having a plan for potential downtime
Environmental: measuring the energy demands of the system
Patients and public: bringing patients in early in the translation process to incorporate their perspectives, identify potential risks, and develop an integration plan
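The bias and equity item above, measuring model performance across groups of patients, can be sketched as a simple disaggregated check. This is a hypothetical illustration: the group labels, the choice of sensitivity as the metric, and the 0.1 gap threshold are all assumptions, not CANAIRI recommendations.

```python
# Hypothetical sketch of a bias/equity check: comparing a model's
# sensitivity across patient subgroups. Group labels and the 0.1
# gap threshold are illustrative assumptions.
from collections import defaultdict

def sensitivity_by_group(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for group, t, p in records:
        if t == 1:
            if p == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

def flag_equity_gap(records, max_gap=0.1):
    """Return (flagged, per-group sensitivities); flagged is True when
    the largest between-group sensitivity difference exceeds max_gap."""
    sens = sensitivity_by_group(records)
    return max(sens.values()) - min(sens.values()) > max_gap, sens
```

A real translational trial would go further: disaggregating several metrics, accounting for small subgroup sizes, and deciding with stakeholders which gaps matter clinically.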
We are working with the global community to put together the practices that should form the translational trial, and provide some guidance on when you might want to do each one and how.
This way, translational trials can be the 'canary in the coal mine,' signaling safety or danger to help us more responsibly integrate AI tools or take them offline. They allow us to build an empirically grounded risk assessment and identify risk mitigation efforts based on the real-world context. And by focusing on the whole package of translation, rather than individual pieces one after the other, we think we can streamline translation.
We are still in the early stages of the developing story of AI integration in healthcare. The community continues to evolve in testing and verifying AI tool performance. CANAIRI's goal is to develop guidance that helps any health setting wishing to use AI tools become prepared and competent, so that those tools work for their patients and health staff in their local context.
We also want to develop resources for consumers so that the silent trial is not so 'silent' anymore. These resources should improve health AI literacy, equip patients with questions they can ask, and help people feel confident asking about AI and making sure that AI-enabled care still fits with their values. As we progress, we will provide opportunities for the public to engage with our work and give continual feedback, so that the outcome of CANAIRI reflects a vision for accountable AI integration that is trustworthy and accessible to all.