Project CANAIRI Correspondence Published
The Project CANAIRI Team met with seven youth recruited through an email list of people who had consented to hear about AI research at The Hospital for Sick Children.
Topics included: silent and translational trials – what they are, what Project CANAIRI is proposing, and CANAIRI's goals; two use cases; and what is most important for translational trial evaluations.
Theme 1: Data Privacy and Trust
Youth readily recognized the value that data offers for personalized care. They reported considerable skepticism about how private companies currently manage their personal information, and described a lack of choice around whether to engage with digital tools. They also reported some disbelief when companies insist they are keeping data private. Youth agreed that factors supporting trust included oversight by a medical or university research team without profit motives, and data that stayed locally in the health record. They wanted to know whether there are contingency plans for dealing with potential breaches, hacks, or other situations where their personal information is revealed. There is an expectation that because it is health information, it should remain private and not be used in the ways private companies currently use personal data.
A core value advocated broadly was respect for people's information, shown by honouring their choices about data sharing. Youth supported a 'just ask' model, where their permission is sought before their health data are accessed. This practice is not a matter of risk mitigation, but of respect for the patient and for the sensitivity of the information.
We are reminded of one parent’s words, “To you that’s just a scan. To me, that’s the worst day of my life.”
Theme 2: Safeguards and Human Judgment
The notion of safeguards came up repeatedly, for two reasons: (1) the recognition that algorithms “can’t know everything”, and (2) the fact that we are still learning about AI, which is changing, evolving, and sometimes unpredictable. Youth highlighted the importance of testing first against the human standard, and asked, “how do you know you’ve tested enough, that it’s good enough?” They mentioned reliability and accuracy, and whether a tool works for everyone (for example, whether dictation tools handle different accents). We discussed how there are no specific standards on this point, and how some of the larger regulatory structures that set such standards are not applied to many AI tools in healthcare. Youth reinforced the importance of human judgment as a safeguard: the clinician responsible for their care, who has their best interests at heart, would decide. At the same time, others noted that humans aren’t perfect and that technologies can outperform us at specific tasks, so we need to balance the evidence. They rejected using AI for the sake of AI, on the grounds of efficiency (“if it takes them longer, why not just use their brains?”) and environmental impact (“if the environmental impact is huge, it might not be worth it”).
Theme 3: Personalization and Lack of Representativeness
Youth strongly supported a vision of personalization and equity as fundamentally intertwined: we must see people as individuals, which includes markers of identity. Many highlighted underrepresentation as a core problem in medicine, research, and society at large, and recognized that AI would replicate this problem. They endorsed the idea that some AI tools should use variables such as race, ethnicity, and gender to personalize care.
Youth reported feeling under-prepared and under-supported in navigating AI in healthcare; nonetheless, they felt better prepared than their parents. They advocated for mandatory education, with the government taking a bigger role in preparing citizens. They discussed delivering that education through multiple modalities, meeting people where they already seek information rather than relying on outdated means of communication. Regarding representation and bias, youth suggested a website describing the population an AI tool has been tested on, so that its diversity can be seen. There was strong consensus that patients should be informed when AI is being used as part of their care.
When we explored ideas about responsibility, youth discussed different kinds of responsibility resting with different roles. Doctors are responsible for patient care decisions, but hospitals and AI developers share responsibility for making sure AI tools are safe and effective. Youth understood that technologies carry financial costs, which they felt should be absorbed when the benefit is great enough; this factor alone should not determine whether an AI tool is integrated. They grounded accountability in the impact the tool has, whether on patients or on clinicians. Youth wanted assurance that an AI tool had “additive value” in some regard, and that it was justified in proportion to its environmental impact. When asked to weigh different aspects of evaluating AI (efficiency and accuracy, human-computer interaction, fairness, environmental impact, and financial cost), they favoured human-computer interaction the most. They also stressed environmental aspects as extremely important to determining the ‘net benefit’ of an AI tool, alongside clinicians’ experience with it and whether it helps patients.
1. We're advocating for fairness as a key component of translational trials
2. We're advocating for transparency around the conduct of translational trials
3. We're exploring ideas around 'value' and how this is viewed by different stakeholders
4. We're asking CANAIRI participants about how they value aspects like environmental impact
5. We're holding another workshop to co-design knowledge resources to help patients feel empowered to ask questions