Trust is tough these days.
What with all the fake and deep fake…everything.
If there was ever any wonder whether we need it, recent global events (pandemic, wars, elections, and otherwise) emphasize that trust is the glue for lots of things.
We trust in health care, and even sickcare, to deliver on several axes. Among them is having timely (early if possible) awareness of the futurescape risks to our wellbeing. A second, related to managing risk, is optimizing healthy years. And superseding both are safety and efficacy, the guiding windage for interventions, recommendations, and best practices since the Hippocratic Oath: do moral and ethical things that help, but in all cases, do no harm.
There are many definitions and characterizations of trust and its elemental constructs.
To help motivate the discussion here of the trust gap in artificial intelligence (AI) in medicine, consider this straightforward ontology from a recent HBR article, which cites three elements of trust:
- Relationships (most important)
- Judgement
- Consistency
Good to go? That all we need? Uh, nope. Those are for people trust…with humans on both sides of the trust transaction.
Trust, as a singular concept, belies and conflates a lot of nuanced constructs on the path to building trust itself. At its most intimate, trust is an artifact of human belief and experience: you cannot parse the world, deciding between “fight-or-flight” and lowering your shields, without the notion of trust. Nowhere in the standard trust equation, however, is technology, much less artificial intelligence.
However, some crosswalks may ease the tech angst. Technology, like humans, has a lifecycle. Reflecting upon one popular lifecycle pattern for novel technologies (the Gartner Hype Cycle), is it the case that the “slope of enlightenment” is simply, or in part, either the nascent rise or the return of trust in the technology of interest, following earlier miscrafted attempts to understand and use it?
Enter the seemingly inexorable need to trust technology, and AI in health care specifically, in its many forms, because we (our brains, our vigilance and surveillance) can’t be everywhere, all the time intelligencing.
Indeed, if our daily experience is divided into low-level tasks running in the background amid things requiring our focus and attention, we don’t even really want to be everywhere, all the time.
System 2 intellectual tasks are exhausting, and once System 1 processes (I prefer calling recurring low-level tasks repetitive rather than robotic) summate beyond a certain critical mass, as they have in health care workflows, they threaten to take up all the “extra” focused creative, analytical, and diagnostic thinking time.
So, apparently, trust in AI technology, an artificial (not human) trust, may forcibly disaggregate the relationship element from among the elements of trust.
But it has to go somewhere, right?
That comforting energy of a trust relationship with anything we hold close.
It seems natural for it to suffuse the physician-patient relationship where we already trust our doctors with stuff we don’t know about completely—what medication is best, what test is needed, what treatments are safe and have efficacy.
Will we then trust our physician, with AI as well, to proxy our AI trust or perform as our algorithm fiduciary?
Maybe…but doctors are not trained in AI, and are still struggling with the advent of electronic medical record technology in workflow.
EMRs, sadly, may have torched (if not burned outright) a bridge to AI in the near and medium-term.
Nonetheless, a trust triad is created.
This is where our trust as clinicians, acting on behalf of and along with patients, anchors the adjudication of judgement and consistency (the other two elements) for whatever AI solution is being weighed for its accretive contribution to awareness, intelligence, wisdom, and decision-making. (Noted: some AI will have to be trusted directly by patients and evolve from a social type of crowd trust, i.e., adoption and vetting by digital natives.)
Judgement is already baked into model performance measures (confusion matrix, AUC, PPV, fire rates, calibration graphs).
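To make that concrete, here is a minimal sketch, assuming a scikit-learn style workflow with synthetic data standing in for a real clinical cohort; the model, threshold, and dataset are illustrative placeholders, not any particular deployed system:

```python
# Illustrative only: a toy binary "risk model" evaluated with the measures
# that encode the judgement element (confusion matrix, AUC, PPV, fire rate,
# calibration). Synthetic data stands in for a real clinical cohort.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score, precision_score
from sklearn.calibration import calibration_curve

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_prob = model.predict_proba(X_test)[:, 1]      # predicted risk scores
y_pred = (y_prob >= 0.5).astype(int)            # one illustrative threshold

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()  # confusion matrix
auc = roc_auc_score(y_test, y_prob)                        # discrimination
ppv = precision_score(y_test, y_pred, zero_division=0)     # PPV
fire_rate = y_pred.mean()                                  # how often it alerts
frac_pos, mean_pred = calibration_curve(y_test, y_prob, n_bins=10)  # calibration

print(f"AUC={auc:.2f}  PPV={ppv:.2f}  fire rate={fire_rate:.1%}")
```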
Consistency, in turn, can be addressed with assessments of model drift and data drift. AI can therefore reasonably be judged, in a fit-for-purpose manner, along the remaining two elements of trust.
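In the same hedged spirit, consistency can be sketched as routine drift checks; the function names below are mine for illustration, not an established API:

```python
# Illustrative only: simple checks for the consistency element.
# Data drift: has a feature's distribution shifted since training?
# Model drift: has discrimination on recent cases degraded?
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

def data_drift(baseline_feature, recent_feature, alpha=0.01):
    """Kolmogorov-Smirnov test: flag drift when recent values no longer
    look drawn from the training-time distribution."""
    _, p_value = ks_2samp(baseline_feature, recent_feature)
    return p_value < alpha

def model_drift(y_recent, prob_recent, baseline_auc, tolerance=0.05):
    """Flag drift when AUC on recent cases falls well below the baseline."""
    return roc_auc_score(y_recent, prob_recent) < baseline_auc - tolerance

# Toy usage with synthetic numbers:
rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 5000)   # feature values at training time
recent = rng.normal(0.4, 1, 500)    # the same feature in production
print("data drift detected:", data_drift(baseline, recent))
```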
Worthy of note, we already trust a lot of technology.
We trust the robotic surgical systems now in use not to make a rogue or unintended movement that is not guided by the surgeon.
We trust the ICU ventilator to deliver every breath, no hiccups or pauses.
It is unlikely that every doctor or every patient understands the inner workings and programming code that control those technologies, which creates a “black boxedness” for those technologies, too. And yet they are already adopted, somehow having vaulted the trust threshold.
Arguably, one of the best “measures” of a technology is when trust escalates to the point that awareness of the technology’s presence is suppressed: it fades into the background behind the urgency or desire to achieve a result, to do something.
Will we soon arrive at that level of trust?
The trust gap annealed?
Thought leaders have been saying for some time that the best approach will require a culture shift.
That trust and adoption of AI in medicine evolve best not from startup founders whose origin story includes some family member harmed or disadvantaged by one of healthcare’s many fractious problems, but from a systems design approach, because current healthcare is a complex system, a complex socio-technical environment.
The necessary shift is that AI must meaningfully integrate into the longitudinal care of patients—anything less is just another narrow-scope point solution.
To accomplish that, physicians have to be included and involved early and continuously in the design process.
Sadly, that is not the current culture of AI development. If it were, physicians could incrementally develop trust in AI solutions and see, along the way, how the AI is built and how it yields insights.
There would be no, or less, “black boxedness,” as cited by the National Coordinator for Health IT.
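As one hypothetical illustration of what reduced “black boxedness” can look like in practice (my sketch, assuming a simple tabular model; not the coordinator’s prescription or any vendor’s method), a permutation-importance readout shows clinicians which inputs actually drive a model’s output:

```python
# Illustrative only: permutation importance as one way to peek inside a
# model. Shuffling each feature and measuring the score drop yields a
# readable ranking of what the model actually relies on.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: mean importance {result.importances_mean[i]:.3f}")
```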
That culture shift should ideally be deliberate, careful, and diligent. Otherwise, we might end up with things like autonomously driving…bicycles.