Quick summary: Brian W Young, MD, MBA discusses resolving the trust gap for artificial intelligence in medicine.

Trust is tough these days.

What with all the fake and deep fake…everything.

If there ever was wonder whether we need it, recent global events (pandemic, wars, elections and otherwise) emphasize that trust is the glue for lots of things.

We trust health care, and even sickcare, to deliver on several axes. Among them is timely (early, if possible) awareness of the futurescape risks to our wellbeing. Second, and related to managing risk, is optimizing healthy years. Superseding both are safety and efficacy, the guiding windage for interventions, recommendations, and best practices since the Hippocratic Oath: do moral and ethical things that help, but in all cases, do no harm.

 

There are many definitions and characterizations of trust and its elemental constructs.

To help motivate the discussion of the trust gap in artificial intelligence (AI) in medicine here, this straightforward ontology of trust from a recent HBR article cites three elements of trust:

  • Relationships (most important)
  • Judgement
  • Consistency

Good to go? Is that all we need? Uh, nope. Those elements are for people trust, with humans on both sides of the trust transaction.

Trust, as a singular concept, belies and conflates many nuanced constructs on the path to that final element of trust. At its most intimate, trust is an artifact of human belief and experience: you cannot discern the world, dividing it into "fight or flight" or lowering your shields, without the notion of trust. Nowhere in the standard trust equation, however, is technology, much less artificial intelligence.

However, some crosswalks may ease the tech angst. Technology, like humans, has a lifecycle. Reflecting on one popular lifecycle pattern for novel technologies, the Gartner Hype Cycle: is the "slope of enlightenment" simply, or in part, the nascent rise, or the return, of trust in a technology of interest, following early, miscrafted attempts to understand and use it?

 

Enter the seemingly inexorable need to trust technology, and AI in health care specifically, in its many forms, because we (our brains, our vigilance and surveillance) can't be everywhere, all the time, intelligencing.

 

Indeed, if our daily experience is divided into low-level tasks running in the background and things requiring our focus and attention, we don't really want to be everywhere, all the time.

System 2 intellectual tasks are exhausting, and System 1 processes (I prefer calling recurring low-level tasks repetitive rather than robotic), once they summate beyond a certain critical mass, as they have in health care workflows, threaten to consume all the "extra" time for focused creative, analytical, and diagnostic thinking.

 

So, apparently, trust in AI technology, an artificial (not human) trust, may forcibly disaggregate the relationship element from among the elements of trust.

But it has to go somewhere, right?

That comforting energy of a trust relationship with anything we hold close.

It seems natural for it to suffuse the physician-patient relationship where we already trust our doctors with stuff we don’t know about completely—what medication is best, what test is needed, what treatments are safe and have efficacy.

Will we then trust our physician with AI as well, to proxy our trust in AI or to act as our algorithm fiduciary?

Maybe…but doctors are not trained in AI, and are still struggling with the advent of electronic medical record technology in workflow.

EMRs, sadly, may have torched (if not burned outright) a bridge to AI in the near and medium-term.

 

Nonetheless, a trust triad is created.

This is where our trust as clinicians, acting on behalf of and along with patients, anchors the adjudication of the AI solution's judgement and consistency, the other two elements, for whatever AI solution is being weighed for its accretive contribution to awareness, intelligence, wisdom, and decision-making. (Noted: some AI will have to be trusted directly by patients and evolve from a social, crowd type of trust, i.e., adoption and vetting by digital natives.)

 


 

Judgement is already baked into model performance measures (the confusion matrix, AUC, PPV, fire rates, calibration curves).

And consistency can be addressed with assessments of model drift and data drift. AI can therefore reasonably be judged, in a fit-for-purpose manner, along the remaining two elements of trust.
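To make those two elements concrete, here is a minimal from-scratch sketch in Python. The labels, predictions, and feature values are entirely hypothetical (no specific clinical model or vendor is implied): PPV derived from a confusion matrix stands in for the judgement element, and a crude mean-shift comparison of a feature between training and deployment stands in for the consistency (data drift) element.

```python
# Minimal sketch of the "judgement" and "consistency" trust elements as
# plain metrics. All data below are hypothetical, for illustration only.

def confusion(y_true, y_pred):
    """Return (tp, fp, fn, tn) counts for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def ppv(tp, fp):
    """Positive predictive value: of all positive calls, how many were right."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def mean_shift(reference, current):
    """Crude data-drift signal: shift in the mean of one input feature."""
    return abs(sum(current) / len(current) - sum(reference) / len(reference))

# Judgement: score the model's calls against ground truth.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
tp, fp, fn, tn = confusion(y_true, y_pred)
print(ppv(tp, fp))  # 0.75

# Consistency: compare a feature's distribution at training vs. deployment.
train_feature = [0.9, 1.1, 1.0, 1.2, 0.8]
live_feature = [1.4, 1.6, 1.5, 1.7, 1.3]
print(mean_shift(train_feature, live_feature))  # 0.5
```

In practice these checks run continuously against a deployed model, and a drift signal above some agreed threshold would trigger re-validation; the point is that both elements reduce to measurable, auditable quantities.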

 

Worthy of note, we already trust a lot of technology.

We trust the robotic surgical systems now in use not to make a rogue or unintended movement that is not guided by the surgeon.

We trust the ICU ventilator to deliver every breath, no hiccups or pauses.

 


 

It is unlikely that every doctor or every patient understands the inner workings and programming code that control those technologies, creating a "black boxedness" for those technologies, too. And yet those are already adopted, having somehow vaulted the trust threshold.

Arguably one of the best "measures" of technology is when trust escalates to a level where awareness that technology is in use is suppressed: it fades into the background behind the urgency or desire to achieve a result, to do something.

 


 

Will we soon arrive at that level of trust?

The trust gap annealed?

Thought leaders have been saying for some time that the best approach will require a culture shift.

That trust and adoption of AI in medicine evolves best not from startup founders whose origin story includes a family member harmed or disadvantaged by one of healthcare's many fractious problems, but from a systems design approach; current healthcare is a complex system, a complex socio-technical environment.

 


 

The necessary shift is that AI must meaningfully integrate into the longitudinal care of patients—anything less is just another narrow-scope point solution.

To accomplish that, physicians have to be included and involved early and continuously in the design process.

Sadly, that is not the current culture of AI development. If it were, physicians could incrementally develop trust in AI solutions and see, along the way, how the AI is built and how it yields insights.

There would be less, or no, "black boxedness," as cited by the National Coordinator for Health IT.

That culture shift should ideally be deliberate, careful, and diligent. Otherwise, we might end up with things like autonomously driving…bicycles.


Brian W Young, MD, MBA

Artificially intelligent
