
AI in health care: Building patient trust

In a timely and comprehensive overview of the use of AI in health care, delivered at the Canadian Medical Association's 2025 Health Summit held recently in Ottawa, a US physician expert stressed the need to build patient trust.

“Sometimes in our excitement for new technologies, we overlook the critical importance of building and maintaining human trust,” said Dr. Daniel Yang, an internist and VP of AI and New Technology at Kaiser Permanente. And given plans to make AI scribes available nationwide to help physicians take clinical notes, it was timely that Dr. Yang addressed this specific AI tool at some length.

Dr. Yang said AI has the potential to help health care systems meet rising demand for care. "We're not going to train our way out of that problem," he said, because of the sheer scope of the problem and the limited ability to produce enough physicians and other health care professionals.

“How do we leverage AI to unlock the largest healthcare workforce in Canada and the United States, which is patients themselves?” Dr. Yang asked. Patients prefer managing their own health care, he said, so the issue is how to use AI to enable this self-management of care “with appropriate guardrails.” Second, he asked, “how might we leverage AI to move to a one-to-many model of care, where one clinician can take care of many patients simultaneously while still maintaining quality of care and even increasing personalization of care.”

“Not deploying AI … is accepting that status quo – physician burnout, access delays, (increasing) costs of care – which I think many of us would believe to be unacceptable,” he said. However, Dr. Yang also pointed to the current “public trust gap” around using AI in health care.

A survey a few years ago showed 60% of Americans would not feel comfortable if their doctor relied on AI for diagnosis and treatment, he said, despite their high comfort level with using AI in other areas of their lives and despite being extremely comfortable trusting the health care system in other areas such as anesthesia. The core reason, he said, is “we haven’t given the public a reason to trust us.”

One manifestation of this lack of trust, he said, is the explosion of policies and regulations dealing with AI in health care in the US. “Today’s stream of regulations often come directly from real life examples of how AI may have been used in ways that could harm patients or magnify mistrust.”

Real-world evidence of the value of using AI in health care is “very scarce,” he said, “and so the more that we can demonstrate the real-world impact of these (AI) tools through rigorous clinical trials, I believe that the public will trust these tools.” In addition, he said, the health care system has yet to build an infrastructure of safety around the use of AI.

Dr. Yang told the audience that Kaiser Permanente has provided ambient AI scribe technology to more than 25,000 physicians. “The only reason we succeeded in deploying this technology responsibly and at scale was because we invested the time, the energy, the resources that this project deserved. We treated our deployment of this AI technology like you would treat a go-live for an electronic health record system.”

By focusing on deploying one AI tool in the system rather than “sprinkling AI everywhere,” Dr. Yang said, Kaiser Permanente was able to build organizational trust. “It takes time, and you need to do it slowly.” Another issue, he said, is the critical need to redesign workflows to accommodate new tools.

By successfully deploying ambient AI scribes to their physicians, he said, Kaiser Permanente has been able to “liberate physicians from their keyboards.” In addition, Dr. Yang said, “while we initially deployed the technology to support our physicians to reduce administrative burden, what really surprised us was how much the patients loved the tool. Not only did the patients feel that their doctors were more attentive during the visit, they also felt that it created a greater sense of transparency into the visit.”

However, Dr. Yang said it is wrong to focus on the increased productivity that AI scribes can bring to medical practice, because studies have shown their use does not save large amounts of time, even though physicians perceive that it does. “There is time being saved but it’s on the order of minutes, not hours,” he said. The productivity paradox means there is typically a 10-to-15-year lag between the introduction of a new technology and statistically measurable productivity gains, “and the reason behind that is because you have to completely redesign workflows around that new technology.”

Asked about the risks inherent in using AI for diagnosis and treatment, Dr. Yang said developers as well as clinicians need to take some responsibility. It is not fair, he said, for physicians to shoulder the added cognitive burden of being solely responsible for detecting AI “hallucinations” or errors. “Vendors need to make these tools easier for us as humans to provide effective oversight and efficient oversight instead of reading every word in an AI generated note.”


Reposted with permission from author Pat Rich. Learn more about the Patient Voice at e-Health25 here.
