Dozens of presentations took the online stage on Day 2 of CES 2021. Here’s a snapshot of a panel discussion on overcoming the barriers to the growth and acceptance of AI in healthcare.
Dr. Jesse Ehrenfeld, Chair of the Board of Trustees of the American Medical Association (AMA), summed up the challenge as an issue of trust.
“The AMA has been thinking about AI and its impact on the practice of medicine for several years from a couple of lenses,” he said. “When you think about what’s the rate-limiting factor to innovation and transformation, it’s not the technology; it’s that interface between the human and the machine. And at the centre of that interface is trust.”
Pat Baird, Senior Regulatory Specialist at Philips, a Dutch health technology company, dove into the various dimensions of trust across three buckets: technical trust, regulatory trust, and human interaction trust.
Technical trust asks whether the technology is technically and logically capable of doing what it was designed to do. Have steps been taken to minimize bad data quality? Is the application solid or not?
Regulators are key stakeholders, so their concerns obviously need to be addressed, but one stakeholder who can be overlooked is the patient.
“[The patients are] going to have some questions. If you have a bad user interface for your product, people aren’t going to want it or trust it. So how you interact with the end-users is very important in building that trust,” Baird said.
AMA’s Ehrenfeld noted that trust is also tied to transparency.
“Having a clear regulatory framework, clear standards framework [is important], so we can establish clear rules of the road when it comes to the development of these technologies,” he said. “[Different] applications represent very different risk profiles and probably different approaches to oversight and how you get that transparency that ultimately establishes the level of trust needed.”
Baird noted that the sheer diversity of medical contexts makes it “easier to think about specialized niche algorithms that work in that particular area as opposed to more general approaches, which might be more challenging.”
He added that whatever the application, AI draws upon volumes of data, so data quality is a huge issue. What worries Baird most is that there may not be enough collaboration between the technical people developing the tools and the medical professionals who can decipher what the collected data actually means.
“I’m really afraid that we have these readily available tools that can do all kinds of analytics…. But we need [medical professionals] to give us that context to give us those other insights on the data,” he said.
Even if the algorithms are accurate and built for the appropriate context, other barriers to AI’s wider adoption in healthcare remain.
Over the years, the AMA has surveyed thousands of physicians across the country about technology adoption and what it would take for doctors to embrace new tools. Ehrenfeld said the same three concerns came up again and again: Will it work? Will I get paid for it? And will I get sued if something goes wrong?
But there is a brighter side to AI adoption. Baird told of a nurse he met a decade ago. She said she became a nurse because she wanted to help people but lamented that, even then, she spent an inordinate amount of time working on computers.
“She said, ‘I spend more time taking care of machines than I do taking care of my patients,’” Baird recalled. “So what my hope is for AI in healthcare is that because technology already has dehumanized healthcare, I’m hoping that AI will help re-humanize healthcare, freeing up the caregivers, letting them give care, and let them leave the things that computers can do to the computers.”
For a deeper dive into the promise of AI, please check out some of ICTC’s research on AI.