Artificial intelligence is coming to healthcare—that is an irrefutable reality. But AI diagnosis is still emerging, and flawed.
Imagine that you find an anomaly on your skin and call a dermatologist for an appointment.
You're accustomed to meeting with a doctor who can provide a diagnosis.
But according to recent research, technology advances are going to change the nature of patient diagnosis. In the not-too-distant future, your diagnosis may be generated—at least in part—by an artificially intelligent system.
A 2018 study in the Annals of Oncology compared a convolutional neural network (CNN), a type of machine learning (ML) system, with the determinations of 58 dermatologists. Trained on more than 100,000 images of malignant and benign tumors, the artificial intelligence (AI) system accurately detected 95% of melanomas, while the human dermatologists found 86%.
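Detection rates like these are typically computed as sensitivity (recall): the fraction of actual melanomas the classifier flags. As a minimal sketch with toy counts (not the study's actual data), the calculation looks like this:

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of actual positive cases (melanomas) that were detected."""
    return true_positives / (true_positives + false_negatives)

# Toy counts for illustration: out of 100 melanomas,
# a classifier catches 95 and misses 5.
print(sensitivity(95, 5))   # 0.95, i.e., 95% detection
```

A high sensitivity matters most for a screening task like this, where a missed melanoma is far costlier than a false alarm.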
“These findings show that deep learning convolutional neural networks are capable of outperforming dermatologists, including extensively trained experts, in the task of detecting melanomas,” said professor Holger Haenssle, senior managing physician in the dermatology department at the University of Heidelberg, Germany, and the lead author of the study, in a press release.
(Figure source: PricewaterhouseCoopers survey of 2,500 consumers and business leaders)
AI diagnosis is poised to grow over the next several years. Research firm IDC projected that worldwide spending on artificial intelligence and cognitive computing will reach $52.2 billion in 2021, with disease treatment ranked second among AI use cases (see Figure 1).
One projection holds that by 2025, 65% of all healthcare delivery processes will involve some form of AI. And according to Reaction Data, 84% of clinicians say they now use machine learning or will within the next three years.
In a recent LinkedIn article on the use of AI in healthcare, Bertalan Meskó, M.D., Ph.D., director of the Medical Futurist Institute, noted that AI in medical diagnosis can enable far more targeted and effective healthcare decisions. Precision medicine—an emerging approach to disease treatment that factors in variability in individuals’ genes, environment and lifestyle—requires massive amounts of data to understand individual medical cases.
“As disruptive technologies appear on the stage of healthcare, it becomes possible to get down even more deeply to the roots of diseases and treatments,” Meskó wrote. “Medical professionals can move away from generalistic solutions towards personalization and precision.”
Still, patients and practitioners have significant reservations about AI diagnosis. While 54% of respondents to a recent survey of healthcare decision makers about AI in healthcare expect widespread adoption of AI within the next five years, 36% see a lack of trust in AI among patients, and 30% among clinicians, as a barrier to adoption.
In the LinkedIn piece, Meskó emphasized the immaturity of AI in healthcare.
“To avoid over-hyping technology,” he wrote, “the medical limitations of present-day AI have to be acknowledged.”
According to a 2017 article about the state of AI in cancer diagnosis, a supercomputer devoted to identifying and even discovering new forms of cancer is still in its infancy. Doctors still need to train computers with extensive data sets for them to identify known, prevalent cancers. And computers are far from ready to identify novel, uncharted approaches to cancer treatment.
Additionally, doctors have legitimate but also ingrained prejudices about AI diagnosis. They may shrug off an AI diagnosis as redundant when it merely corroborates their own chosen course of treatment, yet also dismiss an algorithm’s recommendation when it diverges from that path. AI diagnosis is not yet mature enough to be perceived as a kind of consulting physician whose input can shape the course of treatment.
Further, practitioners say it’s still quite labor-intensive to train computers and feed them the data sets they need to learn to identify diseases as complex as cancer—and keeping those data sets current can be a struggle.
Trust is the cornerstone of AI diagnosis maturity. Today, AI diagnosis-focused algorithms are regarded as “black boxes” that can’t be trusted because the mathematics underlying their assumptions can’t be easily understood. Practitioners, insurance companies and regulators want to understand the underlying assumptions that form the basis of these decisions.
Transparent algorithms remain a work in progress. Fairness, Accountability and Transparency in Machine Learning, a community of machine learning researchers, has published a set of principles for accountable algorithms.
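One reason simple linear models are often contrasted with “black box” deep networks is that every input’s contribution to the score can be inspected directly. A minimal illustration of that transparency, using entirely hypothetical feature names and weights (not any real diagnostic model):

```python
# Hypothetical, illustrative weights -- not a real diagnostic model.
weights = {"lesion_diameter_mm": 0.30, "border_irregularity": 0.50, "color_variance": 0.20}
bias = -2.0

def score_with_explanation(features):
    """Return a raw risk score plus each feature's additive contribution."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return bias + sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"lesion_diameter_mm": 6.0, "border_irregularity": 0.8, "color_variance": 0.5}
)
# Every term in `why` can be audited and questioned by a clinician or
# regulator, whereas a deep network's millions of weights cannot be
# read off in the same way.
```

The trade-off, of course, is that such transparent models are usually far less accurate than the CNNs discussed above—which is exactly the tension the accountability principles try to address.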
Ultimately, most practitioners look to AI diagnosis as a still-nascent science that should complement rather than supplant human diagnosis.
Recent data supports this approach. In a recent study of AI diagnosis of breast cancer incidence, the error rate for humans was 3.5% and for algorithms 2.8%; when humans were assisted by algorithms, the error rate was lower still.
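The study doesn’t report the combined error rate, but a back-of-the-envelope sketch shows why pairing the two helps: if human and algorithmic mistakes were statistically independent (a strong, purely illustrative assumption), both would have to miss the same case for the team to miss it, and the joint error would be the product of the individual rates:

```python
human_error = 0.035      # 3.5%, from the study cited above
algorithm_error = 0.028  # 2.8%, from the study cited above

# Under a (strong, illustrative) independence assumption, the team
# errs only when both the human and the algorithm err on a case.
joint_error = human_error * algorithm_error
print(f"{joint_error:.4%}")  # roughly 0.1%
```

In practice human and machine errors correlate—both tend to miss the same hard cases—so the real combined rate sits somewhere between this optimistic product and the better single rate, which is consistent with the study reporting only that it was “lower.”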
Rasu Shrestha, chief innovation officer at the University of Pittsburgh Medical Center, noted in an article on AI diagnosis that the word artificial in artificial intelligence may be a miscue.
“We need to take the artificial away from our embrace of the technology,” Shrestha said in the article. “The term ‘AI’ should stand for ‘augmented intelligence.’”
Lauren Horwitz is the managing editor of Cisco.com, where she covers the IT infrastructure market and develops content strategy. Previously, Horwitz was a senior executive editor in the Business Applications and Architecture group at TechTarget; a senior editor at Cutter Consortium, an IT research firm; and an editor at the American Prospect, a political journal. She has received awards from the American Society of Business Publication Editors (ASBPE), a min Best of the Web award and the Kimmerling Prize for best graduate paper for her editing work on the journal article “The Fluid Jurisprudence of Israel’s Emergency Powers.”