While experts are mostly in agreement about the benefits AI will provide medical practitioners — such as diagnosing illnesses very early on and speeding up the overall healthcare experience — some doctors and academics are wary that we could be moving toward data-driven medical practice too fast.
One fear among academics is that people are expecting too much of AI, assuming it can match the kind of general intelligence humans use to solve a broad range of tasks.
"All the successful AI applications to date are incredibly successful, but in a very narrow range of application," said Alan Bundy, a professor at the University of Edinburgh's school of informatics.
According to Bundy, these expectations could have potentially dire consequences for an industry like healthcare. "A medical diagnosis app, which is excellent at heart problems, might diagnose a cancer patient with some rare kind of heart problem, with potentially fatal results," he said.
Just last week, a report by health-focused publication Stat cited internal IBM documents showing that the tech giant's Watson supercomputer had made multiple "unsafe and incorrect" cancer treatment recommendations. According to the article, the software was trained only to deal with a small number of cases and hypothetical scenarios rather than actual patient data.
"We created Watson Health three years ago to bring AI to some of the biggest challenges in healthcare, and we are pleased with the progress we're making," an IBM spokesperson told CNBC.
"Our oncology and genomics offerings are used by 230 hospitals around the world and have supported care for more than 84,000 patients, which is almost double the number of patients as of the end of 2017."
The spokesperson added: "At the same time, we have learned and improved Watson Health based on continuous feedback from clients, new scientific evidence and new cancers and treatment alternatives. This includes 11 software releases for even better functionality during the past year, including national guidelines for cancers ranging from colon to liver cancer."
Another concern is that the sheer volume of data gobbled up and shared by computers — and the data-driven algorithms that act on it — raises ethical questions about patient privacy.
The dawn of big data, now a multibillion-dollar industry covering everything from trading to hospitality, means that the amount of personal information that can be collected by machines has ballooned to an unfathomable size.
The phenomenon is being touted as a breakthrough for mapping various diseases, predicting the likelihood that someone will become seriously ill and evaluating treatments ahead of time. But concerns over how much data is stored, and where it is shared, are proving problematic.
Take DeepMind, for example. The Google-owned AI firm signed a deal with the U.K.'s National Health Service in 2015, giving it access to the health data of 1.6 million British patients. Under the arrangement, patients' data was handed over to the company in order to improve its programs' ability to detect illness. It led to the creation of an app called Streams, aimed at monitoring patients with kidney disease and alerting clinicians when a patient's condition deteriorates.
But last year, U.K. privacy watchdog the Information Commissioner's Office ruled that the contract between the NHS and DeepMind "failed to comply with data protection law." The ICO said that London's Royal Free Hospital, which worked with DeepMind as part of the agreement, was not transparent about the way patients' data would be used.