As an industry reliant on patient records and beset by outdated technology, health care is widely thought to be a prime target for an artificial intelligence revolution.
Many believe the technology will deliver a host of benefits to clinical practitioners, speeding up care and catching illnesses early enough to identify potential treatments.
Just two days ago, DeepMind, an artificial intelligence firm owned by Google, said it had lent its technology to London's Moorfields Eye Hospital for groundbreaking research into detecting eye diseases. The system was used to scan for and identify more than 50 ophthalmological conditions, and its machine-learning technology made correct diagnoses 94 percent of the time, Moorfields said.
The development suggested that AI can analyze some health problems as accurately as a doctor. But some doctors worry that those in the tech world believe AI can not only assist clinicians but eventually outperform them.
Take Babylon Health, for instance. In June, the firm said its AI chatbot could diagnose medical conditions as accurately as a doctor, citing a higher-than-average score on a practice exam compiled for physicians.
Babylon's chatbot answered 82 percent of the exam's questions correctly, against an average mark of 72 percent for human doctors.
But the Royal College of General Practitioners (RCGP), an industry body representing doctors who treat a wide range of common illnesses, quickly disputed the claim that AI could diagnose illnesses as effectively as a human medical practitioner.
"No app or algorithm can do what a GP does," Helen Stokes-Lampard, a professor and chair of the RCGP, told CNBC earlier this week. "Every day we deliver care to more than a million people across the U.K., taking into account the physical, psychological and social factors that may be impacting on each person's health."
Stokes-Lampard continued: "We consider the different health conditions a patient is living with, their family history, any medications they might be taking, and a myriad of other considerations when formulating a treatment plan."
Babylon at the time denied it had claimed an AI could do the job of a GP, saying it supported a model in which AI complements medical practice.