- A clinical study aims to find out if artificial intelligence is better suited than doctors to talk about death with patients.
- Specially designed chatbots are being used in palliative care to see if patients might be more inclined to share their symptoms or ask questions when no other human is present.
Chatbots are used for a variety of tasks: ordering pizza, getting product suggestions via Facebook Messenger and receiving online customer support. But can they cope with death?
A three-year clinical study with financial backing of more than $1 million from the National Institutes of Health is exploring whether a chatbot can help terminally ill, geriatric patients with their end-of-life care.
Over the next three years, Northeastern University professor Timothy Bickmore and Boston Medical Center doctor Michael Paasche-Orlow will distribute Microsoft Surface tablets preloaded with a chatbot to about 360 patients who have been told they have less than a year to live.
Designed in consultation with experts from Boston Medical Center and programmed by Bickmore and other Northeastern University researchers, the chatbot — which takes the form of a middle-aged female digital character — is preloaded with a number of capabilities. These include clinical ones — such as gauging a patient's level of pain and keeping tabs on whether medication is being taken — as well as ones to improve a patient's quality of life. There are modules for talking about stress management and promoting exercise, a social chat feature if patients are just looking for someone to talk to, and even a module for spiritual counseling.
What Bickmore and Paasche-Orlow expect is that geriatric patients going through palliative care near the end of their lives will get use out of a tablet-based chatbot, and that having such a service available to hospitals and clinics will be valuable to patients long before they're in hospice care.
"Primary-care physicians don't think to call in palliative-care services until patients are incurable, when in fact the patient might have been in pain or could've used some kind of intervention beforehand," said Bickmore, associate dean for research in Northeastern University's College of Computer and Information Science.
Today about 90 million Americans live with a serious illness, according to the Center to Advance Palliative Care. This number is expected to double over the next 25 years as the baby boomer generation ages.
The interactions patients are having with these chatbots are monitored continuously by nurses, who can activate care if a patient tells the chatbot they're experiencing symptoms. The nurses will also alert a family member if patients are telling the chatbot they're thinking about making end-of-life decisions, like completing a last will and testament.
"If a patient rates their nausea or pain a little higher, we ask them if they've taken medicine for that and then try to figure out and troubleshoot that experience," Paasche-Orlow said. "With a lot of these types of things, humans just forget to follow up on them, so there's a lot of lost opportunities to support people in different ways."
For a decade Bickmore and Paasche-Orlow have collaborated on health IT projects that make use of conversational artificial intelligence, or what Bickmore calls relational agents: computer agents designed to simulate face-to-face conversations with other people, as well as pick up on gesticulations, facial expressions and body posture.
Their latest endeavor began with a call for technologies that could potentially assist older patients in the last stages of a terminal illness. The call was issued by several NIH institutes, including the National Institute of Nursing Research, the National Cancer Institute and the National Institute on Aging.
In the medical world, conversational artificial intelligence elicits a mixed response: some see a potentially transformative technology, others something doctors and patients should guard against. Research on chatbots used with mental health patients, published in 2016 in the Journal of the American Medical Association, demonstrated that some patients are more likely to display true emotions when they think they're talking to a computer, an insight that could lead to further deployment of conversational agents as a means to automate and lower the costs of clinical treatments.
But there are risks of "ineffective care and patient harm," as the JAMA research said. In particular, researchers singled out digital voice assistants of the kinds created by large tech companies, like Apple, Google, Microsoft and Samsung. Certainly, those voice assistants are not intended to act as de facto doctors, but the JAMA research found that when people asked their digital voice assistants questions related to their mental health, responses were "inconsistent and sometimes inappropriate."
"There's a growing number of chatbots or characters out there that pretend to be a health oracle," Bickmore said. "That's a real setup for safety issues for patients."
Users of the tablet-based chatbot in the palliative-care study are prevented from giving open-ended responses. Whenever it's a patient's turn to say something to the chatbot, they're given prompts on the screen, multiple-choice style.
"We know exactly what their intent is, and they can't go off topic or talk about something we hadn't considered," Bickmore said.
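The constrained-input design Bickmore describes can be sketched in a few lines of Python. This is an illustrative assumption, not the study's actual implementation: the prompt text, option labels and `alert_nurse` action are hypothetical, but the sketch shows why multiple-choice turns make every intent unambiguous.

```python
# Hypothetical sketch of one constrained dialog turn: the patient picks
# from fixed options rather than typing free text, so the system always
# knows the intent and can route a predefined action (e.g. a nurse alert).

PAIN_PROMPT = {
    "question": "How would you rate your pain right now?",
    "options": {
        "1": ("No pain", None),
        "2": ("Mild pain", None),
        "3": ("Severe pain", "alert_nurse"),  # hypothetical escalation hook
    },
}

def handle_turn(prompt, choice):
    """Map a multiple-choice selection to a known intent and optional action."""
    label, action = prompt["options"][choice]
    return {"intent": label, "action": action}

# Selecting "Severe pain" deterministically triggers the escalation action.
print(handle_turn(PAIN_PROMPT, "3"))
```

Because the patient can only select from enumerated options, there is no free-text parsing step that could misread a symptom report, which is the safety property the researchers cite.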
Now in year two of a five-year project that began in 2016, the chatbot project has received $1.3 million, with more funding contingent on the success of the clinical trial that began this year.
"We are becoming an increasingly technology-dependent country, and so are our aging seniors," said Dr. Jeri Miller, chief of the Office of End-of-Life and Palliative Care Research within the National Institute of Nursing Research. "[Bickmore and Paasche-Orlow] have developed this innovative platform that has a way to help individuals who have serious advanced illnesses think through important factors early: What kind of care do I want? How am I managing my medications? What are my spiritual goals and values?"
The focus on spiritual goals and values was the key component of an initial lab study conducted by Bickmore and Paasche-Orlow at Northeastern University. They tested the chatbot with 44 people age 55 and older, specifically around topics related to preparation for death and spiritual counseling. They found that after a 30-minute conversation with the chatbot, most participants showed a significant decrease in anxiety about death, even those who identified as atheist or not particularly religious.
"It turns out that patients were very happy to talk with a computer about it. They were very explicit in telling us, 'The doctor never asked me about these things,'" Paasche-Orlow said.
In a clinical setting, patients are sometimes afraid to ask explicit questions of a doctor because they worry doctors don't have the time for them. Instead, Paasche-Orlow said, they tend to drop hints and clues they want to talk about certain things — like chatting with a hospital chaplain, for example — and wait for the health-care provider to pick up on them.
"In a hospital, frequently the patient is there all day and the doctor comes in at some point during the morning on rounds, and sometimes with a whole group of people. It's not conducive for patients to feel comfortable to ask questions, especially with something where they don't know how long it'll take or how their question will be received," Paasche-Orlow said.
With a chatbot the stakes are lower — patients might be more inclined to share their symptoms or ask questions they might not ask a doctor. By cataloging and keeping track of those responses, the chatbot in turn makes it easier for doctors, nurses and family caregivers to better coordinate their responses, ensuring the right health care is delivered at the right time.
The three-year clinical trial currently under way will show whether this theory is correct.
"We had not designed a conversational agent for patients with advanced needs for palliative care until this project," Paasche-Orlow said. "But we thought this might be an interesting place where health care can be supported."
— By Andrew Zaleski, special to CNBC.com