An effort to use voice-assistant devices – such as Amazon’s Alexa – to detect signs of memory problems has received a boost with a grant from the US federal government.
This is an edited version of an article first published by Medscape.
Researchers from Dartmouth-Hitchcock and the University of Massachusetts Boston will receive a four-year, $1.2 million grant from the National Institute on Aging. The team hopes to develop a system that would use machine-learning and deep-learning techniques to detect changes in speech patterns and determine whether someone is at risk of developing dementia or Alzheimer’s disease.
“We are tackling a significant and complicated data-science question – whether the collection of long-term speech patterns of individuals at home will enable us to develop new speech-analysis methods for early detection of this challenging disease,” said Xiaohui Liang, an assistant professor of computer science at the University of Massachusetts Boston. “Our team envisions that the changes in the speech patterns of individuals using voice assistant systems may be sensitive to their decline in memory and function over time.”
John Batsis, a member of the team and associate professor of medicine at the Geisel School of Medicine at Dartmouth, stated that the system would help families better plan for care should someone develop a cognitive impairment.
“Alzheimer’s disease and related dementias are a major public health concern that lead to high health costs, risk of nursing home placement and place an inordinate burden on the whole family,” he said. “The ability to plan in the early stages of the disease is essential for initiating interventions and providing support systems to improve patients’ everyday function and quality of life.”
Batsis acknowledged that this is a novel approach and that challenges lie ahead in developing the system, which he and the other researchers plan eventually to test in people’s homes. In theory, the system would aim to pick up changes in a person’s speech patterns, intonation and lexicon, he said, but the researchers would also have to work out how to handle a variety of languages, multiple people speaking in the same room, and speakers who mumble or don’t speak clearly.
“These are all pragmatic and practical issues,” he said.
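The article does not describe how the team’s speech analysis will actually work, but a minimal sketch can illustrate the general idea of tracking simple lexical features in voice-assistant transcripts over time. Everything below is an assumption for illustration only: the lexical_features and declining helpers, the choice of type-token ratio and mean utterance length as features, and the sample transcripts are hypothetical and are not the researchers’ methods or data.

```python
# Illustrative sketch only (hypothetical; not the researchers' pipeline).
# Assumes voice-assistant transcripts are available as (date, text) pairs.
# It tracks two toy lexical features over time – type-token ratio
# (vocabulary diversity) and mean words per utterance – and flags a
# downward linear trend for review. Requires Python 3.10+.

from datetime import date
from statistics import linear_regression

def lexical_features(text: str) -> tuple[float, float]:
    """Return (type-token ratio, mean words per sentence) for a transcript."""
    words = [w.strip(".,!?").lower() for w in text.split() if w.strip(".,!?")]
    ttr = len(set(words)) / len(words) if words else 0.0
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    mean_len = len(words) / len(sentences) if sentences else 0.0
    return ttr, mean_len

def declining(series: list[tuple[int, float]], threshold: float = -0.001) -> bool:
    """Flag a feature whose fitted slope over time falls below a threshold."""
    days = [d for d, _ in series]
    values = [v for _, v in series]
    slope, _ = linear_regression(days, values)
    return slope < threshold

# Hypothetical transcripts collected over several months.
transcripts = [
    (date(2021, 1, 5),  "Remind me to call my daughter tomorrow afternoon."),
    (date(2021, 4, 9),  "Remind me to call. Call my daughter. Tomorrow."),
    (date(2021, 7, 14), "Call daughter. Call. Tomorrow call."),
]

start = transcripts[0][0]
ttr_series = [((d - start).days, lexical_features(t)[0]) for d, t in transcripts]
print("Possible decline in vocabulary diversity:", declining(ttr_series))
```

A real system would draw on far richer acoustic and lexical signals, across many speakers and languages – exactly the kind of practical challenge Batsis describes above.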
Should such a system one day be sold commercially, the researchers envision that patients, their families or caregivers would choose to enable it on their voice assistant. “A huge challenge is that of privacy,” Batsis said. “You need to think about these things. Older adults who may be at risk, or whose family members are concerned about this, need to have buy-in for that.”
Several experts who were not involved in the work welcomed its focus. “Imagine if we had another tool to help diagnose this, and if that tool helped us detect it early,” said Alicia Nobles, an assistant professor in the Department of Medicine at the University of California, San Diego, and co-founder of the Center for Data-Driven Health at the Qualcomm Institute. She noted that detecting impairments early may be “crucial” in helping patients and their caregivers manage care.
Sarah Lenz Lock, the senior vice president for policy at AARP and the executive director of the Global Council on Brain Health, also felt the research looked promising. “We need to assure that people’s privacy is maintained through the expanded use of technology in this way – but speech patterns present a promising area for early screening of cognitive decline.”