Alexa, the voice assistant built into the Amazon Echo, is one of many artificially intelligent (AI) personal assistants being deployed by technology companies to help consumers manage their homes and schedules. Amazon's gadget, which is quickly emerging as a strong rival to Apple's Siri and Google's Assistant, was a big hit at this year's Consumer Electronics Show (CES) in Las Vegas.
Yet as a recent murder case illustrates, AI assistants are creating thorny legal and privacy questions that lawyers and cybersecurity experts are scrambling to understand. Because virtual assistants rely on microphones that, in some cases, may be continuously recording and transmitting audio, the resulting trove of data forces a delicate balance among law enforcement requests, corporate strategy and individual privacy rights.
In 2015, an Arkansas man was found dead in a hot tub, and investigators issued a warrant to Amazon requesting that the company turn over audio recordings and other data captured by an Echo smart speaker owned by the suspect. Although the internet retailer declined to give authorities the requested information, at least a few experts say the case may be a sign of things to come.
That is because the convenience of voice-activated devices, which passively listen for a "hot word" or "wake word" before activating, may come at the cost of individual privacy. To function at all, the device must continuously capture and process audio, holding on to everything it hears in the hope of picking out the wake word.
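The always-listening design described above can be sketched in a few lines of code. This is a simplified illustration, not Amazon's actual implementation: real devices run an acoustic model on raw audio, whereas here audio is stood in for by text chunks, the wake word "alexa" is checked with a plain substring match, and the buffer size is an arbitrary assumption.

```python
from collections import deque

WAKE_WORD = "alexa"   # illustrative wake word; real detection uses an acoustic model
BUFFER_CHUNKS = 4     # assumption: only a few seconds of audio are kept locally

def process_stream(chunks):
    """Simulate on-device wake-word detection over a stream of audio chunks.

    Audio is held in a short rolling buffer and continuously scanned.
    Nothing leaves the device unless the wake word is heard, at which
    point the buffered audio would be sent to the cloud for recognition.
    """
    buffer = deque(maxlen=BUFFER_CHUNKS)  # oldest chunks silently discarded
    uploads = []
    for chunk in chunks:
        buffer.append(chunk)
        if WAKE_WORD in chunk.lower():    # stand-in for the acoustic model
            uploads.append(list(buffer))  # only now does audio "leave" the device
            buffer.clear()
    return uploads

# Only the chunk containing the wake word triggers an upload.
stream = ["background noise", "tv dialogue", "Alexa, what time is it", "more noise"]
print(process_stream(stream))
```

The privacy tension in the article lives in that rolling buffer: even though most audio is discarded, the device must hear everything in order to decide what to keep.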
A big part of the onus lies on the companies manufacturing the technology, explained Andrew Crocker, a staff attorney with the digital rights group Electronic Frontier Foundation, in an interview. "We can still insist that these companies protect our privacy when the government comes for that data," he said.