Researchers from the University of London and the University of Catania have discovered how to weaponize Amazon Echo devices to hack themselves.
The attack – dubbed “Alexa vs. Alexa” – leverages what the researchers called “a command self-issue vulnerability”: using pre-recorded messages which, when played over a 3rd- or 4th-generation Echo speaker, cause the speaker to perform actions on itself.
How to Make Alexa Hack Itself
Smart speakers lie dormant during the day, waiting for a user to vocalize a particular activation phrase: e.g., “Hey, Google,” “Hey, Cortana” or, for the Amazon Echo, “Alexa,” or simply, “Echo.” Usually, of course, it’s the device’s owner who issues such commands.
However, the researchers found that “self-activation of the Echo device [also] happens when an audio file reproduced by the device itself contains a voice command.” And even if the device asks for a secondary confirmation in order to perform a particular action, “the adversary only has to always append a ‘yes’ approximately six seconds after the request to be sure that the command will be successful.”
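The structure of such a payload is simple: a self-issued command followed by an automatic confirmation. The sketch below illustrates that timeline in Python; the wake word and the ~6-second confirmation window come from the report, while the specific command text and the timeline representation are illustrative assumptions, not the researchers’ actual tooling.

```python
# Illustrative sketch of a "command self-issue" payload timeline,
# per the attack described in the Alexa vs. Alexa report.
# (Hypothetical example; the command text is an assumption.)
WAKE_WORD = "Alexa"
COMMAND = "turn off the lights"   # any voice command works here
CONFIRM_DELAY_S = 6.0             # ~6 s, per the researchers

# Each entry: (seconds into the recording, phrase spoken aloud).
payload_timeline = [
    (0.0, f"{WAKE_WORD}, {COMMAND}"),      # device hears itself issue the command
    (CONFIRM_DELAY_S, "yes"),              # pre-recorded confirmation, in case
                                           # the device asks to confirm
]

for offset, phrase in payload_timeline:
    print(f"t={offset:>4.1f}s  ->  {phrase!r}")
```

Because the confirmation is always appended, the payload succeeds whether or not the device actually asks for one.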
To get the device to play a maliciously crafted recording, an attacker would need a smartphone or laptop in Bluetooth-pairing range. Unlike internet-based attacks, this scenario requires proximity to the target device. This physical impediment is balanced by the fact that, as the researchers noted, “once paired, the Bluetooth device can connect and disconnect from Echo without any need to perform the pairing process again. Therefore, the actual attack may happen several days after the pairing.”
Alternatively, the report stated, attackers could use an internet radio station, beaming to the target Echo like a command-and-control server. This method “works remotely and can be used to control multiple devices at once,” but would require extra steps, including tricking the targeted user into downloading a malicious Alexa “skill” (app) to an Amazon device.
Using the Alexa vs. Alexa attack, attackers could tamper with applications downloaded to the device, make phone calls, place orders on Amazon, eavesdrop on users, control other connected appliances in a user’s home and more.
“This action can undermine physical safety of the user,” the report stated, “for example, when turning off the lights during the evening or at nighttime, turning on a smart microwave oven, setting the heating at a very high temperature or even unlocking the smart lock for the front door.”
In testing their attack, the authors were able to remotely turn off the lights in one of their own homes 93 percent of the time.
Smart Speakers Are Uniquely Vulnerable
Because they’re always listening for their wake word, and because they’re so often interconnected with other devices, smart speakers are prone to unique security vulnerabilities. The Echo series of devices, in particular, has been linked with a series of privacy risks, from microphones “hearing” what people text on nearby smartphones to audio recordings being stored indefinitely on company servers.
The physical proximity required for Bluetooth, or having to trick users into downloading malicious skills, limits but does not eliminate the potential for harm in the scenario the Alexa vs. Alexa report described, according to John Bambenek, principal threat hunter at Netenrich. Those living in dense cities are potentially at risk, and individuals “at most risk are those in domestic violence scenarios,” he wrote via email. For that reason, “simply accepting the risk isn’t acceptable.”
The research prompted Amazon to patch the command self-issue vulnerability, a payoff of maintaining a robust threat-hunting culture.
“Most people aren’t evil,” wrote Bambenek. “It is hard to test new technology against criminal intent because even testers lack the criminal mindset (and that’s a good thing for society). As technology gets adopted, we find things we overlook and make it better.”
For its part, Amazon gave Threatpost the following statement:
“At Amazon, privacy and security are foundational to how we design and deliver every device, feature, and experience. We appreciate the work of independent security researchers who help bring potential issues to our attention, and are committed to working with them to secure our devices. We fixed the remote self-wake issue with Alexa Skills caused by extended periods of silence resulting from break tags as demonstrated by the researchers. We also have systems in place to continually monitor live skills for potentially malicious behavior, including silent re-prompts. Any offending skills we identify are blocked during certification or quickly deactivated, and we are constantly improving these mechanisms to further protect our customers.”
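Amazon’s statement refers to “silent re-prompts” built from SSML break tags: a skill response that plays only a long pause, keeping the session open without the user hearing anything. The sketch below, in Python, shows what such a response body might look like. It is a hedged illustration based on the publicly documented Alexa SSML `<break>` tag and skill response format, not the researchers’ or Amazon’s actual code; the function name is an invention for this example.

```python
# Hypothetical illustration of a "silent re-prompt" in an Alexa skill
# response, the pattern Amazon says it now detects and blocks. The
# SSML <break> tag inserts a pause; an SSML body containing only a
# break is silent to the user while the session stays open.
def silent_reprompt_ssml(pause_seconds: int = 10) -> str:
    # Returns an SSML document that says nothing audible.
    return f'<speak><break time="{pause_seconds}s"/></speak>'

# Minimal skill response envelope (sketch of the documented format).
response = {
    "version": "1.0",
    "response": {
        "shouldEndSession": False,  # keep the session alive
        "reprompt": {
            "outputSpeech": {
                "type": "SSML",
                "ssml": silent_reprompt_ssml(),
            }
        },
    },
}

print(response["response"]["reprompt"]["outputSpeech"]["ssml"])
```

Per the statement, the fix targets exactly this pattern: extended silence produced by break tags, which certification checks and live-skill monitoring now flag.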
The latest, patched version of Alexa device software can be found here.
This posting was updated on March 8 at 1:30 p.m. ET to include Amazon’s statement.