Attackers can force Amazon Echos to hack themselves with self-issued commands – Ars Technica


Academic researchers have devised a new working exploit that commandeers Amazon Echo smart speakers and forces them to unlock doors, make phone calls and unauthorized purchases, and control furnaces, microwave ovens, and other smart appliances.
The attack works by using the device’s speaker to issue voice commands. As long as the speech contains the device’s wake word (usually “Alexa” or “Echo”) followed by a permissible command, the Echo will carry it out, researchers from Royal Holloway University in London and Italy’s University of Catania found. Even when devices require verbal confirmation before executing sensitive commands, it’s trivial to bypass the measure by adding the word “yes” about six seconds after issuing the command. Attackers can also exploit what the researchers call the “FVV,” or full volume vulnerability, which allows Echos to make self-issued commands without temporarily reducing the device volume.
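The confirmation-bypass timing described above can be sketched as a simple playback schedule: the command is spoken first, and “yes” follows roughly six seconds later. This is a minimal illustration only; the function name and data structure are assumptions, not code from the researchers’ paper.

```python
# Illustrative sketch of the confirmation-bypass timing (hypothetical names).
# The wake word, command format, and ~6-second "yes" delay come from the
# researchers' description; everything else is an assumption.

CONFIRMATION_DELAY_S = 6  # approximate gap before "yes", per the researchers


def build_utterance_schedule(wake_word, command, needs_confirmation=True):
    """Return (delay_seconds, text) pairs an attacker-controlled audio
    stream would play through the Echo's own speaker, in order."""
    schedule = [(0, f"{wake_word}, {command}")]
    if needs_confirmation:
        # Speak "yes" about six seconds later to defeat verbal confirmation.
        schedule.append((CONFIRMATION_DELAY_S, "yes"))
    return schedule


for delay, text in build_utterance_schedule("Alexa", "unlock the front door"):
    print(f"+{delay}s: {text}")
```

A real playback component would wait `delay` seconds before rendering each utterance with text-to-speech; the sketch only shows the ordering and timing the attack relies on.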
Because the hack uses Alexa functionality to force devices to make self-issued commands, the researchers have dubbed it “AvA,” short for Alexa vs. Alexa. It requires only a few seconds of proximity to a vulnerable, powered-on device, long enough for an attacker to utter a voice command instructing it to pair with an attacker’s Bluetooth-enabled device. As long as that device remains within radio range of the Echo, the attacker can issue commands.
The attack “is the first to exploit the vulnerability of self-issuing arbitrary commands on Echo devices, allowing an attacker to control them for a prolonged amount of time,” the researchers wrote in a paper published two weeks ago. “With this work, we remove the necessity of having an external speaker near the target device, increasing the overall likelihood of the attack.”
A variation of the attack uses a malicious radio station to generate the self-issued commands. That attack is no longer possible in the way shown in the paper following security patches that Echo-maker Amazon released in response to the research. The researchers have confirmed that the attacks work against 3rd- and 4th-generation Echo Dot devices.
AvA begins when a vulnerable Echo device connects by Bluetooth to the attacker’s device (and for unpatched Echos, when they play the malicious radio station). From then on, the attacker can use a text-to-speech app or other means to stream voice commands. Here’s a video of AvA in action. All the variations of the attack remain viable, with the exception of what’s shown between 1:40 and 2:14:
The researchers found they could use AvA to force devices to carry out a host of commands, many with serious privacy or security consequences. Possible malicious actions include controlling linked smart appliances, buying unwanted items, tampering with linked calendars, calling phone numbers, and eavesdropping on the user’s utterances.
The researchers wrote:
With these tests, we demonstrated that AvA can be used to give arbitrary commands of any type and length, with optimal results—in particular, an attacker can control smart lights with a 93% success rate, successfully buy unwanted items on Amazon 100% of the times, and tamper [with] a linked calendar with 88% success rate. Complex commands that have to be recognized correctly in their entirety to succeed, such as calling a phone number, have an almost optimal success rate, in this case 73%. Additionally, results shown in Table 7 demonstrate the attacker can successfully set up a Voice Masquerading Attack via our Mask Attack skill without being detected, and all issued utterances can be retrieved and stored in the attacker’s database, namely 41 in our case.

