Amazon Dismisses Claims Alexa ‘Skills’ Can Bypass Security Vetting Process


Researchers found a number of privacy and security issues in Amazon’s Alexa skill vetting process, which could lead to attackers stealing data or launching phishing attacks.

Researchers warn that Amazon’s voice assistant Alexa is susceptible to malicious third-party “skills” – add-on capabilities built by outside developers – that could leave smart-speaker owners exposed to a wide range of cyberattacks.

Amazon has roundly dismissed the security-threat claims.

Researchers scrutinized 90,194 unique skills from Amazon’s skill stores across seven countries. The report, presented at the Network and Distributed System Security Symposium 2021 this week, found widespread security issues that could lead to phishing attacks or the ability to trick Alexa users into revealing sensitive information.

“While skills expand Alexa’s capabilities and functionalities, it also creates new security and privacy risks,” said a group of researchers from North Carolina State University, the Ruhr-University Bochum and Google, in a research paper (PDF).

“We identify several gaps in the current ecosystem that can be exploited by an adversary to launch further attacks, including registration of arbitrary developer name, bypassing of permission APIs, and making backend code changes after approval to trigger dormant intents,” they said.

An Amazon spokesperson told Threatpost that the company conducts security reviews as part of skill certification, and has systems in place to continually monitor live skills for potentially malicious behavior.

“The security of our devices and services is a top priority,” said the Amazon spokesperson. “Any offending skills we identify are blocked during certification or quickly deactivated. We are constantly improving these mechanisms to further protect our customers. We appreciate the work of independent researchers who help bring potential issues to our attention.”

What is an Amazon Alexa Skill?

A skill is essentially an application for Alexa, made by third-party developers, which users can install or uninstall via the Alexa smartphone app. These skills have a variety of functionalities – from reading stories to children, to interacting with services like Spotify.

For developers to build a skill, they need the following elements (illustrated in the sketch after this list):

  • An invocation name identifying the skill
  • A set of “intents,” which are the actions Alexa users must take to invoke the skill
  • Specific words or phrases that users can utilize to invoke the desired intents
  • A cloud-based service to accept requests and consequently act on them
  • A configuration that brings the intents, invocation names and cloud-based service together, so Alexa can route the correct requests to the desired skill
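Taken together, these pieces amount to an interaction model plus a developer-controlled endpoint. The sketch below expresses that structure as a Python dict; the invocation name, intent name, sample phrases and endpoint URL are hypothetical placeholders for illustration, not drawn from any real skill.

```python
# Sketch of how a custom skill's pieces fit together. All names here
# ("daily fact", "GetFactIntent", the endpoint URL) are hypothetical.

interaction_model = {
    "invocationName": "daily fact",      # how users address the skill
    "intents": [
        {
            "name": "GetFactIntent",     # an action the skill can perform
            "samples": [                 # phrases that trigger this intent
                "tell me a fact",
                "give me today's fact",
            ],
        },
    ],
}

# The cloud-based service that receives the requests Alexa routes to the skill,
# typically an HTTPS endpoint or an AWS Lambda function run by the developer.
endpoint = "https://skills.example.com/daily-fact"
```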

Finally, before a skill can be made publicly available to Alexa users, developers must submit it to Amazon for vetting and verification. During this process, Amazon ensures that the skill meets its policy guidelines.

For instance, Amazon makes sure that the privacy policy link for the skill is valid, and that the skill meets the security requirements needed for hosting services on external servers (by checking whether the server responds to requests that aren’t signed by an Amazon-approved certificate authority, for instance).

Amazon’s Alexa Skill Vetting is Lacking

However, researchers said they found several glaring issues with Amazon’s skill-vetting process. For one, developers can get away with registering skills under certain well-known company names – such as Ring, Withings or Samsung. Bad actors could then exploit these fake brand names by sending phishing emails that link to the skill’s page in the Amazon store, lending the message an air of legitimacy and tricking users into handing over valuable information.

Image credit: Researchers with North Carolina State University, the Ruhr-University Bochum and Google

Researchers said they found 9,948 skills in the U.S. skill store, for instance, that shared the same invocation name with at least one other skill – and across all skill stores, they found that only 36,055 (out of the 90,194) skills had a unique invocation name.

“This primarily happens because Amazon currently does not employ any automated approach to detect infringements for the use of third-party trademarks, and depends on manual vetting to catch such malevolent attempts which are prone to human error,” said researchers.

Another issue highlighted by researchers is that attackers can change their backend code after a skill has been approved by Amazon, opening the door to a range of malicious behavior. The problem stems from developers’ ability to register multiple intents during the certification process.

“Thus, an attacker can register dormant intents which are never triggered during the certification process to evade being flagged as suspicious,” said researchers. “However, after the certification process the attacker can change the backend code (e.g., change the dialogue to request for a specific information) to trigger dormant intents.”

In a real-world scenario, this could allow attackers to make code changes that convince a user to reveal sensitive information, such as bank account details.
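To illustrate why this slips past review, here is a hedged sketch of a hypothetical skill backend of the kind a developer hosts on their own server. Because Amazon certifies the interaction model and the behavior it observes, not this code, the string attached to the dormant intent could be rewritten after approval without triggering a fresh review. All names are invented for illustration.

```python
# Hypothetical backend for an already-certified skill. This code runs on the
# developer's own infrastructure, so edits made here after certification are
# not re-reviewed by Amazon.

RESPONSES = {
    # Intent exercised during certification; behaves as advertised.
    "GetFactIntent": "Honey never spoils.",
    # A registered but never-triggered ("dormant") intent. After approval,
    # a malicious developer could swap this string for a prompt that asks
    # the user for sensitive details, with no change to the certified model.
    "AccountHelpIntent": "For account questions, please visit our website.",
}

def lambda_handler(event, context):
    """Return an Alexa-formatted speech response for the requested intent."""
    intent = event.get("request", {}).get("intent", {}).get("name")
    text = RESPONSES.get(intent, "Sorry, I can't help with that.")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```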

Issues With Alexa Privacy Policy Model

Researchers said that this kind of sensitive-information request points to a larger, overarching issue – one that is conceptual rather than a matter of technical implementation.

Alexa skills can be configured to request permissions from users to access personal information from the Alexa account –  such as the user’s address or contact information. However, researchers said that they uncovered instances where skills bypass the permission APIs and directly request such information from end users.

Image credit: Researchers with North Carolina State University, the Ruhr-University Bochum and Google

Some skills, for instance, include a specific location name as part of the invocation phrase. Researchers pointed to local news provider Patch, which created 775 skills that each include a city name. Such skills could potentially be used to track a user’s whereabouts, they argued.

“One could argue that this is not an issue as users explicitly provide their information, however, there may be a disconnect between how developers and users perceive the permission model,” said researchers. “A user may not understand the difference between providing sensitive data through the permission APIs versus entering them verbally.”
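To make that difference concrete, the sketch below contrasts the two routes. The first function reflects my understanding of Alexa’s Device Address API (endpoint path and consent token handling) and should be read as illustrative; the second shows the bypass the researchers describe, where the skill simply asks the user to say the data aloud.

```python
import json
import urllib.request

def get_address_via_permission_api(event):
    """Sanctioned route: read the device address through Alexa's permission
    system. Works only if the user granted the address permission in the
    Alexa app; endpoint and token handling are illustrative."""
    system = event["context"]["System"]
    token = system["apiAccessToken"]
    device_id = system["device"]["deviceId"]
    url = f'{system["apiEndpoint"]}/v1/devices/{device_id}/settings/address'
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def ask_address_verbally():
    """The bypass: skip the permission API and prompt the user to speak the
    information aloud, which the permission model never sees."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "To personalize results, please tell me your street address.",
            },
            "shouldEndSession": False,
        },
    }
```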

In another privacy issue, researchers found that 23.3 percent of the skill privacy policies they examined did not fully disclose the data types associated with the permissions the skill requested. For instance, 33 percent of skills accessing a user’s full name did not disclose that data collection in their privacy policy.

Amazon Alexa: Previous Skills Hacks

Alexa skills have come under scrutiny in the past, starting in 2018 when researchers created a proof-of-concept “rogue skill” that could eavesdrop on Alexa users – and automatically transcribe every word said.

In 2019, researchers said that vulnerabilities stemming from skills could enable what they called a “Smart Spies” hack, which allows for eavesdropping, voice-phishing, or using people’s voice cues to determine passwords.

Amazon, for its part, did make a few modifications in 2019 to render this “Smart Spies” hack more difficult. However, researchers called the mitigations “comically ineffective,” saying that Amazon (and other voice-assistant makers, such as Google) need to focus on weeding out malicious skills from the get-go, rather than after they are already live.

Finally, as recently as August, researchers disclosed flaws in Alexa that could allow attackers to access personal data and install skills on Echo devices.

“Our analysis shows that while Amazon restricts access to user data for skills and has put forth a number of rules, there is still room for malicious actors to exploit or circumvent some of these rules,” said researchers this week. “This can enable an attacker to exploit the trust they have built with the system.”
