Amazon Alexa and Google Home are the most used personal assistants in the world right now. Their use is increasing very rapidly and research on security vulnerabilities involving these devices is providing some interesting hacks. While researching vulnerabilities in home smart assistants I came across an article about hackers using the Google Home and Amazon Alexa to eavesdrop on unsuspecting users and even perform phishing using the same hack. The hack is a form of third-party software that embeds malicious code into the home assistants. In this document I will go over exactly how malicious developers utilize this hack to eavesdrop on unsuspecting users, what Amazon and Google are doing to prevent this type of malicious behavior within their devices, and what are some preventive measure you can take to make sure you do not fall victim of malicious third party software for your smart home assistant.
Google and Amazon let developers make their own third-party actions or skills for their smart assistants. For instance, a developer could make a calculator action or skill for a smart assistant where the user can ask the smart assistant to add two plus three. There is a way for developers to design these skills or actions so that the assistant will keep listening even after the action or skill has completed its task. The security researchers have made skills and actions that simulate silence by inserting the character sequence of “�. ” (U+D801, dot, space), and this allowed the developed actions or skills to keep listening to conversations in the background when the user thinks that the assistant has finished listening (Ng, 2019). Both Google and Amazon assistants have an option to disclose your conversations with the assistants to improve the recognition of commands or phrases that a user might say to it. With the eavesdropping hack mentioned above where the third-party skills or actions can keep listening in the background, whoever the third-party developer is that injected this malicious hack into the assistant can collect conversations while the user would not even know that it was recording.
With this eavesdropping hack the developers have even worked out a way to do phishing for passwords. They would design their skills or actions for the assistant to speak to the user something like “An important security update is available for your device. Please say ‘start update’ followed by your password.” (Ng, 2019). Unsuspecting users that maybe have a little too much trust in their assistants might fall for this kind of phishing attack although Google and Amazon try to make it clear that you should never need to give your assistant your password. Another thing about Google and Amazon telling users to never give their password to their assistant is a conflict with one of the resolutions to the laser hacking which is to have a password to give to the assistant for it to be able to process sensitive commands like purchases or unlocking doors.
Google and Amazon both have a vetting process for developers who make applications for their smart assistants. They say that after reviewing the researchers’ evidence that they have found and removed malicious applications that are of concern. Even though both companies have their vetting process, it seems that the companies do not vet updates to already existing applications which would allow developers to make a simple application that abides by the standards. Once it is approved, they could actually make an update to the application injecting the malicious code thereby bypassing the original vetting process (Porter, 2019). Smart assistant makers like Google and Amazon say that they have a vetting process for not allowing specific skills or actions to be performed by their smart assistants, although security researchers have made these malicious apps that actually worked and it took time for Google and Amazon to remove them only after they were informed about the malicious behavior. The security researchers were from SRLabs who figured out this eavesdropping and phishing vulnerabilities and before making the information public the disclosed everything to Google and Amazon (Porter, 2019).
One way to prevent this kind of malicious behavior on your smart assistant is to not install third-party applications on your device. That seems a little too excessive but there are potentially many malicious applications out there and it may pose a risk to your smart assistant. Google and Amazon have settings that let you see what data has been used from your assistant and enable or disable certain actions or skills. Users should keep track of what specific actions or skills that their smart assistants are utilizing, and I would say that if your smart assistant asks or prompts you for any sensitive information that you should definitely not disclose it. There are many vulnerabilities in the smart assistants these days and they will have to be resolved by the makers of the devices. Although these hacks do not seem to have been used by any third-party developers other than the security researchers at SRLabs, the consumer should always be careful about the information that they disclose to any type of electronic medium. More and more people are using smart assistants because of the convenience that they provide for doing certain tasks and they need to be careful.
Works Cited
Ng, A. (2019, October 19). Alexa and Google Assistant fall victim to eavesdropping apps. Retrieved from cnet.com: https://www.cnet.com/news/alexa-and-google-voice-assistants-app-exploits-left-it-vulnerable-to-eavesdropping/
Porter, J. (2019, October 21). Security researchers expose new Alexa and Google Home vulnerability. Retrieved from theverge.com: https://www.theverge.com/2019/10/21/20924886/alexa-google-home-security-vulnerability-srlabs-phishing-eavesdropping