Smart Thermostats

While doing security research on IoT devices and home security, I decided to look into smart thermostats.  Although Google's Nest series of IoT devices seems to have some of the strongest security built in, it appears that the Google Nest Thermostat has a security vulnerability.  In this day and age, even big companies like Google, Amazon, and Samsung still ship vulnerabilities in their so-called "smart" devices.  Even though these companies sometimes overlook a vulnerability in their IoT devices, purchasing a device from a large company is probably the consumer's best bet for getting a well-tested product.  Google is no newcomer to the world of security, and it seems to do a very good job of releasing secure devices, but computer security is a never-ending battle, and sometimes hackers even get one over on Google.  I am a big fan of Google and what it does in the technology world, and I have many Google devices powering my home.  Fortunately, I do not own the Google Nest Thermostat, and I don't know if I will ever end up getting a smart thermostat anyway.  In this document I will go over what the Google Nest Thermostat is, how this vulnerability can be exploited, and what Google is doing, or the user can do, to avoid becoming a victim of it.

         The Google Nest Thermostat is a smart thermostat that is part of Google's line of IoT devices.  It features built-in Wi-Fi, a temperature sensor, a humidity sensor, and a 24-bit color display, and it is available in five different languages (Google, 2020).  Buying one of these smart thermostats will cost you about $249.00.  Google states on its website that the thermostat is compatible with 95% of heating and cooling systems.  The thermostat's operating system is Linux-based and has menus for switching from heating to cooling, accessing device settings, viewing energy history, and scheduling (Wikipedia, 2020).  Since its first appearance on the market it has received many security updates, and installing an update requires two-factor authentication.  The thermostat connects to all other Nest devices over Wi-Fi through a protocol called Weave.

         The vulnerability in the Google Nest Thermostat is exploited by connecting a flash drive to its USB port; by holding the power button for 10 seconds, a person can inject malicious software into the device (Wagenseil, 2014).  This malicious software can be of almost any type, most notably botnet or spying software.  Note that to exploit this vulnerability, the malicious user must have physical access to the device.  The hack was demonstrated by three security researchers, Yier Jin, Grant Hernandez, and Daniel Buentello, at the Black Hat conference on August 7, 2014 (Wagenseil, 2014).  The Nest Thermostat appears to have very good security when it comes to wireless communications, but the USB port is quite insecure.  Nest devices know a great deal of a user's private information, such as whether they are home, their postal code, and their usernames and passwords.  Because the device holds this kind of information, the vulnerability could be very dangerous if the wrong hacker gains access to it.  The malicious software injected into the thermostat can also be used to reach other devices on the network, for example with an ARP (Address Resolution Protocol) tool (Tilley, 2015).
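The ARP-based pivot mentioned above is easy to picture at the packet level.  The sketch below is an illustrative Python example, not the researchers' actual tool: it builds the raw Ethernet/ARP request frame that LAN-scanning tools broadcast for each address on a subnet, which is how a compromised thermostat could map out its network neighbors.  The addresses used are made up for illustration.

```python
import struct

def build_arp_request(src_mac: bytes, src_ip: bytes, target_ip: bytes) -> bytes:
    """Build a raw Ethernet frame carrying an ARP who-has request.

    An ARP scanner broadcasts one of these for every address on the
    subnet; each reply maps an IP address to a live host's MAC.
    """
    eth_header = struct.pack(
        "!6s6sH",
        b"\xff\xff\xff\xff\xff\xff",  # broadcast destination MAC
        src_mac,                      # attacker's (thermostat's) MAC
        0x0806,                       # EtherType: ARP
    )
    arp_payload = struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IPv4
        6, 4,         # hardware / protocol address lengths
        1,            # operation: 1 = request ("who has target_ip?")
        src_mac, src_ip,
        b"\x00" * 6,  # target MAC is unknown -- that is the question
        target_ip,
    )
    return eth_header + arp_payload

# Hypothetical addresses: probe 192.168.1.1 from 192.168.1.50.
frame = build_arp_request(b"\xaa" * 6,
                          bytes([192, 168, 1, 50]),
                          bytes([192, 168, 1, 1]))
```

Sending such frames requires a raw socket and root privileges, which is exactly what code injected over the USB port would have on the device.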

         The Nest company is trying to become the biggest name in connected home devices.  The company's founder says that they have a team in place to test for vulnerabilities and that they do extensive testing on all of their devices.  I am sure that after this vulnerability was discovered, Nest pushed an update of some sort to patch the device.  Nest has also stated that if a hacker has physical access to any device, whatever it is, they can potentially hack it.  To avoid becoming a victim of this type of attack, I think the user should always be aware of who they let inside their home.  Since physical access to the device is necessary to exploit this vulnerability, as with some other devices I have researched, knowing who is in your home matters a great deal.  The user should also make sure that all of their IoT devices are up to date with the latest security patches.  The reason the Nest Thermostat allows a person to connect a USB flash drive and load software onto it is so the firmware can be updated manually; I suspect Nest simply did not see this as a great threat.

         To conclude, I would like to state that I like the Google Nest company.  I think they are on a solid path to making the most widely used IoT home devices.  I like how all of their devices work together on the same network, and I think the company takes great pride in its security practices.  This hack may be an example of something inevitable in the process of designing an IoT device; maybe Nest is right that any device can be hacked if a malicious user has physical access to it.  I personally do not have a smart thermostat, because I only change the temperature on mine about twice a year, but I love the idea of a thermostat becoming smart: if you are away on vacation, you can control the temperature of your home from far away.

Works Cited

Google. (2020, April 9). Nest Thermostat Specifications. Retrieved from

Tilley, A. (2015, March 6). How Hackers Could Use A Nest Thermostat As An Entry Point Into Your Home. Retrieved from

Wagenseil, P. (2014, August 7). Nest Smart Thermostat Can Be Hacked to Spy on Owners. Retrieved from

Wikipedia. (2020, April 9). Nest Learning Thermostat. Retrieved from

Garage Door Opener

While researching security vulnerabilities in IoT devices, specifically those affecting home security, I decided to look into garage door openers.  I came across an article about a man who has developed a way to crack the code of an ordinary garage door opener.  This kind of hack could be disastrous for a homeowner, because it lets a malicious user gain access to any house that has an entrance through the garage.  It is not just one garage door opener that is susceptible to this hack; many are.  I personally would not want anyone to be able to open my garage door and get into my house, and armed with this research I am going to check the brand and model of the opener I have to make sure this attack cannot be performed on my garage door.  Many people own openers that are susceptible to this hack, and they may need to upgrade their garage door to avoid becoming victims.  There are actually two kinds of codes that can be cracked by this security researcher's methods; the first was broken earlier than the other because the latter is more difficult to hack.  These two kinds of codes are called "fixed code" and "rolling code," and many garage door openers use one of them to control access to your garage and possibly your house.  In this document I will go over which garage door openers are susceptible to this attack, how the attack is carried out, and what a user can do to avoid becoming a victim.

Many brands of garage door openers are susceptible to this hack, including Nortek, NorthShore, Chamberlain, Liftmaster, Stanley, Delta-3, and Moore-O-Matic (Kamkar).  The attack was designed by Samy Kamkar, who initially stated that only garage doors with fixed-code entry systems are vulnerable, although he later developed a hack for rolling-code entry systems as well, which he presented at DEF CON 23.  The difference between the two is that a fixed-code system transmits the same short code every time to open the door, while a rolling-code system changes the code every time the door is opened.  Other systems use hopping codes, Security+, or Intellicode, and Kamkar says these doors may be safer from this kind of hack, but they are not foolproof (Kamkar).  When asked for comment, companies like Nortek and Genie did not respond at first about the vulnerability, and then stated on their websites that they use rolling codes (Greenberg, 2015).  The owners of the Liftmaster brand of garage door openers stated that they have not used fixed-code systems since 1992, but Kamkar looked into a manual for a 2007 model and said he found that it uses a fixed-code system (Greenberg, 2015).

         Samy Kamkar has posted the code he used to crack both the fixed-code and rolling-code systems, but he intentionally sabotaged the published version so malicious users would have a tough time getting it to work.  To crack the codes, Kamkar uses brute force to find the right code for each door.  The doors he tested use at most 12 bits, which gives 4,096 possible combinations (Greenberg, 2015).  A straightforward brute-force attack on these garage doors would take about 29 minutes, but Kamkar improved it by removing the wait periods between guesses, removing redundant transmissions, and adding an optimization that transmits overlapping codes (Greenberg, 2015).  With all of these optimizations, he reduced the brute-force time from 29 minutes down to 8 seconds (Kamkar).  Kamkar says you need to be an expert in RF signals and microcontrollers to be able to fix the code he posted on GitHub and use it.  Brute-forcing a code usually takes a very long time, because the algorithm tries every possible combination; I think it is quite a feat that Kamkar got the search down from the original 29 minutes to only 8 seconds.
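The "overlapping codes" optimization is the biggest win, and the standard way to achieve it is a De Bruijn sequence: because the receiver's shift register checks every window of the incoming bit stream, a single sequence containing every 12-bit code as an overlapping substring replaces 4,096 separate transmissions.  The sketch below is a hedged illustration of that idea rather than Kamkar's actual code, using the classic recursive (FKM) construction:

```python
def de_bruijn(n: int, k: int = 2) -> list:
    """Generate a De Bruijn sequence B(k, n): a cyclic sequence in which
    every possible n-symbol code over a k-symbol alphabet appears exactly
    once as a substring (standard recursive FKM algorithm)."""
    a = [0] * (k * n)
    seq = []

    def db(t: int, p: int) -> None:
        if t > n:
            if n % p == 0:
                seq.extend(a[1 : p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# Every 12-bit fixed code, packed into one 4096-bit cyclic sequence.
bits = de_bruijn(12)
# Append the first 11 bits so the cyclic wrap-around windows are covered
# when the sequence is transmitted linearly.
stream = bits + bits[:11]
```

Linearized, the stream is 4,096 + 11 bits instead of the 12 × 4,096 = 49,152 bits a naive guess-by-guess attack would transmit, roughly a twelvefold reduction before the timing optimizations are even counted.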

         Preventing this kind of attack would require the user to completely replace their garage door opener.  Since the time of Samy Kamkar's discovery, garage door opener manufacturers should have made design decisions that build stronger security into their products.  This is not simply an example of poor security design by garage door companies, because these openers were on the market for a very long time before the ability to hack them emerged; it is an example of how fast technology is evolving and how the need for security implementation grows with it.  The vulnerable openers were cracked because they use short codes; perhaps companies will adopt schemes built on cryptographic primitives, such as keyed hashes, instead.

         To conclude, I would like to state that garage door openers are just one of the many IoT devices in a person's home today, and the number of IoT devices in the home is increasing day by day.  More and more people are turning to the convenience of IoT devices.  Security vulnerabilities are common in home IoT devices, and companies need to ramp up their security testing.  I think it is clear that companies that allow these vulnerabilities into their devices are going to lose business, and some may even go bankrupt as a result.  If a company is going to build an IoT device for use in a person's home, I think there should be security testing guidelines and a certification for the product, so the end user knows exactly what to expect from a security standpoint.

Works Cited

Greenberg, A. (2015, June 4). This Hacked Kids’ Toy Opens Garage Doors in Seconds. Retrieved from

Kamkar, S. (n.d.). Open Sesame. Retrieved from

Smart Stove Hack That Lets Hackers Turn On Your Stove

While researching security vulnerabilities in IoT devices focusing on home security, smart ovens came to mind.  I thought: what could be worse than a hacker getting into a person's smart oven and burning their house down?  It turns out that exactly this kind of hack exists for a smart oven.  Security in IoT devices is an important aspect of the safety of a user's home, and I actually couldn't believe there is a hack that lets a malicious user turn on an oven.  Say a user left some flammable dish towels on their range and went off to work, only to come home to a house burnt to a crisp because a hacker turned on their stove while they were gone.  As I research security vulnerabilities in IoT devices, it is becoming more and more clear that the companies producing these smart home gadgets are severely lacking in security testing.  In this document I will go over which oven has this security vulnerability, how the hack is executed, and what a user can do to keep it from burning down their house.

         The smart oven with this security vulnerability is the AGA Range Cooker, and the vulnerability was discovered by Pen Test Partners.  These range cookers are very expensive, so you would think the company would have done extensive security testing on its appliances, but that does not seem to be the case.  Pen Test Partners say they tried to disclose the vulnerability to AGA through Twitter, and AGA blocked them; they finally got through via AGA's technical support (Laidlaw, 2017).  It was important for Pen Test Partners to get in contact with AGA before disclosing the vulnerability, so something could be done about it before the information got into the wrong hands.  A flaw that allows a malicious user to burn your house down could really tarnish a company's name, yet AGA seems very reluctant to fix it.  This oven draws a maximum of 30 amps, which is more than enough to start a fire.  Owners of this type of oven should know that if they have the latest model with the remote-control option, they could become victims of this kind of attack (Leyden, 2017).

         The hack on this smart oven is executed through unauthenticated SMS messages of the kind sent by the oven's mobile application on the user's phone.  The oven contains a SIM card that costs the user around five dollars a month.  The user can send a command to turn on all of the burners at once, and since the SMS from the mobile application is not authenticated, a malicious user can perform an enumeration attack.  Enumeration is the process of establishing an active connection to a target machine to discover potential attacks (Chakravartula, 2018).  Once the malicious user has slowly but effectively enumerated their way to the smart oven's phone number, they can simply send it an SMS command, which would look something like "WebtextPass,35257,Baking Oven On" (Leyden, 2017).  The enumeration could take a while to execute, because it is essentially a brute-force search for the oven's phone number.  Security testers have criticized AGA, saying that a Wi-Fi interface would have been cheaper and safer than giving every device a SIM card with its own phone number.  It amazes me that AGA designed a device with a SIM card and a phone number but completely neglected security testing.  I don't even know if a patch is possible for a device controlled this way; maybe that is why AGA is reluctant to provide a fix.
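To make the enumeration step concrete, here is a toy Python simulation.  The probe function, the number range, and the prefix are illustrative assumptions, not AGA's real numbering plan; only the command string is quoted from the article.

```python
# Command format quoted in the article (Leyden, 2017).
COMMAND = "WebtextPass,35257,Baking Oven On"

def probe(number: str, oven_number: str) -> bool:
    """Stand-in for sending one SMS to a candidate number.  A real
    attacker would instead watch for a delivery receipt or an
    oven-side acknowledgement."""
    return number == oven_number

def enumerate_oven(prefix: str, oven_number: str):
    """Walk a (tiny, illustrative) block of numbers under one prefix,
    probing each candidate until the oven's SIM responds."""
    for suffix in range(10000):
        candidate = f"{prefix}{suffix:04d}"
        if probe(candidate, oven_number):
            return candidate  # now COMMAND can be sent to this number
    return None

# Both numbers below are made up for the simulation.
found = enumerate_oven("0885551", "08855512345")
```

Even this toy search tries up to 10,000 candidates for one prefix, which is why the article describes enumeration as slow but effective: the SMS gateway imposes no authentication, only time.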

         With this vulnerability disclosed to the public, a consumer should be very cautious when buying a smart appliance from AGA.  The company probably lost a considerable amount of business because of this problem, and it is their own fault.  I'm not sure there is anything an owner of one of these devices can do to avoid this attack except throw away their smart stove and buy a new one from a different manufacturer.  An owner could remove the SIM card and only operate the stove the old-fashioned way, or cancel the remote-access option, since they are paying a monthly plan for it anyway.  If AGA does not come up with a patch, the owner has little choice but to disable remote control, or risk their house being burned down by a malicious arsonist with too much time on their hands.

         To conclude, I would like to state that security testing should be a major part of any IoT device maker's software engineering process, especially for companies whose devices operate inside a user's home.  This smart oven is just another of the many mistakes made by companies producing smart devices.  Given the outrageous price tag AGA puts on its smart ovens, I'm sure consumers would appreciate some sort of security certification to go along with the device, so they can have some assurance that the product shipped with a substantial amount of security vulnerability testing behind it.

Works Cited

Chakravartula, R. (2018, February 28). What is Enumeration? Retrieved from


Leyden, J. (2017, April 13). Half-baked security: Hackers can hijack your smart Aga oven 'with a text message'. Retrieved from

Smart Lightbulb Vulnerabilities

In the past year, people spent about eight billion dollars on smart lightbulbs to conveniently illuminate their homes.  Over the next year, that spending is estimated to jump to about 28 billion dollars, because more and more people are turning to smart devices for the convenience they provide (Min, 2019).  Smart lightbulbs seem simple as far as functionality goes: you can turn a light on, turn it off, and with today's smart lightbulbs even have them change brightness or color according to the music or video playing in the house, blending in with your multimedia.  But little do consumers know that there are flaws in the security layer of their smart lightbulbs.  Some smart lightbulbs, specifically the ones that change brightness and color along with playing media, can let hackers infer the actual media being played, whether audio or video.  For some smart lights with an infrared function, hackers have shown that a covert data exfiltration attack is possible (Maiti, 2019).  In this document I will first go into detail about the video-audio inference threat, then the covert data exfiltration threat, and conclude with a section on preventive measures users can take to avoid falling victim to these threats.

         The video and audio inference threat in smart lightbulbs lets a malicious user figure out what song or video a user is playing, by exploiting the lightbulb's ability to change brightness according to the media that is playing.  This is a big problem because a law, the US Video Privacy Protection Act, exists precisely to prevent a user's media information from being collected this way, since it can reveal personal interests and preferences (Maiti, 2019).  While this threat is actually difficult to set up and exploit, it is still possible.  The smart lightbulbs examined for this threat change brightness and hue according to the media playing in conjunction with them.  It turns out that the audio waveform and the bulb's brightness fluctuations have similar graphs (Maiti, 2019), so a malicious user who has a library of songs to compare the light fluctuations against can infer which media is playing.  To achieve this inference, the malicious user needs a luminance meter and a library of media to reference; to measure the differences when the bulb's hue option is used, an RGB sensor is needed instead.  The researchers tested audio in intervals from 15 seconds to 120 seconds, and as you would expect, the accuracy of inferring the media improves the longer the observation.  The same holds for video, where the intervals ran from 60 seconds to 360 seconds (Maiti, 2019).  Inferring a user's audio and video consumption is a really dangerous threat; because a law protects users' privacy when it comes to media consumption, I think this threat is potentially very serious.
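The matching step the researchers describe, comparing observed brightness fluctuations against a library of known media, can be sketched as a simple normalized-correlation classifier.  The "songs" below are synthetic stand-in waveforms, not the paper's dataset:

```python
import math
import random

def normalize(xs):
    """Center a trace and scale it to unit length, so similarity
    compares waveform shape rather than absolute brightness."""
    mean = sum(xs) / len(xs)
    centered = [x - mean for x in xs]
    norm = math.sqrt(sum(c * c for c in centered)) or 1.0
    return [c / norm for c in centered]

def similarity(a, b):
    """Correlation between two equal-length traces (1.0 = identical shape)."""
    return sum(x * y for x, y in zip(normalize(a), normalize(b)))

def infer_song(observed, library):
    """Return the library title whose reference trace best matches the
    observed brightness fluctuations."""
    return max(library, key=lambda title: similarity(observed, library[title]))

# Synthetic library: each "song" drives the bulb with a different waveform.
t = [i / 10 for i in range(300)]
library = {
    "song_a": [math.sin(x) for x in t],
    "song_b": [math.sin(2.7 * x) for x in t],
    "song_c": [math.sin(0.9 * x + 1.0) for x in t],
}

# The luminance meter sees song_b's waveform plus measurement noise.
rng = random.Random(0)
observed = [v + rng.gauss(0, 0.2) for v in library["song_b"]]
guess = infer_song(observed, library)
```

The longer the observed trace, the more the noise averages out, which matches the researchers' finding that longer observation windows give higher inference accuracy.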

         The covert data exfiltration threat is present in smart lightbulbs because, in theory, any light can transmit data.  The research says this threat applies to smart lights that have no hub connecting them, or a hub without permission controls.  Using this threat, a malicious user can pull data out of an unsuspecting user's private network.  The researchers tested exfiltration using the infrared light from a smart lightbulb, sending strings and images out of the network; by reading the bulb's different power levels, they were able to recover binary data.

Text at 15 meters

Original Text: A cup of sugar makes sweet fudge 

Reconstructed Text: A buq pf!sugbr m`kessuees hudfe 

As you can see, this is a very dangerous threat.  With the right setup, an attacker can use it to pull sensitive information out of a user's private network.
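A toy model helps explain why the 15-meter reconstruction comes back slightly garbled: the text is serialized to bits, each bit is emitted as a high or low light level, and random bit flips at the receiver stand in for sensor noise at range.  This is an illustrative sketch, not the researchers' actual modulation scheme:

```python
import random

def text_to_bits(text):
    """Serialize ASCII text to a list of bits, most significant bit first."""
    return [(ord(c) >> i) & 1 for c in text for i in range(7, -1, -1)]

def bits_to_text(bits):
    """Reassemble received bits into characters, eight at a time."""
    chars = []
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i : i + 8]:
            byte = (byte << 1) | b
        chars.append(chr(byte))
    return "".join(chars)

def noisy_channel(bits, flip_prob, rng):
    """Flip each transmitted bit with probability flip_prob, modeling a
    distant receiver misreading the bulb's power level."""
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]

message = "A cup of sugar makes sweet fudge"
rng = random.Random(42)
received = bits_to_text(noisy_channel(text_to_bits(message), 0.02, rng))
```

Single-bit errors corrupt individual characters while leaving the rest readable, which is exactly the pattern in the reconstructed text above ("sugbr", "hudfe").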

         These threats in smart lightbulbs are actually very difficult to exploit, but given the right tools and proximity, a malicious user may be able to use them to obtain personal information.  First, I would like to note proximity: both attacks depend on being close enough to observe or receive the light, so be careful who you let into your private network or into your home.  If a malicious user is too far away from your devices, the information they capture may be too degraded to be useful by the time they receive it.  Both of these threats can be carried out through a window, so blackout curtains that do not let any light through could be a good preventive measure.  You should also buy smart lightbulbs that connect to a hub with permission controls; the research says that bulbs connected to such a hub are not susceptible to the covert exfiltration threat.

         Although smart lightbulbs are very convenient, to the extent that you no longer have to get up and walk to a switch to turn them off and on, and they provide extra features like musical or video lighting, they are prone to security vulnerabilities.  As I found out doing this research, even though lightbulbs have relatively simple electronics, they evidently give malicious users multiple access points to a user's sensitive data.  I don't know whether a better software engineering practice could prevent these types of threats, but there has to be some remedy.  Security researchers are finding security holes about as fast as new devices are being released.  You cannot even trust a lightbulb these days.

Works Cited

Maiti, A. (2019, September). Light Ears: Information Leakage via Smart Lights. Retrieved from

Min, S. (2019, October 24). Are “smart” light bulbs a security risk? Retrieved from

Smart Lock Vulnerability

While researching home security issues with IoT devices, I came across an article about a smart lock, used in many homes, with a major security vulnerability that gives hackers access to your house.  On a scale from one to ten, where one is a minimal security threat and ten is a major one, I would say a front-door lock that is essentially useless rates a ten.  Researching home security in IoT devices is interesting because more and more people are turning to smart devices to power their homes.  New IoT devices for home use come out all the time, but many of them have security flaws or vulnerabilities that make them a threat to the safety of your home.  In this document I will go over what this smart lock is, how hackers can bypass its locking mechanism, and what is being done to keep this vulnerability from letting the bad guys into your home.

         The smart lock with this vulnerability is made by the company KeyWe.  It is meant to secure a user's front door or main entry and can be locked or unlocked physically, through the application that comes with the lock, or via NFC on an armband (Marciniak, 2019).  The lock encrypts the digital keys it transmits between the physical device and the application the user controls it from.  There is even a guest-key option, where the user can grant a guest access to the lock with the push of a button in the application.  All in all, this smart lock seems like a nice device to have in your house and offers great convenience in managing the security of your home.  The problem is that a hacker can completely bypass all of the security measures of the device and application and gain access to the user's house.

         A Finland-based security company named F-Secure discovered the vulnerability, which lets hackers and other unauthorized users into your house by sniffing the packets sent between the lock and the application.  The problem is not the encryption of the keys but the hacker's ability to obtain the key before it is encrypted (Ng, 2019).  F-Secure Labs has a web page for this specific hack that shows a teardown of the device, names all of the components, and explains how to execute the attack, and it looks all too easy (Marciniak, 2019).  Using a tool named Frida, the security researchers could intercept all of the messages, along with information such as the name of the function being executed and the direction of the transmission, e.g., from lock to application or application to lock.  It turns out that to intercept the messages between the lock and its application, all you need is a piece of hardware with Bluetooth capability and the commonly used Wireshark application (Marciniak, 2019).  The hack is easy to execute with the appropriate equipment, which is relatively inexpensive and available to anyone.  The smart lock can be opened by anyone who really wants to get through the door it is attached to, so what is KeyWe doing about it?
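F-Secure's core finding was that an eavesdropper can recover the key material from data it can passively capture.  The exact derivation is KeyWe-specific, so the snippet below is only an illustration of the general anti-pattern, not KeyWe's actual algorithm: any "secret" computed solely from a value broadcast in the clear (here, a hypothetical Bluetooth address of the kind visible in Wireshark) is no secret at all.

```python
import hashlib

def derive_key(device_address: bytes) -> bytes:
    """Anti-pattern: the lock and its app both compute the session key
    from the lock's publicly visible address -- so a sniffer who
    captures that address can compute the very same key."""
    return hashlib.sha256(b"lock-key-v1" + device_address).digest()[:16]

# Hypothetical BLE address an attacker could read from a packet capture.
sniffed_address = bytes.fromhex("f4b85e000102")

key_on_lock = derive_key(sniffed_address)      # what the lock uses
key_from_sniffer = derive_key(sniffed_address)  # what the attacker gets
```

The fix is standard: session keys must incorporate a secret that never crosses the air unencrypted, for example via an authenticated key-exchange protocol, rather than being derivable from observable traffic.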

         According to the research I've done on this, the security engineers at F-Secure Labs who discovered this hack disclosed it to KeyWe as soon as they found it.  Since the disclosure, the company has said that it resolved the problem.  The truth is that the flaw cannot be patched in the field: after pressure from security researchers, KeyWe advised users that the vulnerability cannot be fixed and that they should remove the device and replace it with a newer smart lock, which the company says is now up to date.  KeyWe says it takes the security of its devices very seriously and that customer security is its top priority (Ng, 2019).  Amazon has been notified about the flaw and declined to say whether it will keep selling the product on its site.  Of all the security vulnerabilities I have read about so far, this is a major one; there is not even a fix, as users are simply advised to remove the device from their homes.  KeyWe will most certainly lose many customers because of this and its lax security practices.  Researchers at F-Secure Labs say the hack was easy to figure out, which shows a major lack of security testing by KeyWe on its products.

         Having a door lock that grants entry to anyone who has a key, whether obtained legitimately or not, is a major failure in the world of cyber security.  Plenty of people bought this lock only to find out some time later that anyone can get through it, including burglars.  This shows that companies need to focus much more on the security of their devices, especially devices that operate in their customers' homes.  Computer security has been growing as an industry lately precisely because of the flaws that security researchers discover every day; the sheer number of vulnerabilities in IoT devices is one of the main drivers of that surge.  KeyWe should be ashamed of its software development process, especially its testing, for letting such an obvious vulnerability into its smart lock.  I personally will remember the name KeyWe, and I will definitely never purchase any of their products.

Works Cited

Marciniak, K. (2019, December 11). Digital lockpicking – stealing keys to the kingdom. Retrieved from

Ng, A. (2019, December 11). Smart lock has a security vulnerability that leaves homes open for attacks. Retrieved from

The Botnet Chamois in Mobile Devices

While researching home security vulnerabilities in IoT devices, I started to think about the different kinds of hacks and malicious capabilities that can threaten mobile and IoT devices.  A botnet could pose a major threat to home security through the many devices throughout a house.  Botnets are capable of many different types of malicious attacks; from collecting sensitive information to mounting denial-of-service attacks, they are a major security threat that needs to be addressed.  I heard about a botnet named Chamois that has been around for a while and keeps getting updated and redistributed among mobile and IoT devices.  I decided to look into this specific botnet because I think it poses a major security risk in the areas of mobile, IoT, and home security.  In this document I will go over what the Chamois botnet is, how it infects devices, and what is being done to make sure it cannot keep spreading to mobile devices.

         Chamois was a botnet whose infected devices were controlled by a remote command-and-control server.  Once on a device, it would serve malicious ads and direct users to premium SMS scams.  Chamois was so resilient, evaded detection so well, and evolved so rapidly that it took Google years to finally eradicate it from Android devices (Rashid, 2019).  One way Chamois was distributed was through an advertising software development kit that developers thought was legitimate; while developers unknowingly built this malicious code into users' devices, Chamois was also presented to device manufacturers as a mobile payments solution (Rashid, 2019).  With the Chamois botnet intruding into users' homes, the unfortunate owners of infected devices were robbed of their money if they fell for the SMS scams.  Some scams posed as donation drives, and users did not even know they had been scammed until they got their phone bills (Newman, 2019).  Botnets pose a major risk to home security because a botnet literally breaks into your house through your mobile and IoT devices and tries to steal your money.

         Once Chamois could be detected, it evolved from four stages to six, becoming able to avoid anti-virus and malicious-code detection software (Rashid, 2019).  Many applications on the Google Play Store were infected with this botnet, and Google's security engineers had a very hard time trying to get rid of it.  Every time they devised a new barrier to detect and remove the botnet, its makers would figure out a way around it (Rashid, 2019).  Chamois infected about 21 million devices, and over the years Google has whittled that number down to around two million (Newman, 2019).  From what I have read about this botnet, it seems to me that it could still be sitting on devices today, waiting for a chance to strike.  Since it was disguised as a software development kit, there could be many applications that have not yet been found to contain it.  A botnet this powerful could even evolve to collect sensitive information about unsuspecting users.  It evaded Google's best security engineers for years, which means the developers of Chamois could have evolved it in many directions, perhaps even making the security engineers believe they had defeated it as yet another way to evade detection.

         To avoid becoming a victim of a botnet like Chamois, people will largely have to rely on security researchers to detect and remove it from mobile devices.  The kind of scam it runs, such as premium SMS, can be avoided by simply never using SMS to transfer money or credentials.  Sensitive information should never be shared over insecure digital channels, and premium SMS is about as insecure a channel as any.  The articles I read say that Google has defeated this botnet, but I suspect it could still be circulating.  Security researchers reportedly reduced the number of infected devices from about 20 million down to 2 million, but that still leaves 2 million infected devices, which gives the Chamois makers time to evolve and redistribute an even more dangerous version of the botnet with more malicious capabilities.  I think this botnet remains a threat to mobile and home security all over the world, and I don't know whether it will ever be completely eradicated.

         To keep homes safe from these kinds of botnets, users will have to learn the types of malicious scams they run.  Education might be the only safe bet for keeping users from falling victim to these attacks.  If something seems fishy, a user should assume it is some type of scam.  If you click an ad and are redirected to a sketchy-looking site requesting sensitive information, leave the site or even turn off your device, and definitely delete the application that redirected you there.  Botnets may always pose a threat to unsuspecting users, and users need to be educated so they can avoid the situations a malicious attacker may create.

Works Cited

Newman, L. H. (2019, April 19). How Android Fought an Epic Botnet—and Won. Retrieved from

Rashid, F. Y. (2019, April 9). Chamois: The Big Botnet You Didn't Hear About. Retrieved from

Eavesdropping and Phishing Smart Assistants

Amazon Alexa and Google Home are the most used personal assistants in the world right now.  Their use is increasing rapidly, and research into security vulnerabilities in these devices is turning up some interesting hacks.  While researching vulnerabilities in smart home assistants, I came across an article about hackers using Google Home and Amazon Alexa to eavesdrop on unsuspecting users, and even to phish them, using the same hack.  The hack takes the form of third-party software that embeds malicious code into the home assistants.  In this document I will go over exactly how malicious developers use this hack to eavesdrop on unsuspecting users, what Amazon and Google are doing to prevent this kind of behavior on their devices, and some preventive measures you can take to make sure you do not fall victim to malicious third-party software on your smart home assistant.

         Google and Amazon let developers make their own third-party "actions" or "skills" for their smart assistants.  For instance, a developer could make a calculator action or skill that lets the user ask the assistant to add two plus three.  There is a way for developers to design these skills or actions so that the assistant keeps listening even after the skill has completed its task.  Security researchers built skills and actions that simulate silence by inserting the character sequence "�. " (U+D801, dot, space), which allowed the skill to keep listening to conversations in the background after the user thinks the assistant has finished (Ng, 2019).  Both assistants also offer an option to share your conversations with the vendor to improve recognition of commands and phrases a user might say.  Combined with the eavesdropping hack above, whoever injected the malicious code can collect conversations while the user does not even know the device is recording.
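To make the trick concrete, here is a minimal sketch of how a malicious skill's spoken reply might be padded with the unpronounceable sequence the researchers used. The function name and the "one repetition per stretch of silence" structure are my own illustrative assumptions, not any real Alexa or Google Assistant API.

```python
# Hypothetical sketch of the "fake silence" trick described by SRLabs.
# A malicious skill pads its spoken reply with an unpronounceable Unicode
# sequence so the assistant sounds finished while the session (and the
# microphone) stays open.

# U+D801 is an unpaired surrogate the text-to-speech engine cannot
# pronounce; followed by ". " it renders as silence.
UNSPEAKABLE = "\ud801. "

def build_eavesdropping_reply(visible_reply: str, repetitions: int = 30) -> str:
    """Append 'silent' filler so the skill sounds done but keeps listening."""
    return visible_reply + UNSPEAKABLE * repetitions

reply = build_eavesdropping_reply("Goodbye!", repetitions=5)
print(repr(reply))  # the surrogate shows up escaped as \ud801
```

The user hears only "Goodbye!", but the skill's session remains open for as long as the filler keeps the text-to-speech engine "speaking" silence.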

         Building on this eavesdropping hack, the researchers even worked out a way to phish for passwords.  They designed skills or actions that have the assistant say something like, "An important security update is available for your device. Please say 'start update' followed by your password." (Ng, 2019).  Unsuspecting users who place a little too much trust in their assistants might fall for this kind of phishing attack, although Google and Amazon try to make it clear that you should never need to give your assistant your password.  Ironically, that advice conflicts with one proposed remedy for the laser hack discussed later: requiring a spoken password before the assistant will process sensitive commands like purchases or unlocking doors.
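A review pipeline could in principle screen skill responses for prompts like this. The sketch below is a simple keyword heuristic of my own devising; it is not Google's or Amazon's actual vetting logic, and the pattern list is an assumption.

```python
import re

# Illustrative vetting heuristic (not any vendor's real review logic):
# flag skill responses that resemble the fake "security update" phishing
# prompt SRLabs demonstrated.
SUSPICIOUS_PATTERNS = [
    r"\bpassword\b",
    r"\bsecurity update\b",
    r"\bverify your (account|identity)\b",
    r"\bcredit card\b",
]

def looks_like_phishing(response_text: str) -> bool:
    """Return True if a skill's spoken response appears to ask for credentials."""
    text = response_text.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)

print(looks_like_phishing(
    "An important security update is available for your device. "
    "Please say 'start update' followed by your password."))  # True
print(looks_like_phishing("Two plus three is five."))  # False
```

A real vetting system would of course need far more than keyword matching, and, as the next paragraph notes, it would also have to re-check every update, not just the initial submission.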

         Google and Amazon both have a vetting process for developers who make applications for their smart assistants, and both say that after reviewing the researchers' evidence they found and removed the malicious applications of concern.  However, the companies do not appear to vet updates to already-approved applications.  A developer could submit a simple application that abides by the standards and, once it is approved, push an update that injects the malicious code, thereby bypassing the original vetting process (Porter, 2019).  Despite the vetting processes both companies describe, the security researchers' malicious apps actually worked, and it took Google and Amazon time to remove them, and only after they were informed of the malicious behavior.  The researchers, from SRLabs, figured out these eavesdropping and phishing vulnerabilities and disclosed everything to Google and Amazon before making the information public (Porter, 2019).

         One way to prevent this kind of malicious behavior on your smart assistant is to not install third-party applications at all.  That may seem excessive, but there are potentially many malicious applications out there, and each one poses a risk to your smart assistant.  Google and Amazon have settings that let you see what data your assistant has used and enable or disable specific actions or skills.  Users should keep track of which actions or skills their assistants are using, and if your assistant ever asks or prompts you for sensitive information, you should definitely not disclose it.  There are many vulnerabilities in today's smart assistants, and they will have to be resolved by the device makers.  Although these hacks do not seem to have been used by anyone other than the security researchers at SRLabs, consumers should always be careful about the information they disclose to any electronic medium.  More and more people are adopting smart assistants for the convenience they provide for certain tasks, and they need to be careful.

Works Cited

Ng, A. (2019, October 19). Alexa and Google Assistant fall victim to eavesdropping apps. Retrieved from

Porter, J. (2019, October 21). Security researchers expose new Alexa and Google Home vulnerability. Retrieved from

Laser Hacking Smart Assistants

In the news lately I have seen some articles, suggested to me by Google, on the topic of lasers being able to hack into IOT devices like Google Home, Amazon Alexa, the iPad, and pretty much anything with a microphone.  I decided to look into this topic because the security of IOT and mobile devices is a very important area of computer security.  According to the articles I have read, it has been verified that lasers can send silent voice commands to devices with microphones.  Some devices are more susceptible than others in terms of the range from which a laser can actually work.  In this document I will go into some detail about how a laser can send these silent voice commands, some statistics on the laser's effect on different devices, and some possible remedies for the hack.

         All of the affected devices use a type of microphone called a MEMS (micro-electro-mechanical systems) microphone.  Researchers found a gap between the physics and the specifications of this type of microphone that allows light to be recognized as sound: by modulating the amplitude of the laser light, sound can be injected into the microphone (Takeshi Sugawara, 2019).  At first I wondered how a beam of light could possibly inject voice commands into a device with a microphone.  Evidently, when the laser's intensity is modulated at a precise frequency and aimed at a microphone, the light perturbs the microphone's membrane at that same frequency, producing the same electrical signal that the corresponding sound would, which the device then receives and interprets.  This was tested on many devices with microphones, and every one was susceptible to the laser.  The discovery that a laser can manipulate a microphone's membrane into producing electrical signals was made by a cybersecurity researcher named Takeshi Sugawara, who brought it to the attention of a professor at the University of Michigan; they have been experimenting with it since (Greenberg, 2019).
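The modulation itself is simple to picture. Here is a toy numerical sketch of the amplitude-modulation principle: the laser's power is varied around a constant level by the audio waveform of the command. The 60 mW figure comes from the articles; the modulation depth and the 1 kHz test tone are illustrative assumptions of mine, not values from the research.

```python
import math

# Toy model of the "light commands" principle: the attacker amplitude-
# modulates the laser's power with the audio waveform, and the MEMS
# membrane responds to the varying light as if it were sound.

def laser_power_mw(audio_sample: float, dc_power_mw: float = 60.0,
                   modulation_depth: float = 0.8) -> float:
    """Instantaneous laser power for one audio sample in [-1, 1]."""
    return dc_power_mw * (1.0 + modulation_depth * audio_sample)

# One cycle of a 1 kHz tone (standing in for a voice command) sampled
# at 48 kHz.
sample_rate, tone_hz = 48_000, 1_000
samples = [math.sin(2 * math.pi * tone_hz * n / sample_rate)
           for n in range(sample_rate // tone_hz)]
powers = [laser_power_mw(s) for s in samples]

# The beam never switches fully off and stays within the diode's range;
# the flicker is far too fast for the eye, yet "audible" to the mic.
print(round(min(powers), 1), round(max(powers), 1))  # 12.0 108.0
```

The membrane tracks this power curve, so the microphone outputs the same electrical signal a real 1 kHz tone would produce.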

         Some of the devices the researchers tested include the Amazon Echo, Apple HomePod, iPhone XR, Google Pixel 2, Samsung Galaxy S9, and Facebook Portal Mini (Iyer, 2019).  Some devices running Siri and other AI assistants were susceptible from as far as 360 feet, and the devices are susceptible even through windows.  Mobile phones were much harder to hack with the lasers, but it was still possible, with a range of about 33 feet for the iPhone and around 16 feet for Android phones.  All of these tests were done with a 60-milliwatt laser.  The researchers also tested the devices with a 5-milliwatt laser, the equivalent of a cheap laser pointer anyone can buy.  From 361 feet away with the 5-milliwatt laser, most of the tests failed, except against the Google Home and a first-generation Echo Plus (Greenberg, 2019).

         As for the problems this newfound hack may cause, I do not think it is something people should be panicking over, even though the attack is very stealthy: the lasers are silent while they produce what the device hears as voice commands.  Google, Apple, and some other device manufacturers say they are looking into the research closely.  One day the problem could be fixed by using two microphones positioned so that a laser cannot hit both at the same time.  Another fix could be a password known only to the device's users, so that sensitive commands like purchasing items would only execute when the password is given.  Other suggested remedies include placing your assistant away from windows, since the laser hack works through glass and could potentially let a hacker unlock your door or garage.  As long as the microphone of your assistant is not visible from a window, it should be fine.
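The two-microphone idea can be sketched in a few lines. This is hypothetical acceptance logic of my own, not any vendor's implementation: a laser spot illuminates only one microphone port at a time, while genuine speech reaches both microphones at comparable levels, so a command heard on only one mic gets rejected. The level threshold is an illustrative assumption.

```python
# Sketch of the proposed two-microphone defence (hypothetical logic):
# reject any "voice command" that one microphone heard loudly while the
# other heard nothing, since a laser cannot hit both ports at once.

def accept_voice_command(mic_a_level: float, mic_b_level: float,
                         max_ratio: float = 3.0) -> bool:
    """Accept only if both microphones registered the command at similar levels."""
    if mic_a_level <= 0.0 or mic_b_level <= 0.0:
        return False  # one mic heard nothing: likely a laser injection
    ratio = max(mic_a_level, mic_b_level) / min(mic_a_level, mic_b_level)
    return ratio <= max_ratio

print(accept_voice_command(0.9, 0.7))  # normal speech reaches both mics: True
print(accept_voice_command(0.9, 0.0))  # signal on one mic only: False
```

A real implementation would compare the signals more carefully (timing and spectrum, not just level), but the principle is the same: demand physical evidence that the sound arrived through the air.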

         It seems like a lot of work to actually set up and execute a laser hack on any device.  I do not think many people will use this hack, simply because of the complexity of setting it up; turning a voice command into a light signal seems very complicated.  Luckily, the hack was discovered by professional cybersecurity researchers, and they are working out all of its details so that it cannot be used maliciously.  They disclosed all of their research so that Google, Apple, and other major manufacturers of the latest IOT devices can work on preventing these security vulnerabilities.

         To conclude, I think this is a very sophisticated hack.  It is amazing that all of the IOT device designers and engineers overlooked this attack vector entirely.  IOT device makers will have to rethink their designs and apply preventive measures for this security vulnerability.  It is not just one company making devices that are susceptible to this laser hack; it is all of them.  Whether through teamwork or whatever other measures are necessary, these companies need to put their heads together and work out the problem at hand.

Works Cited

Greenberg, A. (2019, November 4). Retrieved from

Iyer, K. (2019, November 7). Retrieved from

Takeshi Sugawara, B. C. (2019, November 4). Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems*. Retrieved from

Information Warfare: Closing Thoughts

This will be my last blog dealing with information warfare, though I may eventually come back and write more information warfare related blogs. I wanted to close with a view on information warfare and malware attacks like Stuxnet. Stuxnet was a malicious program made in secret by the United States. Its main goal was to breach the security of Iranian uranium enrichment plants in an effort to disrupt the normal operation of their centrifuges. That sounds great for preventing other countries from obtaining nuclear weapons, but the way I see it, using malware to attack other countries' facilities is a very dangerous game. Stuxnet was never supposed to become public, and for good reason. The people who made Stuxnet were very angry that it spread and was exposed, and this kind of carelessness when building malware to attack an adversary's facilities should not be taken lightly.

First of all, the thing I have feared most about information warfare since I heard of Stuxnet a couple of years back is one country attacking another country's nuclear power plants. It would be catastrophic if a country were able to cause a nuclear meltdown with malware. We know from the experience of Chernobyl that a nuclear meltdown is very expensive to fix and can cause many civilian deaths. That is my main concern about information warfare. Although information warfare will likely cause fewer deaths because it works through non-physical means, if the right malware were planted in the right facility, the effects could be even more catastrophic than those of physical warfare. I hope that the United States and its allies work together to rid the world of such malware attacks used for information warfare.

Stuxnet was a marvel of malware and may be one of the most important information warfare lessons any nation could learn from. While it completed its objective and disrupted the Iranian uranium enrichment plants, it also got out into the public. Even though Stuxnet's escape did not cause any harm, the lesson it teaches is tenfold important to nations everywhere: if a more lethal Stuxnet were made, one that could attack a very volatile facility and cause many deaths, then even if it completed its objective, there is a chance it could backfire on the country that created it and cause a great deal of damage.

source :

Information Warfare: Cyber Command

Cyber Command was created in 2009 as a department for battling cyber crime in the United States. Since cyber crime between allies and adversaries is increasing rapidly, the demand for an organization like Cyber Command has also increased rapidly. If you think of all the cyber attacks on different countries, you start to wonder what exactly is being done about them. President Trump wants to elevate Cyber Command to a Unified Combatant Command, a view consistent with that of Defense Secretary Mattis. They both agree that cyber crime is becoming a more and more important situation to deal with as technology advances rapidly.

The Cyber Command division will probably consist of over 6,000 workers divided into approximately 133 teams. From those numbers, you can see how many different kinds of cyber threats the United States has to deal with these days. I think Donald Trump is definitely on the right track beefing up the cyber crime departments, because this is the future, and someday all that may be left is to reign supreme in the information warfare sector. Countries broadly agree that losing lives in physical warfare is unacceptable and are working to eliminate deaths by physical means. That leaves the question of who will dominate the information warfare sector as warfare settles down to a subtle game of chess.

New decisions are being made every day to elevate the position of cyber warfare divisions because of the ever-evolving threats to information technology. The president of the United States has a huge responsibility in preserving the security of America's information infrastructure day by day. Personally, I think Cyber Command should be a very well funded organization in the United States, as it is partly responsible for national security against adversarial threats. It would not surprise me if the number of Cyber Command workers grows exponentially, since that seems to be the rate of technological evolution, and we need at least a team of workers for every single technological advance that is made.

source :