Garage Door Opener

While researching security vulnerabilities in IoT devices, specifically those affecting home security, I decided to look into garage door openers.  I came across an article describing a researcher who has developed a way to crack the codes of ordinary garage door openers.  This kind of hack could be disastrous for a homeowner because it lets a malicious user gain access to any house that has an entrance through the garage, and it is not limited to a single model: many garage door openers are susceptible.  I personally would not want anyone to be able to open my garage door and get into my house, and with the knowledge of this research I am going to check the brand and model of my own opener to make sure it is not vulnerable.  Many people own openers that are susceptible to this hack, and they may need to upgrade to avoid becoming victims.  There are actually two kinds of codes that can be cracked by this researcher's methods; the first was cracked earlier than the other because the latter is more difficult to attack.  These two schemes are called "fixed code" and "rolling code," and many garage door openers use one of them to control access to your garage and possibly your house.  In this document I will go over which garage door openers are susceptible to this attack, how the attack is implemented, and what a user can do to avoid becoming a victim.

Many brands of garage door openers are susceptible to this hack, including Nortek, NorthShore, Chamberlain, Liftmaster, Stanley, Delta-3, and Moore-O-Matic (Kamkar).  The attack was designed by Samy Kamkar, who initially stated that only garage doors with fixed code entry systems are vulnerable, although he later developed a hack for rolling code systems as well, which he presented at DEF CON 23.  The difference between the two is that a fixed code system transmits the same short code every time to open the door, while a rolling code system changes the code after each use.  Other systems use hopping codes, Security+, or Intellicode; Kamkar says these doors may be safer from this kind of hack, but they are not foolproof (Kamkar).  Wired requested comment from companies like Nortek and Genie; they did not respond at first, and Genie later stated on its website that its openers use rolling codes (Greenberg, 2015).  The owners of the Liftmaster brand stated that they have not used fixed code systems since 1992, but Kamkar examined a manual for a 2007 model and found that it uses a fixed code system (Greenberg, 2015).

         Samy Kamkar has posted the code that he used to crack both the fixed code and rolling code systems, but he intentionally sabotaged the published version so that malicious users would have a hard time getting it to work.  To crack the codes, Kamkar uses brute force to find the right code for each door.  The doors he tested use at most 12 bits, which is 4,096 possible combinations (Greenberg, 2015).  A straightforward brute force attempt would take about 29 minutes, but Kamkar improved it by removing the wait periods between attempts, eliminating redundant transmissions, and adding an optimization that transmits overlapping codes in a single stream (Greenberg, 2015).  With all of these optimizations, he reduced the brute force time from 29 minutes down to 8 seconds (Kamkar).  Kamkar says you need to be an expert in RF signals and microcontrollers to fix the code he has posted on GitHub and use it.  Brute force usually takes a very long time because the algorithm tries every possible combination; getting that down from 29 minutes to 8 seconds is quite a feat.
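The "overlapping codes" optimization is the big win here, and it is typically implemented with a De Bruijn sequence: one continuous bit stream in which every possible 12-bit code appears exactly once as a sliding window, so the receiver sees every code without each one being sent in full.  The source does not spell out Kamkar's exact construction, so the following is only a sketch of the idea:

```python
def de_bruijn(k: int, n: int) -> str:
    """Binary De Bruijn sequence B(k, n): every n-symbol string over an
    alphabet of size k appears exactly once as a (wraparound) substring.
    Standard FKM (Lyndon word) construction."""
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(map(str, sequence))

# Naive brute force: transmit each 12-bit code in full.
naive_bits = 4096 * 12

# De Bruijn stream: 4096 bits plus 11 wraparound bits, in which every
# 12-bit code appears as a sliding window.
seq = de_bruijn(2, 12)
stream = seq + seq[:11]
overlap_bits = len(stream)

print(naive_bits, overlap_bits)  # 49152 vs 4107 -> ~12x fewer bits on the air
```

Dropping inter-code delays and retransmissions accounts for the rest of the speedup; the stream itself is roughly 12 times shorter than sending every code separately.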

         Preventing this kind of attack would require the user to completely replace their garage door opener.  Since Samy Kamkar's discovery, garage door opener manufacturers should have made design decisions that build stronger security into their products.  This is not simply an example of poor security design by garage door companies, because these openers were on the market for a very long time before the ability to hack them became practical.  It is an example of how fast technology is evolving and how the need for stronger security grows along with it.  The vulnerable openers that were tested used short fixed codes; perhaps manufacturers will adopt longer codes and proper cryptographic schemes instead.

         To conclude, garage door openers are just one of the many IoT devices in a person's home today, and the number of such devices is increasing day by day as more people turn to the convenience they offer.  Security vulnerabilities are common in home IoT devices, and companies need to ramp up their security testing.  Companies that allow these vulnerabilities to ship in their devices risk losing business, and in the worst case the whole company could fail as a result.  If a company is going to build an IoT device for use in a person's home, I think there should be security testing guidelines and a certification for the product, so the end user knows exactly what to expect from a security standpoint.

Works Cited

Greenberg, A. (2015, June 4). This Hacked Kids’ Toy Opens Garage Doors in Seconds. Retrieved from

Kamkar, S. (n.d.). Open Sesame. Retrieved from


Smart Stove Hack That Lets Hackers Turn On Your Stove

While researching security vulnerabilities in IoT devices with a focus on home security, smart ovens came to mind.  I thought, what could be worse than a hacker getting into a person's smart oven and burning their house to the ground?  It turns out that exactly this kind of hack exists in a smart oven.  Security in IoT devices is an important aspect of the safety of a user's home, and I could hardly believe there is a hack that lets a malicious user turn on an oven remotely.  Say a user left some dish towels or other flammable items on their range and went off to work, only to come home to a house burnt to a crisp because a hacker turned on their stove while they were gone.  As I research security vulnerabilities in IoT devices, it is becoming more and more clear that the companies producing these smart home gadgets are seriously lacking in security testing.  In this document I will go over which oven has this security vulnerability, how the hack is executed, and what a user can do to keep it from burning down their house.

         The smart oven with this security vulnerability is the AGA Range Cooker, and the vulnerability was discovered by Pen Test Partners.  These range cookers are very expensive, so you would think the company would have done extensive security testing on its appliances, but that does not seem to be the case.  Pen Test Partners say they first tried to disclose the vulnerability to AGA through Twitter, and AGA blocked them; they finally got through via AGA's technical support (Laidlaw, 2017).  It was important for Pen Test Partners to contact AGA before disclosing the vulnerability so that something could be done about it before the information got into the wrong hands; a flaw that lets a malicious user burn your house down could really tarnish a company's name.  Even so, AGA seems very reluctant to fix it.  The oven draws a maximum of 30 amps, which is more than enough to start a fire, and owners of the latest model with the remote control option should know that they could become victims of this kind of attack (Leyden, 2017).

         The oven is controlled by SMS messages sent from its companion mobile application, and these messages are not authenticated.  The oven contains a SIM card that costs the user around 5 dollars a month.  A user can send a command to turn on all of the burners at once, and because the SMS commands are unauthenticated, a malicious user can mount an enumeration attack.  Enumeration is a process of establishing an active connection to a target machine to discover potential attacks (Chakravartula, 2018).  Once the malicious user slowly but effectively enumerates phone numbers to find the oven's number, they can simply send it an SMS command, which would look something like "WebtextPass,35257,Baking Oven On" (Leyden, 2017).  The enumeration attack could take a while to execute because it is essentially a brute force search for the oven's phone number.  Security testers have criticized AGA, saying that a Wi-Fi interface would have been cheaper and safer than giving every device a SIM card and a phone number.  It amazes me that AGA designed a device around a SIM card and a phone number but completely neglected security testing.  I am not even sure a patch is possible for a device controlled this way, which may be why AGA is reluctant to provide a fix.
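To make the enumeration idea concrete, here is a hypothetical sketch.  The `send_sms` function, the number prefix, and the range are invented placeholders for illustration; only the command string itself comes from Leyden (2017):

```python
def send_sms(number: str, body: str) -> None:
    """Placeholder for a real SMS gateway; here it only logs the attempt."""
    print(f"-> {number}: {body}")

# Command format reported by Leyden (2017); the oven obeys any sender.
COMMAND = "WebtextPass,35257,Baking Oven On"

def enumerate_numbers(prefix: str, start: int, count: int) -> list[str]:
    """Walk a block of candidate numbers, firing the same unauthenticated
    command at each one; any number belonging to an oven's SIM will obey."""
    tried = []
    for n in range(start, start + count):
        candidate = f"{prefix}{n:07d}"
        send_sms(candidate, COMMAND)
        tried.append(candidate)
    return tried

targets = enumerate_numbers("+44750", 1200000, 3)
```

Because there is no sender verification, the cost of the attack is only the time it takes to sweep the number space, which is why the researchers compare it to brute force.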

         With this vulnerability disclosed to the public, a consumer should be very cautious when buying a smart appliance from AGA; the company probably lost a considerable amount of business over this problem, and it is their own fault.  I am not sure there is much an owner of one of these ovens can do to prevent the attack short of replacing the appliance with one from a different manufacturer.  An owner could remove the SIM card and operate the stove the old-fashioned way, or cancel the remote access plan they are paying for monthly anyway.  If AGA does not come up with a patch, the owner has little choice but to disable the remote control option, or risk having their house burnt down by a malicious arsonist with too much time on their hands.

         To conclude, security testing should be a major part of every IoT device maker's software engineering process, especially for companies whose devices operate inside a user's home.  This smart oven is just another of the many mistakes made by companies producing smart devices.  With the outrageous price tag AGA puts on its smart ovens, I am sure consumers would appreciate some sort of security certification to go along with the device, so they can have some assurance that the product went through a substantial amount of security vulnerability testing.

Works Cited

Chakravartula, R. (2018, February 28). What is Enumeration? Retrieved from


Leyden, J. (2017, April 13). Half-baked security: Hackers can hijack your smart Aga oven 'with a text message'. Retrieved from


Smart Refrigerator Security Vulnerability

These days we are in the middle of the IoT revolution.  Everyone is flocking to electronics stores to purchase smart appliances for the convenience they bring to everyday life, and security researchers are constantly finding vulnerabilities in these new devices as they reach the shelves.  It seems that for every IoT device released, a corresponding security threat is discovered.  It turns out that even a smart refrigerator can be vulnerable to malicious people trying to obtain a user's personal information.  While researching smart refrigerator vulnerabilities I came across a hack that lets malicious users obtain a user's Google login credentials, and I thought it was definitely noteworthy.  In this document I will go over who discovered this smart refrigerator vulnerability, how it is exploited, and what a user can do to avoid becoming a victim.

         This hack, which exposes a user's Google login credentials through the Samsung smart refrigerator, was discovered by security researchers at a company named Pen Test Partners during an IoT hacking challenge at the DEF CON security conference (Neagle, 2015).  The researchers probed the refrigerator along several different routes, including firmware attacks, tearing down the mobile app, and examining its TCP services (Venda, 2015).  The vulnerability they found was in the refrigerator's SSL implementation, which failed to validate SSL certificates.  Because of that, a man-in-the-middle attack becomes possible: the refrigerator runs a Google Calendar application that lets a user post calendar events and notes on the door, and an attacker who intercepts that traffic can capture the user's Google credentials.  Having a Google calendar on the door of your refrigerator sounds like a great idea and could be very convenient for organizing a family's tasks and meetings; unfortunately, the hack discovered by Pen Test Partners makes it a prime target for the user's personal information.

         This smart refrigerator hack is basically a man-in-the-middle attack, in which a malicious user intercepts the packets exchanged between a device and a server.  Since the SSL implementation in the Samsung smart refrigerator does not validate SSL certificates, anyone on the network path can impersonate the server and read the information being exchanged, for example with a packet sniffer like Wireshark, which can capture traffic on a network, particularly unencrypted traffic (Nohe, 2018).  The flaw likely comes down to a lack of security testing: the developers of the refrigerator's smart features simply did not implement SSL correctly.  It seems this could be fixed easily with a software update, and Samsung has reported that it is looking into the vulnerability (Neagle, 2015).  Although having your Google credentials exposed could be a very serious matter, the attacker has to have access to the same network as the refrigerator to execute this attack.
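The root cause, skipping certificate validation, is easy to illustrate with Python's standard `ssl` module.  The refrigerator's actual code is not public, so this is just a sketch of the difference between a validating client and the kind of non-validating client the researchers described:

```python
import ssl

# What the fridge should have done: the default client context
# verifies the server's certificate chain and hostname.
safe_ctx = ssl.create_default_context()
print(safe_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(safe_ctx.check_hostname)                    # True

# The reported flaw, reproduced deliberately: with validation off,
# any certificate is accepted -- including one forged on the fly by a
# man-in-the-middle proxy sitting between the fridge and Google.
broken_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
broken_ctx.check_hostname = False   # must be disabled before CERT_NONE
broken_ctx.verify_mode = ssl.CERT_NONE
print(broken_ctx.verify_mode == ssl.CERT_NONE)    # True
```

With `CERT_NONE`, the TLS session still encrypts traffic, but it encrypts it to whoever answered, which is exactly what a man-in-the-middle needs.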

         Personally, I would love to have a refrigerator with this kind of functionality; the convenience of having my Google calendar, notes, and to-do lists on the refrigerator door could be very beneficial.  The first step in protecting yourself against this kind of hack is to be very aware of who has access to the network your refrigerator runs on.  I am sure that once Samsung was notified of this vulnerability, it made updates to the refrigerator's system software.  Always keep your IoT devices up to date with the latest software, because that is how many security vulnerabilities are combatted.  It is a shame that this vulnerability was present at all: a user's Google credentials should always be kept confidential, and the possibility of a man-in-the-middle attack on a smart refrigerator should be addressed immediately.

         Although the man-in-the-middle attack on this smart refrigerator may not seem like a severe security threat, it is nonetheless a substantial vulnerability.  No one wants their personal information exposed to malicious users, and this hack gave attackers yet another way to deceive ordinary users of IoT devices.  As I do more research on IoT devices and their vulnerabilities, it seems clear that companies' software engineering practices need to include more security testing.  Samsung is a very big corporation with many customers, and I am sure it already does plenty of security testing, but this is evidence that even the larger companies need to ramp up their security practices.

Works Cited

Neagle, C. (2015, August 26). Smart refrigerator hack exposes Gmail login credentials. Retrieved from

Nohe, P. (2018, November 29). Executing a Man-in-the-Middle Attack in just 15 Minutes. Retrieved from

Venda, P. (2015, August 18). Hacking DefCon 23’s IoT Village Samsung fridge. Retrieved from


Smart Lightbulb Vulnerabilities

In the past year, people have spent about eight billion dollars on smart lightbulbs to conveniently illuminate their homes, and that spending is estimated to jump to about 28 billion dollars as more and more people turn to smart devices for the convenience they provide (Min, 2019).  Smart lightbulbs seem simple as far as functionality goes: you can turn a light on, turn it off, and with today's bulbs even have them change brightness or color along with music or video playing in the house.  But little do consumers know that there are flaws in the security layer of their smart lightbulbs.  With some bulbs, specifically the ones that change brightness and color along with multimedia, hackers can infer the actual media being played, whether audio or video.  With some bulbs that have an infrared function, researchers have shown that a covert data exfiltration attack is possible (Maiti, 2019).  In this document I will first go into detail about the video and audio inference threat, then the covert data exfiltration threat, and conclude with preventive measures a user can take to avoid falling victim to them.

         The video and audio inference threat lets a malicious user determine what song or video a user is playing by observing the lightbulb's brightness changes as it follows the media.  This is a big problem because the US Video Privacy Protection Act exists precisely to protect a user's media consumption information, since it can reveal personal interests and preferences (Maiti, 2019).  While this threat is difficult to set up and exploit, it is still possible.  The bulbs examined change brightness and hue according to the media playing in conjunction with them, and it turns out that the audio waveform and the bulb's brightness fluctuations have similar profiles (Maiti, 2019).  With a library of songs to compare the light fluctuations against, a malicious user can infer which media is playing.  To achieve this, the attacker needs a luminance meter and a reference library of media; when the bulb's hue option is used, an RGB sensor is needed instead.  The researchers tested audio in intervals from 15 to 120 seconds and, as you would expect, the longer the observation, the more accurately the media could be inferred; the same holds for video, with intervals from 60 to 360 seconds (Maiti, 2019).  Because there is a law protecting users' privacy when it comes to media consumption, I think this threat is potentially very dangerous.
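The matching step can be sketched as a simple correlation search: record the bulb's brightness over time, then ask which profile in a reference library correlates best with the recording.  The profiles and readings below are made-up toy data, not the researchers' dataset:

```python
from math import sqrt

def pearson(a: list[float], b: list[float]) -> float:
    """Pearson correlation between two equal-length brightness traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Toy "library" of per-song brightness profiles (hypothetical data).
library = {
    "song_a": [0.2, 0.8, 0.4, 0.9, 0.1, 0.7],
    "song_b": [0.5, 0.5, 0.6, 0.4, 0.5, 0.6],
}

# Luminance-meter observation of the bulb: a noisy copy of song_a.
observed = [0.25, 0.75, 0.45, 0.85, 0.15, 0.65]

best = max(library, key=lambda name: pearson(observed, library[name]))
print(best)  # song_a
```

Longer observation windows give the correlation more structure to latch onto, which matches the researchers' finding that accuracy improves with observation time.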

         The covert data exfiltration threat is present in smart lightbulbs because, in theory, any light can transmit data.  The research says this threat applies to smart lights that either have no hub connecting them or have a hub without permission controls.  Using this threat, a malicious user can smuggle data out of an unsuspecting user's private network by encoding it in the bulb's infrared emissions.  The researchers tested exfiltrating strings and images this way, recovering binary data from the different power levels of the bulb's infrared light.

Text reconstruction at 15 meters (Maiti, 2019):

  Original text:      A cup of sugar makes sweet fudge
  Reconstructed text: A buq pf!sugbr m`kessuees hudfe

As you can see, most of the message survives transmission, which makes this a very dangerous threat: given line of sight to the bulb, an attacker can recover sensitive information that originated inside a user's private network.
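The channel itself is essentially on-off keying: text becomes bits, bits become infrared power levels, and a receiver reverses the mapping.  The sketch below is a simplified model, not the paper's actual encoding; flipping a few bits simulates the reception errors that produce garbled near-miss text like the example above.

```python
def to_bits(text: str) -> list[int]:
    """Encode text as a bit stream: high IR power = 1, low = 0."""
    return [(ord(c) >> i) & 1 for c in text for i in range(7, -1, -1)]

def from_bits(bits: list[int]) -> str:
    """Receiver side: regroup the bit stream into 8-bit characters."""
    chars = []
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        chars.append(chr(byte))
    return "".join(chars)

message = "A cup of sugar makes sweet fudge"
sent = to_bits(message)

# A clean channel recovers the text exactly...
assert from_bits(sent) == message

# ...but flipping a few bits (noise over distance) yields the kind of
# near-miss reconstruction the researchers reported.
noisy = sent[:]
for i in (15, 60, 135):        # arbitrary corrupted bit positions
    noisy[i] ^= 1
print(from_bits(noisy))
```

Each flipped bit corrupts only one character, which is why the reconstructed text stays mostly readable even over a lossy 15-meter link.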

         These smart lightbulb threats are actually very difficult to exploit, but given the right tools and proximity, a malicious user may be able to use them to obtain personal information.  First, proximity: both attacks depend on observing the bulb, so be careful who you let into your private network or your home, and note that if an attacker is too far from your devices, the signal may be too degraded for malicious use by the time it reaches them.  Both of these threats can be carried out through a window, so curtains that do not let any light through could be a good preventive measure.  Finally, buy smart lightbulbs that connect to a hub with permission controls; the research says such bulbs are not susceptible to the covert exfiltration threat.

         Although smart lightbulbs are very convenient, freeing you from walking to a switch and providing extra features like lighting synchronized to music or video, they are prone to security vulnerabilities.  As I found in this research, even though lightbulbs have very simple electronic circuitry, they evidently provide multiple paths for a user's sensitive data to reach malicious hands.  I do not know exactly which software engineering practice would prevent these types of threats, but there has to be some kind of remedy.  Security researchers are finding security holes about as fast as new devices are released; you cannot even trust a lightbulb these days.

Works Cited

Maiti, A. (2019, September). Light Ears: Information Leakage via Smart Lights. Retrieved from

Min, S. (2019, October 24). Are “smart” light bulbs a security risk? Retrieved from


Smart Lock Vulnerability

While researching home security issues with IoT devices, I came across an article about a smart lock, used in many homes, with a major security vulnerability that gives hackers access to your home.  On a scale from one to ten, where one is a minimal security threat and ten is a major one, I would say a front door lock that is effectively useless is a ten.  Researching home security in IoT devices is interesting because more and more people are turning to smart devices to run their homes, and many of the new devices coming out have security flaws that make them a threat to the safety of the house.  In this document I will go over what this smart lock is, how hackers can bypass its locking mechanism, and what is being done to keep this vulnerability from letting the bad guys into your home.

         The smart lock with this vulnerability is made by the company KeyWe.  It is meant to secure a user's front door or main entry and can be locked or unlocked physically, through the companion application, or via NFC on an armband (Marciniak, 2019).  The lock encrypts the digital keys that it exchanges with the application, and there is even a guest key option that lets the user grant a guest access at the push of a button in the app.  All in all, this smart lock seems like a nice device that offers great convenience in managing the security of your home.  The problem is that a hacker can completely bypass all of its security measures and gain access to the user's house.

         A Finland-based security company named F-Secure discovered the vulnerability, which lets hackers and unauthorized users gain access to your house by sniffing the packets sent between the lock and the application.  The problem is not the encryption of the keys but the attacker's ability to obtain the key material before it is encrypted (Ng, 2019).  F-Secure Labs has a web page dedicated to this hack that shows a teardown of the device, names all of its components, and explains how to execute the attack, and it looks far too easy (Marciniak, 2019).  Using a tool named Frida, the researchers could intercept all of the messages along with information such as the name of the function being executed and the direction of the transmission, i.e., from lock to application or application to lock.  It turns out that to intercept the messages between the lock and its application, all you need is a piece of hardware with Bluetooth capability and the commonly used Wireshark application (Marciniak, 2019).  The hack is easy to execute with the appropriate equipment, which is relatively inexpensive and available to anyone.  The lock can be opened by anyone who really wants to get through the door it is attached to, so what is KeyWe doing about it?
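The core finding is that the lock's key could be computed from data visible in the plaintext Bluetooth traffic.  The real key schedule is not reproduced here; the sketch below uses an invented one (SHA-256 over a device ID and nonce) purely to show why a "secret" derived from sniffable fields defeats the encryption entirely:

```python
import hashlib

def derive_key(device_id: str, nonce: str) -> bytes:
    """Hypothetical key schedule: both inputs travel in cleartext over
    BLE, so the 'secret' is a pure function of sniffable data."""
    return hashlib.sha256((device_id + nonce).encode()).digest()[:16]

# Legitimate pairing: app and lock agree on a key from handshake fields.
device_id, nonce = "KEYWE-00:11:22:33:44:55", "a1b2c3d4"
app_key = derive_key(device_id, nonce)

# An eavesdropper with Wireshark sees the same two fields on the air
# and recomputes an identical key -- no cryptography is ever broken.
sniffed_key = derive_key(device_id, nonce)
print(app_key == sniffed_key)  # True
```

This is why the flaw cannot be patched with stronger encryption alone: as long as every input to the key derivation is observable, the derived key is observable too.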

         According to my research, the security engineers at F-Secure Labs disclosed this hack to KeyWe as soon as they found it.  KeyWe initially said the problem had been resolved, but the truth is that the flaw cannot be patched in the field: the company has since advised users to remove the vulnerable device and replace it with a newer smart lock, which it says is now up to date.  KeyWe says it takes the security of its devices very seriously and that customer security is its top priority (Ng, 2019).  Amazon has been notified of the flaw and declined to say whether it will continue selling the product on its site.  Of all the security vulnerabilities I have read about so far, this is a major one; there is no fix other than removing the device from your home.  KeyWe will almost certainly lose many customers because of this, and the researchers at F-Secure Labs say the hack was easy to figure out, which shows a major lack of security testing on KeyWe's part.

         Having a door lock that grants entry to anyone who holds a key, whether it was obtained legitimately or not, is a major failure in the world of cyber security.  Plenty of people bought this lock only to find out some time later that anyone can get through it, even burglars.  This shows that companies need to focus much more on the security of their devices, especially when those devices operate inside customers' homes.  Computer security has been growing as an industry lately precisely because of the types of flaws that security researchers are discovering every day.  There are so many security vulnerabilities in IOT devices, and that is one of the main reasons for the surge in computer security research.  KeyWe should be ashamed of its software development process, and especially of a testing department that let such an obvious vulnerability into its smart lock.  I personally will remember the name KeyWe, and I will definitely never purchase any of their products.

Works Cited

Marciniak, K. (2019, December 11). Digital lockpicking – stealing keys to the kingdom. Retrieved from

Ng, A. (2019, December 11). Smart lock has a security vulnerability that leaves homes open for attacks. Retrieved from


The Botnet Chamois in Mobile Devices

While researching home security vulnerabilities in IOT devices, I started to think about the different kinds of hacks and malicious capabilities that can threaten mobile and IOT devices.  A botnet could pose a major threat to home security through the many different devices throughout a house.  Botnets are capable of many different types of malicious attacks; from collecting sensitive information to mounting a denial of service attack, botnets are a major security threat that needs to be addressed.  I heard about a botnet named Chamois that has been around for a while and keeps getting updated and redistributed among mobile and IOT devices.  I decided to look into this specific botnet because I think it poses a major security risk in the area of mobile, IOT, and home security.  In this document I will go over what the Chamois botnet is, how it infects devices, and what is being done to make sure it cannot spread to mobile devices.

         Chamois was a botnet that, once on a device, was controlled by a remote command and control server.  It would serve malicious ads and direct users to premium SMS scams.  Chamois was a very resilient botnet that evaded detection so well and evolved so rapidly that it took Google years to finally eradicate it from Android devices (Rashid, 2019).  One way Chamois was distributed was through an advertising software development kit that developers believed was legitimate; to device manufacturers, Chamois appeared to be a mobile payments solution, so developers unknowingly placed the malicious botnet code onto users' devices (Rashid, 2019).  With the Chamois botnet intruding in users' homes, the unfortunate owners of infected devices were robbed of their money if they fell for the SMS scams.  Some scams were about making donations, and users did not know they had been scammed until they got their phone bills (Newman, 2019).  Botnets pose a major risk to home security because a botnet effectively breaks into your house through your mobile and IOT devices and attempts to steal your money.

         Once Chamois could be detected, it evolved from four stages to six, becoming able to avoid anti-virus and malicious code detection software (Rashid, 2019).  Many applications on the Google Play Store were infected with this botnet, and Google security engineers had a very hard time trying to get rid of it.  Every time the Google engineers put up some sort of barrier to detect and remove the botnet, its makers would figure out ways around it (Rashid, 2019).  Chamois was a very resilient botnet that infected about 21 million devices, and over the years Google has whittled that number down to around two million (Newman, 2019).  From what I have read about this botnet, it seems to me that it could still be sitting on devices today, just waiting for the chance to strike.  Since it was disguised as a software development kit, there could be many infected applications that have not even been found yet.  A botnet this powerful could even evolve to collect sensitive information about unsuspecting users.  This botnet evaded Google's best security engineers for years, which means the developers of Chamois could have evolved it in many different ways, even making the security engineers think they have defeated it as yet another way to evade detection and barriers.

         To avoid becoming a victim of a botnet like Chamois, people will really have to rely on security researchers to detect and remove it from mobile devices.  The type of scams this botnet runs, like premium SMS, can be avoided by simply never using SMS to transfer money or credentials.  Sensitive information should never be shared over insecure digital mediums, and premium SMS is as insecure a medium as any.  The articles I read say that Google has defeated this botnet, but I suspect it could still be out there.  Security researchers have whittled the number of infections down from about 21 million to 2 million, but that means 2 million devices are still infected, which gives the Chamois makers time to evolve and redistribute an even more dangerous version of the botnet with even more malicious capabilities.  I think this botnet is still a threat to mobile and home security all over the world, and I do not know if there is any way to tell whether it will ever be completely eradicated.

         To keep homes safe from these kinds of botnets, users will have to be knowledgeable about the types of malicious scams they run.  Education might be the only safe bet for keeping users from falling victim to these attacks.  If something seems fishy, a user should assume it is some type of scam.  If you click an ad and are redirected to a sketchy-looking site requesting sensitive information, you should close the site, or even turn off your device, and definitely delete the application that redirected you there.  Botnets may always pose a threat to unsuspecting users, and users need to be educated so they can avoid the situations a malicious attacker may engineer.

Works Cited

Newman, L. H. (2019, April 19). How Android Fought an Epic Botnet—and Won. Retrieved from

Rashid, F. Y. (2019, April 9). Chamois: The Big Botnet You Didn't Hear About. Retrieved from


Eavesdropping and Phishing Smart Assistants

Amazon Alexa and Google Home are the most used personal assistants in the world right now.  Their use is increasing rapidly, and research on security vulnerabilities in these devices is turning up some interesting hacks.  While researching vulnerabilities in smart home assistants, I came across an article about hackers using Google Home and Amazon Alexa to eavesdrop on unsuspecting users, and even to perform phishing, using the same hack.  The hack is a form of third-party software that embeds malicious code into the home assistants.  In this document I will go over exactly how malicious developers use this hack to eavesdrop on unsuspecting users, what Amazon and Google are doing to prevent this type of malicious behavior in their devices, and some preventive measures you can take to make sure you do not fall victim to malicious third-party software for your smart home assistant.

         Google and Amazon let developers make their own third-party "actions" or "skills" for their smart assistants.  For instance, a developer could make a calculator action or skill where the user can ask the assistant to add two plus three.  There is a way for developers to design these skills or actions so that the assistant will keep listening even after the action or skill has completed its task.  The security researchers made skills and actions that simulate silence by inserting the character sequence "�. " (U+D801, dot, space), which allowed their actions or skills to keep listening to conversations in the background while the user thinks the assistant has finished (Ng, 2019).  Both assistants have an option to share your conversations with the vendor to improve recognition of commands or phrases a user might say.  Combined with the eavesdropping hack above, where third-party skills or actions keep listening in the background, the third-party developer who injected the malicious code can collect conversations while the user does not even know the device is recording.
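To make the trick concrete, here is a rough Python sketch of how a skill's response text could be padded with that unpronounceable sequence.  The function name, the padding amounts, and the wording are my own invention for illustration; this is not the researchers' actual code, just the idea of a response that sounds finished while the session stays open:

```python
# U+D801 is an unpaired surrogate code point the assistant cannot
# pronounce, so it is "spoken" as silence while the skill session
# stays alive and the microphone keeps listening.
SILENCE_UNIT = "\ud801. "          # U+D801, dot, space

def fake_goodbye(seconds_of_silence=30, units_per_second=4):
    """Build a response that sounds like the skill has exited but is
    padded with simulated silence (unit count is a guess, purely
    illustrative)."""
    return "Goodbye. " + SILENCE_UNIT * (seconds_of_silence * units_per_second)

payload = fake_goodbye()
# payload begins with an audible sign-off, followed by 120 silent units
```

The user hears only "Goodbye." and assumes the interaction is over, while the assistant's speech engine is still "reading" the silent tail.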

         With this eavesdropping hack, the researchers even worked out a way to phish for passwords.  They designed their skills or actions to tell the user something like "An important security update is available for your device. Please say 'start update' followed by your password." (Ng, 2019).  Unsuspecting users who have a little too much trust in their assistants might fall for this kind of phishing attack, although Google and Amazon try to make it clear that you should never need to give your assistant your password.  Notably, this advice conflicts with one of the proposed remedies for the laser hack discussed elsewhere in this document, which is to require a spoken password before the assistant will process sensitive commands like purchases or unlocking doors.

         Google and Amazon both have a vetting process for developers who make applications for their smart assistants, and they say that after reviewing the researchers' evidence they found and removed the malicious applications of concern.  However, it seems the companies do not vet updates to already approved applications, which allows a developer to submit a simple application that abides by the standards and then, once it is approved, push an update injecting the malicious code, bypassing the original vetting process (Porter, 2019).  In other words, even though the smart assistant makers say their vetting process keeps malicious skills or actions off their assistants, the security researchers built malicious apps that actually worked, and it took time for Google and Amazon to remove them, and only after being informed about the malicious behavior.  The researchers were from SRLabs, who figured out these eavesdropping and phishing vulnerabilities and disclosed everything to Google and Amazon before making the information public (Porter, 2019).

         One way to prevent this kind of malicious behavior on your smart assistant is to not install third-party applications at all.  That may be a little excessive, but there are potentially many malicious applications out there that pose a risk to your assistant.  Google and Amazon have settings that let you see what data your assistant has used and enable or disable specific actions or skills.  Users should keep track of which actions or skills their smart assistants are using, and if your assistant asks or prompts you for any sensitive information, you should definitely not disclose it.  There are many vulnerabilities in today's smart assistants, and they will have to be resolved by the makers of the devices.  Although these hacks do not seem to have been used by anyone other than the security researchers at SRLabs, consumers should always be careful about the information they disclose to any type of electronic medium.  More and more people are using smart assistants because of the convenience they provide for certain tasks, and they need to be careful.

Works Cited

Ng, A. (2019, October 19). Alexa and Google Assistant fall victim to eavesdropping apps. Retrieved from

Porter, J. (2019, October 21). Security researchers expose new Alexa and Google Home vulnerability. Retrieved from


Laser Hacking Smart Assistants

In the news lately I have seen some articles suggested to me by Google on the topic of lasers being able to hack into IOT devices like Google Home, Amazon Alexa, the iPad, and pretty much anything with a microphone.  I decided to look into this topic because I think the security of IOT and mobile devices is a very important topic in computer security.  According to the articles I have read, it has been verified that lasers can send silent voice commands to devices with microphones.  Some devices are more susceptible than others in terms of the range from which a laser can actually work.  In this document I will go into some detail about how a laser can send these silent voice commands, some statistics on the laser's effect on different devices, and some possible remedies for the hack.

         All of the affected devices use a type of microphone called a MEMS (micro-electro-mechanical systems) microphone.  A gap was found between the physics and the specifications of this type of microphone that allows light to be recognized as sound: by modulating the amplitude of the laser light, sound can be injected into the microphone (Sugawara et al., 2019).  At first I wondered how a laser beam consisting of light could possibly inject voice commands into a device with a microphone.  Evidently, when the laser is aimed at a microphone with its intensity varied at a precise frequency, the light perturbs the microphone's membrane at that same frequency, producing the same electrical signal through the microphone that a real voice would, which is then received and interpreted by the device.  This was tested on many devices with microphones, and every one of them was susceptible to the laser.  The discovery that a laser can manipulate a microphone's membrane to produce electrical signals was made by a cyber security researcher named Takeshi Sugawara.  He brought the discovery to the attention of a professor at the University of Michigan, and they have been experimenting with it since (Greenberg, 2019).
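To make the modulation idea concrete, here is a small Python sketch, my own illustration rather than the researchers' code, of amplitude-modulating an audio waveform onto a laser's intensity: the laser idles at a constant brightness, and the audio signal sways that brightness up and down at audio frequency, which is what drives the MEMS membrane:

```python
import math

def amplitude_modulate(audio, bias=0.5, depth=0.4):
    """Map audio samples onto laser intensity.

    audio -- samples normalized to [-1.0, 1.0]
    bias  -- the constant (DC) intensity the laser idles at
    depth -- how strongly the audio sways that intensity
    Intensity is clamped to [0, 1] because a laser cannot emit
    negative light.
    """
    return [min(1.0, max(0.0, bias + depth * s)) for s in audio]

# A 1 kHz test tone sampled at 16 kHz -- the frequency at which the
# microphone's membrane would be driven.
rate, freq = 16000, 1000
tone = [math.sin(2 * math.pi * freq * n / rate) for n in range(rate // 100)]
intensity = amplitude_modulate(tone)
```

The resulting intensity samples oscillate between 0.1 and 0.9 around the 0.5 idle level; the membrane follows this oscillation and the device "hears" a 1 kHz tone that was never an actual sound.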

         Some of the devices the researchers tested the hack on were the Amazon Echo, Apple HomePod, iPhone XR, Google Pixel 2, Samsung Galaxy S9, and Facebook Portal Mini (Iyer, 2019).  Some devices, like those running Siri and other AI assistants, were susceptible from up to 360 feet away, and the devices are susceptible even through windows.  Mobile phones were much more difficult to hack with the lasers, but it was still possible, with a range of about 33 feet for the iPhone and around 16 feet for Android phones.  All of these tests used a 60-milliwatt laser.  The researchers also tested the devices with a 5-milliwatt laser, the equivalent of a cheap laser pointer that anyone can buy.  From 361 feet away with the 5-milliwatt laser, most of the tests failed, except against the Google Home and a first-generation Echo Plus (Greenberg, 2019).

         As for the problems this newfound hack may cause, I do not think it is something people should be panicking over, even though the attack is very stealthy, since the lasers are silent while they produce physical voice commands.  Google, Apple, and some other device manufacturers say they are looking closely into the research.  One day the problem could be fixed by requiring a command to be picked up by two microphones, so that a laser aimed at a single microphone cannot trigger it.  Another fix could be a password that only the users of the device know; with that option, sensitive commands like purchasing items would only be executed when the password is given.  Other remedies, like placing your assistant away from the window, have been suggested, since the laser hack can be done through a window, potentially letting the hacker unlock your door or garage.  I guess as long as the microphone of your assistant is not visible from a window, it should be fine.

         It seems like a lot of work to actually set up and execute a laser hack on any device, and I do not think many people will use this hack just because of the complexity of setting it up.  Turning a voice command into a light signal seems very complicated to do.  Luckily, the hack was discovered by professional cyber security researchers, and they are working out all of the details so that it cannot be used maliciously.  They disclosed all of their research so Google, Apple, and other major manufacturers of the latest IOT devices can work on preventing these security vulnerabilities.

         To conclude, I would like to mention that I think this is a very sophisticated hack.  It is amazing that all of the IOT device designers and engineers overlooked this vulnerability entirely.  IOT device makers will have to really rethink their designs and apply preventive measures for this security vulnerability.  It is not just one company making devices susceptible to this laser hack; it is all of them.  Be it through teamwork or whatever measures are necessary, these companies need to put their heads together and really work out the problem at hand.

Works Cited

Greenberg, A. (2019, November 4). Retrieved from

Iyer, K. (2019, November 7). Retrieved from

Sugawara, T., Cyr, B., Rampazzi, S., Genkin, D., & Fu, K. (2019, November 4). Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems. Retrieved from


Agents and Environments in AI

Agents and environments play a big part in Artificial Intelligence, and in this post I am going to lay out the basics of what agents and environments are made up of.

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. You can think of an agent as a robotic player in a chess game. Its sensors give it the ability to see the other player's moves in the game. The environment is the game of chess: the board, the other player, and all of the pieces. The actuators of the chess agent could be a robotic arm, or in software, the ability to make moves. There are many examples of agents and environments in artificial intelligence in the world today; for example, in a self-driving car, the car is the agent and the world is the environment.
A rational agent can be seen as an agent that tries its best to make the right decision.
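The perceive-decide-act loop described above can be sketched in a few lines of Python. This is a minimal illustrative skeleton (the class and method names are my own), using the classic two-square vacuum world as the environment:

```python
class Environment:
    """Toy two-square vacuum world: two locations, each possibly dirty."""
    def __init__(self):
        self.location = "A"
        self.dirty = {"A": True, "B": True}

    def percept(self):                 # what the agent's sensors report
        return (self.location, self.dirty[self.location])

    def execute(self, action):         # what the agent's actuators do
        if action == "Suck":
            self.dirty[self.location] = False
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

class Agent:
    """Minimal agent: perceive through sensors, act through actuators."""
    def __init__(self, program):
        self.program = program         # maps a percept to an action

    def step(self, environment):
        percept = environment.percept()
        action = self.program(percept)
        environment.execute(action)

def vacuum_program(percept):
    location, dirty = percept
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

env = Environment()
agent = Agent(vacuum_program)
for _ in range(4):
    agent.step(env)
# after four steps both squares are clean
```

The `program` function plays the role of the agent's decision making, while `percept()` and `execute()` stand in for the sensors and actuators.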

The definition of a rational agent is:

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and the agent's built-in knowledge.

The performance measure is an objective criterion for success of an agent's behavior.

The performance measure embodies the criteria for success and is generally defined in terms of the desired effect on the environment (not on the actions of the agent).

When specifying a task environment we use what is called PEAS.  The task environment must be defined before a rational agent can be designed.

PEAS: Performance measure, Environment, Actuators, Sensors

Performance Measure: a function the agent is maximizing (or minimizing)

Environment: a formal representation for world states

Actuators: actions that change the state according to a transition model

Sensors: observations that allow the agent to infer the world state
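A PEAS specification is really just four lists. As a sketch, here is the standard textbook example of a PEAS description for an automated taxi, written as a plain Python dataclass (the field names mirror the acronym; the code is only an illustration):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A task-environment specification: Performance measure,
    Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# The classic automated-taxi example.
taxi = PEAS(
    performance=["safe", "legal", "fast", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
)
```

Writing the four lists out like this is exactly the exercise the PEAS framework asks for before any agent design begins.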

When thinking about the environment there are many different types of environments.  It is good to know what type of environment your agent will be interacting with, since the type tells you how difficult defining your agent will be altogether.

Environment Types:

Fully observable vs. Partially observable

Do the agent's sensors give it access to the complete state of the environment?  For any given world state, are the values of all the variables known to the agent?

Deterministic vs. Stochastic

Is the next state of the environment completely determined by the current state and the agent's action?  Strategic: the environment is deterministic except for the actions of other agents.

Episodic vs. Sequential

Is the agent's experience divided into unconnected single decisions/actions, or is it a coherent sequence of observations and actions in which the world evolves according to the transition model?

Static vs. Dynamic

Is the world changing while the agent is thinking?

Semi-dynamic: the environment does not change with the passage of time, but the agent's performance score does.

Discrete vs. Continuous

Does the environment provide a fixed number of distinct percepts, actions, and environment states?

Are the values of the state variables discrete or continuous?

Time can also evolve in a discrete or continuous fashion

Single Agent vs. Multi Agent

Is an agent operating by itself in the environment?

Known vs. Unknown

Are the rules of the environment (transition model and rewards associated with states) known to the agent?

With the environment types laid out, an environment can be easy or hard:

Easy: Fully Observable, Deterministic, Episodic, Static, Discrete, Single Agent

Hard: Partially Observable, Stochastic, Sequential, Dynamic, Continuous, Multi-Agent

The environment type largely determines the agent design.
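As a worked example, the two tasks mentioned in this post, the chess-playing robot and the self-driving car, can be placed along these six dimensions. This is just a plain Python data sketch of the standard textbook classification:

```python
# Classifying two example tasks along the six environment dimensions.
# "strategic" means deterministic except for the other agents' moves;
# "semi-dynamic" means the world is static but the score is not.
ENVIRONMENTS = {
    "chess with a clock": {
        "observable": "fully", "deterministic": "strategic",
        "episodic": False, "static": "semi-dynamic",
        "discrete": True, "single_agent": False,
    },
    "self-driving car": {
        "observable": "partially", "deterministic": "stochastic",
        "episodic": False, "static": "dynamic",
        "discrete": False, "single_agent": False,
    },
}
```

Reading down each column shows why the self-driving car lands squarely in the "hard" row above while chess is comparatively easy.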

The Structure of Agents:

There are four basic types of agents; here they are in order of increasing generality:

  1. Simple Reflex Agents
  2. Reflex Agents with State
  3. Goal-based Agents
  4. Utility-based Agents

Each kind of agent program combines particular components in particular ways to generate actions.

The Simple Reflex Agent handles the simplest kind of world.  This agent embodies a set of condition-action rules and basically works as "if perception, then action."  The agent simply takes in a percept, determines which action applies, and does that action.  The action depends on the current percept only, so this type of agent only works in a fully observable environment.
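A minimal sketch of such a condition-action rule agent in Python (the names are my own, reusing the vacuum-world percept of (location, dirty) from earlier):

```python
def simple_reflex_agent(rules):
    """Build an agent program from condition-action rules.
    Each condition tests ONLY the current percept -- the agent
    keeps no internal state at all."""
    def program(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return "NoOp"
    return program

# Vacuum-world rules: suck if dirty, otherwise head for the other square.
program = simple_reflex_agent([
    (lambda p: p[1], "Suck"),            # percept is (location, dirty)
    (lambda p: p[0] == "A", "Right"),
    (lambda p: p[0] == "B", "Left"),
])
```

Because nothing is remembered between percepts, this agent will shuttle between the two squares forever even after both are clean, which is exactly the limitation the stateful agents below address.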

A Model-Based Reflex Agent works as follows: when it gets a percept it updates its internal state, chooses a rule to apply, and then schedules the action associated with the chosen rule.
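A sketch of the same vacuum task with internal state (again my own illustrative code): the model remembers which squares have been seen clean, so unlike the simple reflex agent it can stop when there is nothing left to do:

```python
class ModelBasedReflexAgent:
    """Keeps an internal model of the world so it can act sensibly
    even on aspects it cannot currently perceive."""
    def __init__(self):
        self.model = {"A": "unknown", "B": "unknown", "at": "A"}

    def update_state(self, percept):
        location, dirty = percept
        self.model["at"] = location
        self.model[location] = "dirty" if dirty else "clean"

    def choose_action(self):
        here = self.model["at"]
        if self.model[here] == "dirty":
            return "Suck"
        other = "B" if here == "A" else "A"
        # The model lets the agent stop once both squares are known clean.
        if self.model[other] == "clean":
            return "NoOp"
        return "Right" if here == "A" else "Left"

    def step(self, percept):
        self.update_state(percept)     # 1. update the state
        return self.choose_action()    # 2. choose a rule / action
```

The `NoOp` branch is the payoff of keeping state: a plain reflex agent has no way to know the other square is already clean.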

A Goal-Based Agent is like a model-based agent, but it has goals: it considers the state it is in and then, depending on its goals, takes an action aimed at reaching them.

A Utility-Based Agent is like a goal-based agent, but it also evaluates how good each candidate action would be for achieving its goal.  In other words, how happy would the agent be in the state that would result if it took that action?
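The core of a utility-based agent is "predict the successor state of each action and pick the one with the highest utility." A minimal sketch in Python, with an invented toy one-dimensional state just for illustration:

```python
def utility_based_choice(state, actions, transition, utility):
    """Pick the action whose predicted successor state the agent
    would be 'happiest' in, i.e. the one with the highest utility."""
    return max(actions, key=lambda a: utility(transition(state, a)))

# Toy example: a point agent at position x wants to be near a goal at x = 10.
transition = lambda x, a: x + a          # actions shift the state
utility = lambda x: -abs(10 - x)         # closer to the goal is better
best = utility_based_choice(3, [-1, 0, +1], transition, utility)
# from x = 3, stepping +1 toward the goal has the highest utility
```

Swap `utility` for a simple goal test (reached or not) and the same skeleton becomes a goal-based agent; the utility function is what lets the agent rank states instead of just accepting or rejecting them.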

Finally, there are Learning Agents.  It says above that there are four agent types, but a learning agent is a special kind of agent.  One part of the learning agent is a performance element, such as a utility-based agent, and it is connected to a critic, a learning element, and a problem generator.  These three other parts make the learning agent able to tackle very hard problems.  The critic of a learning agent is just what it sounds like: it criticizes the agent's actions with some kind of score, so the agent knows the difference between good actions and bad actions.  The problem generator is used by the learning element to introduce a small measure of exploration, because if the agent always takes the actions the critic grades highest, it may miss a more optimal solution simply because it has never tried something that looked unpromising but was actually better.
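The interplay between the critic and the problem generator can be sketched as a simple explore-versus-exploit choice (my own illustration; real learning agents are far more elaborate):

```python
import random

def learning_agent_action(actions, critic_scores, exploration=0.1, rng=random):
    """The critic's scores say which actions have looked good so far;
    the problem generator occasionally picks a random action instead,
    so the agent can discover options the critic has undervalued."""
    if rng.random() < exploration:
        return rng.choice(actions)              # problem generator: explore
    return max(actions, key=critic_scores.get)  # exploit the critic's grades
```

With `exploration=0.1` the agent follows the critic 90% of the time and experiments the rest, which is exactly the "small measure of error" described above.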

I hope you liked this post.  I am going to continue doing more artificial intelligence posts when I get the time, as I am very busy.  I hope you learned a bit about agents and environments in AI, because making this post has helped me solidify some of this knowledge in my own mind.
