Over the past year, consumers have spent about eight billion dollars on smart lightbulbs to conveniently illuminate their homes, and that figure is estimated to jump to about 28 billion dollars over the next year as more and more people turn to smart devices around their houses for the convenience they provide (Min, 2019). Functionally, smart lightbulbs seem simple: you can turn a light on or off, and today's bulbs can even change brightness or color in time with music or video playing in the house, blending in with the multimedia. But few consumers realize that there are flaws in the security layer of their smart lightbulbs. Researchers have shown that the bulbs that change brightness and color along with multimedia can let a nearby attacker infer what audio or video is actually playing, and that bulbs with an infrared function can be used as a covert data exfiltration channel (Maiti, 2019). In this document I will first go into detail about the audio-video inference threat, then describe the covert data exfiltration threat, and conclude with a section on preventive measures users can take so they do not fall victim to these threats.
The audio and video inference threat lets a malicious user determine what song or video a victim is playing by exploiting the lightbulb's ability to change brightness along with the media. This is a serious problem: the US Video Privacy Protection Act exists precisely to keep a user's media consumption private, because that information can reveal personal interests and preferences (Maiti, 2019). While this threat is difficult to set up and exploit, it is still possible. The smart lightbulbs examined for this threat change brightness and hue according to the media playing in conjunction with them, and it turns out that the audio waveform and the bulb's brightness fluctuations produce similar graphs (Maiti, 2019). With that observation, a malicious user who has a library of songs to compare the light fluctuations against can infer which media is playing. To achieve this inference, the attacker needs a luminance meter and a reference library of media; when the bulb's hue option is used, an RGB sensor is needed to capture the color differences. The researchers tested audio in observation intervals from 15 to 120 seconds and, as you would expect, the accuracy of inferring the media grows the longer the observation. The same holds for video, with intervals from 60 to 360 seconds (Maiti, 2019). Because there is a law specifically protecting users' privacy when it comes to media consumption, I consider this threat very dangerous.
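The matching step described above can be sketched in code. The following is a minimal toy illustration, not the researchers' actual pipeline: it compares a (simulated) luminance trace against a small library of known audio envelopes using Pearson correlation and picks the best match. The signal values and song names are made up for demonstration.

```python
# Toy sketch of the inference idea: correlate an observed luminance
# trace against a library of known audio envelopes and pick the best
# match. All signals here are invented, not real sensor data.
import math

def normalize(sig):
    """Return a zero-mean, unit-variance copy of a signal."""
    n = len(sig)
    mean = sum(sig) / n
    var = sum((x - mean) ** 2 for x in sig) / n
    std = math.sqrt(var) or 1.0  # avoid division by zero on flat signals
    return [(x - mean) / std for x in sig]

def correlation(a, b):
    """Pearson correlation of two equal-length signals."""
    a, b = normalize(a), normalize(b)
    return sum(x * y for x, y in zip(a, b)) / len(a)

def infer_media(luminance_trace, library):
    """Return the name of the library entry that best matches the trace."""
    return max(library, key=lambda name: correlation(luminance_trace, library[name]))

# Hypothetical library of audio envelopes (amplitude over time), one per song.
library = {
    "song_a": [0.1, 0.9, 0.2, 0.8, 0.1, 0.9, 0.2, 0.8],
    "song_b": [0.5, 0.5, 0.9, 0.1, 0.5, 0.5, 0.9, 0.1],
}

# Simulated luminance-meter readings: song_a's envelope plus slight noise.
observed = [0.12, 0.88, 0.22, 0.79, 0.11, 0.91, 0.19, 0.82]
print(infer_media(observed, library))  # → song_a
```

Longer observation windows give the correlation more samples to work with, which is consistent with the researchers' finding that accuracy improves with longer intervals.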
The covert data exfiltration threat is present in smart lightbulbs because, in theory, any light can transmit data. According to the research, this threat applies to smart lights that either have no hub connecting them or have a hub without permission controls. Using it, a malicious user can pull data out of an unsuspecting user's private network. The researchers tested the attack by sending strings and images over the bulb's infrared light, recovering binary data from the bulb's different power levels.
Text reconstructed at 15 meters (Maiti, 2019):
Original Text: A cup of sugar makes sweet fudge
Reconstructed Text: A buq pf!sugbr m`kes█suees hudfe
As the reconstruction shows, the text survives mostly readable even at that distance, which makes this a very dangerous threat: an attacker in the right position can use this channel to siphon sensitive information off a user's private network without any visible sign that data is leaving.
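The bit errors that garble the reconstructed text above can be illustrated with a toy encoding. This sketch uses my own simplified scheme, not the researchers' exact modulation: each bit of the message maps to one of two hypothetical infrared power levels, and flipping a few received samples corrupts individual letters just as in the 15-meter example.

```python
# Toy sketch of power-level exfiltration: encode text as two infrared
# power levels (one per bit), then show how receiver bit errors garble
# the decoded message. The levels and threshold are invented values.
HIGH, LOW = 1.0, 0.2  # hypothetical bulb power levels for bits 1 and 0

def to_levels(text):
    """Encode ASCII text as a sequence of power levels, 8 bits per char, MSB first."""
    bits = []
    for ch in text.encode("ascii"):
        bits.extend((ch >> i) & 1 for i in range(7, -1, -1))
    return [HIGH if b else LOW for b in bits]

def from_levels(levels):
    """Decode power levels back to text by thresholding each sample."""
    bits = [1 if lvl > 0.6 else 0 for lvl in levels]
    chars = []
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        chars.append(chr(byte))
    return "".join(chars)

message = "A cup of sugar makes sweet fudge"
levels = to_levels(message)

# Perfect channel: the message survives intact.
assert from_levels(levels) == message

# Noisy channel at long range: flip a few received samples and the
# decoded text comes back garbled, like the researchers' 15-meter test.
noisy = list(levels)
for i in (13, 47, 110):  # arbitrary corrupted sample positions
    noisy[i] = HIGH if noisy[i] == LOW else LOW
print(from_levels(noisy))  # readable overall, but some letters are wrong
```

Because each bit error only touches one character, the garbled output stays mostly legible, which is exactly why even a noisy long-range capture is still a privacy problem.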
These threats are actually very difficult to carry out, but given the right tools and proximity, a malicious user may be able to exploit them to obtain personal information. Proximity is the first thing to note: an attacker must be close enough to your devices to execute these attacks and extract or infer your data, so be careful who you let into your private network or into your home. If a malicious user is too far away, the captured signal may be too degraded to be of any use. Both of these threats can be carried out through a window, so curtains that do not let any light through could be a good preventive measure. Finally, you should buy smart lightbulbs that connect to a hub with permission controls; the research says that bulbs behind such a hub are not susceptible to the covert exfiltration threat.
Although smart lightbulbs are very convenient, to the extent that you no longer have to get up and walk to a switch to turn them on and off, and they provide extra features like musical or video lighting, they are prone to security vulnerabilities. As I found while doing this research, even though lightbulbs have very simple electronic circuitry, they evidently provide multiple access points through which malicious users can reach a person's sensitive data. I do not know what software engineering practice would best prevent these types of threats, but there has to be some remedy: security researchers are finding security holes nearly as fast as new devices are being released. You cannot even trust a lightbulb these days.
Maiti, A. (2019, September). Light Ears: Information Leakage via Smart Lights. Retrieved from sprite.utsa.edu: https://sprite.utsa.edu/publications/articles/maitiIMWUT19.pdf
Min, S. (2019, October 24). Are “smart” light bulbs a security risk? Retrieved from cbsnews.com: https://www.cbsnews.com/news/are-smart-light-bulbs-a-security-risk/