Smart Home Devices and Privacy Risk

Adam Jacobson


October 1, 2019

Internet-connected products have become enmeshed in many aspects of daily life, both at home and in the workplace. According to the consumer trends research firm Parks Associates, 36% of homes with a broadband internet connection have a smart speaker, and 8% have a video doorbell, with another quarter of respondents saying that they plan to buy a video doorbell in the future.

While “smart home” or internet of things (IoT) devices have become more prevalent and may make everyday or business tasks more convenient, they also diminish consumers’ privacy and introduce serious risks for users, device developers and manufacturers alike.

With connected devices, the connection goes two ways. When consumers bring internet-connected devices into their homes or businesses, the companies behind the devices gain access to a wealth of information about those consumers. Companies often do not explicitly disclose all of these practices to users, quietly harvesting, analyzing and sometimes selling their data to third parties like advertisers. As the devices are used in more settings every day from homes to offices, this data can include everything from recordings of deeply personal interactions to proprietary business information. In the process, developers risk running afoul of new privacy regulations and possibly losing consumer trust when the public discovers the extent of their activities. This could have serious ramifications, including millions (or even billions) of dollars in fines and reputation damage that can crater revenue.

Of the reasons Parks Associates survey respondents cited for not buying such devices, only 25% chose privacy and security concerns, but that may soon change. A recent wave of news stories revealed that employees and contractors associated with many of these products have been accessing, analyzing and storing customer recordings, sometimes without even customers’ tacit consent, including:

After revelations that Amazon had thousands of human beings listening to and transcribing audio from its home devices, the company admitted that it keeps some audio and transcripts indefinitely, sometimes even after customers try to delete them manually. Not only that, but Amazon contractors reportedly shared particularly amusing or disturbing clips with each other and even saw user locations paired with those recordings. The company’s privacy documentation does not explicitly disclose that human beings are listening to user audio.

Researchers also found that the company’s Echo Dot Kids Edition could collect recordings of a child speaking about her private health information and address, and when parents attempted to delete the information (as Amazon said they could), the device still retained it. And in January, The Intercept revealed that Amazon’s video doorbell company Ring provided a research and development team based in Ukraine with unencrypted access to every video from all Ring cameras worldwide for analysis and annotation, partially to strengthen its facial recognition software.

The Guardian reported in July that Apple used external contractors to review recordings from its voice-activated assistant, Siri, which can be easily activated by accident by saying words that sound similar to its name. Apple contractors had access to Siri recordings, including audio of “confidential medical information, drug deals, and recordings of couples having sex.” A whistleblower told reporters that the audio is linked to user information including location, contact details and app usage. Similar to Amazon, Apple does not clearly state that contractors may be listening to users’ audio.

Facebook hired hundreds of contractors to transcribe audio from the Voice to Text feature of its Messenger service. Additionally, the company said that if one person consented to transcription, both sides of the conversation would be transcribed. While Voice to Text is an optional feature that users may turn off, it is switched on by default and Facebook did not disclose that human beings would have access to users’ recordings, claiming only that “Voice to Text uses machine learning.”

Like Apple’s Siri, Google’s home assistant is often triggered by accident, not just by saying “Okay Google” or “Hey Google,” as advertised. Belgium-based VRT News revealed in July that human beings were processing the audio Google recorded (accidentally or intentionally). While the company claimed it removed identifying information from the recordings, a whistleblower provided VRT reporters with audio clips that clearly contained users’ addresses as well as other personal information.

In August, Motherboard reported that contractors were transcribing audio from online communication platform Skype’s Translator service, which provides nearly instant translations during calls. While Motherboard’s reporting focused on the types of personal information contractors could hear, Skype is used in a variety of settings, including for business calls that may contain proprietary or sensitive information. Microsoft also had contractors transcribing user audio from its virtual assistant, Cortana, as well as the video game console Xbox, which has some voice command functions connected to Cortana.

Many of these practices are technically within the bounds of the devices’ user agreements, and the companies claim they only used human review to improve the devices’ ability to understand speech. The companies have also repeatedly emphasized the low number of conversations reviewed by human beings. Yet these disclosures raise serious questions about the limits of privacy for connected households and businesses and the risks of using these devices.

Since the news broke, Amazon has added an opt-out option. Apple said it has paused human review, that it will let customers choose to opt out, and that only Apple employees will have access to recordings. Similarly, Google added opt-out options and paused human review, but only in Europe. Facebook also said that it has paused human review of its Voice to Text feature. Microsoft updated its privacy policy to explicitly warn of human review, and now offers a website where users can delete captured audio.

Nevertheless, regulators in Europe and the United States are taking notice. The European Union’s sweeping General Data Protection Regulation (GDPR) maintains stringent rules for data privacy in the EU, and some of these activities may fall outside its bounds. Ireland’s Data Protection Commission, which leads the EU regulation of Facebook, has been investigating the company on other matters, and is now seeking information about whether Facebook’s handling of user recordings is GDPR compliant. Similarly, Luxembourg’s data privacy regulator has asked Amazon to provide information about Alexa, but has not commented further on its investigation.

In the United States, meanwhile, Senators Ed Markey (D-MA) and Josh Hawley (R-MO) questioned Facebook’s collecting and sharing of user recordings. Hawley inquired on Twitter in August whether the practice violates Facebook’s recent FTC settlement, which imposed stricter privacy guidelines. Representative Seth Moulton (D-MA) introduced the “Automatic Listening Exploitation Act” in July, which would fine companies up to $40,000 whenever a device (including video doorbells) stores or makes a recording of a user or transfers that recording to a third party without the user’s explicit consent. It would force companies to allow users to delete any transcript or recording from their device and the company’s files permanently.

A bipartisan group of senators also wrote to the Federal Trade Commission (FTC) asking it to investigate whether Amazon’s Echo Dot Kids Edition violates the Children’s Online Privacy Protection Act (COPPA) by not complying with its parental consent provision, which requires companies to specify what data is being collected, how the company uses it and whether/how it is shared with third parties. Additionally, the senators noted that the device may violate COPPA by not allowing parents to fully delete their children’s private information.

FTC investigations of COPPA violations have hit multiple tech companies this year: In February, the FTC fined Chinese app company Musical.ly (now known as TikTok) $5.7 million for violating COPPA by collecting children’s information without parents’ consent. In September, the FTC also fined Google and YouTube a record $170 million for collecting children’s data and profiting by selling that data for targeted ads.

For consumers, one way to avoid any potential privacy invasion might be simply refusing to use “smart home” internet-connected devices and digital assistants. But as the devices become more prevalent, opting out may not be a viable option for long, especially as manufacturers develop new applications for business use. As reported by the Wall Street Journal, “Amazon’s Alexa Smart Properties team, a little known part of its Alexa division, is working on partnerships with homebuilders, property managers and hoteliers to push millions of Alexa smart speakers into domiciles all across the U.S.” The partnerships make sense: Amazon gets new users and their data, and property managers get discounted hardware (amenities to entice or keep potential buyers/renters) and access to information that allows them to analyze and predict their residents’ actions, such as signs of whether they are likely to renew their lease. Amazon has plans to make its devices part of everything from stadiums and hotels to hospitals and retirement homes. Meanwhile, Google is making a similar push into the real estate market.

As these devices proliferate, manufacturers should clearly disclose features that may compromise privacy and offer simple ways to opt out of them, both to protect users and to shield themselves from regulation, fines, lawsuits and bad publicity. And consumers—including businesses that increasingly use these devices and programs—will have to weigh the convenience of these products against the degree of privacy they may sacrifice.

Adam Jacobson is associate editor of Risk Management.