Amazon's Cloud Cam is at the centre of the latest privacy scare to hit the digital world. It turns out that Cloud Cam, the global e-commerce giant's app-controlled, Alexa-compatible indoor security camera, is not totally devoid of human intervention. According to a Bloomberg report, scores of Amazon employees based in India and Romania have been reviewing select video clips captured by Cloud Cam in order to improve its accuracy.
Citing five people who have worked on the programme or have direct knowledge of it, the report alleged that the video snippets are used to train AI algorithms to do a better job of distinguishing between a real threat and a false alarm. Bloomberg reported that, at one point, these human workers were responsible for reviewing and annotating roughly 150 security snippets, each up to 30 seconds long, on every day they worked.
The report raises concerns amid a slew of similar privacy scares involving tech giants including Google and Apple. However, Amazon has stated that the clips are submitted either through employee trials or customer feedback submissions intended to improve the service. “Using the ‘feedback’ option in the Cloud Cam app, customers are able to share a specific clip with Amazon to improve the service,” tech portal Gizmodo reported, citing an Amazon spokesperson.
“When a customer chooses to share a clip, it may get annotated and used for supervised learning to improve the accuracy of Cloud Cam’s computer vision systems. For example, supervised learning helps Cloud Cam better distinguish different types of motion so we can provide more accurate alerts to customers... Every clip surfaced to a Cloud Cam customer has the ‘Send Feedback’ button at the bottom. Customers typically send clips for feedback if there was something wrong with it, i.e. if they got a motion detection alert but the clip doesn’t contain any motion, or the resolution of the clip isn’t satisfactory,” the spokesperson said. The company insisted that all the clips are provided voluntarily.
However, the issue is that nowhere is it explicitly stated that humans would be viewing the clips and training the algorithms behind the motion detection software. What's more alarming is that, according to two of the people Bloomberg spoke to, the teams have picked up activities that homeowners are unlikely to want shared, including rare instances of people having sex. More troubling still, one of the sources said that the video clips might even have been shared with outsiders, despite the fact that reviews happen in a restricted area that prohibits phones.
In short, while customers may be sharing videos with the company for troubleshooting purposes, they are not necessarily aware of what happens to those clips afterwards.
This is not the first time AI- and IoT-based devices have come under fire for undisclosed human involvement. Earlier reports, citing whistleblowers' revelations, accused Apple's Siri, Google Assistant and Amazon's Alexa of having humans listen to voice assistant recordings. In separate instances, both Google and Amazon admitted that their contractors were listening to recordings of conversations between users and their voice assistants, Google Assistant and Alexa.
In August, Apple suspended its global programme of analysing recordings of users interacting with its voice assistant, Siri, over privacy concerns.