09/17/2021 / By Mary Villareal
In March 2019, Facebook CEO Mark Zuckerberg unveiled a new “privacy-focused vision,” citing the company’s global messaging service, WhatsApp, as a model. He said he hoped to bring about a future in which people can be confident that what they say to each other stays secure.
Zuckerberg’s vision centered on WhatsApp’s signature feature: end-to-end encryption, which converts all messages into an unreadable format that is unlocked only when they reach their intended recipients.
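In broad terms, end-to-end encryption means the scrambling key exists only on the two endpoints, so any server relaying the message sees only unreadable ciphertext. The toy sketch below illustrates that idea with a one-time-pad XOR in Python. It is a deliberately simplified illustration, not the Signal protocol that WhatsApp actually uses, and the function names here are invented for this example.

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with a random key byte.
    # Without the key, the result is statistically random noise.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: applying the same key again
    # recovers the original message.
    return encrypt(ciphertext, key)

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # held only by the two endpoints

ciphertext = encrypt(message, key)  # this is all the relaying server sees
assert ciphertext != message                 # unreadable in transit
assert decrypt(ciphertext, key) == message   # recipient recovers the text
```

The point of the sketch is the trust model, not the cipher: whoever carries the ciphertext without the key, be it a courier or a messaging server, learns nothing about the content.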
He said that WhatsApp messages are so secure that nobody else can read them, not even the company itself.
However, these assurances are not necessarily true.
WhatsApp has over 1,000 contract workers filling its office buildings in Texas, Dublin and Singapore, where they examine millions of users’ content. Using Facebook software, these workers sift through private messages, images and videos that have been reported by users as improper and then screened by the company’s own artificial intelligence systems.
The contractors pass judgment on whatever appears on their screens, typically in less than a minute. The flagged content can involve anything from fraud or spam to child pornography or terrorist plotting.
Policing users while assuring them of their privacy seems contradictory — and it makes for an awkward mission at the company.
Director of Communications Carl Woog acknowledged the teams of contractors in various locations who review WhatsApp messages to remove “the worst” abusers. He also said that he does not consider this work to be content moderation. (Related: Millions of WhatsApp users moving to rival services over data privacy concerns.)
The company said that WhatsApp is a lifeline for millions around the world, and the decisions made around how it is built are focused on the privacy of its users while keeping a high degree of reliability to prevent abuse.
In a statement, a WhatsApp spokesperson said, “WhatsApp provides a way for people to report spam or abuse, which includes sharing the most recent messages in a chat. This feature is important for preventing the worst abuse on the internet. We strongly disagree with the notion that accepting reports a user chooses to send us is incompatible with end-to-end encryption.”
WhatsApp says that how it moderates content is noticeably different from Facebook or Instagram, neither of which is encrypted. Those social media networks release quarterly transparency reports that detail how many actions they have taken against various categories of abusive content.
An army of content reviewers, however, compromises the privacy of WhatsApp users. This means the app is far less private than its two billion users likely expect or understand.
An investigation drawing on interviews with current and former WhatsApp employees and contractors revealed that since purchasing WhatsApp in 2014, Facebook has undermined its security assurances. A complaint details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos.
Facebook Inc. also downplayed the sheer amount of data it collects from users, what it does with it and how much it shares with authorities. WhatsApp shares metadata or unencrypted records that can reveal a lot about a user’s activity.
Rival companies such as Signal intentionally gather less metadata to avoid invasion of their users’ privacy. Thus, they share far less information with law enforcement.
Like other social media and communications platforms, WhatsApp is torn between users who expect privacy and law enforcement entities that ask them to turn over information that could help combat crime and online abuses.
However, the company asserted that this is not a dilemma at all. Will Cathcart, head of WhatsApp, said in an interview, “I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes.”
WhatsApp’s aggressive business plan also focuses on charging companies for an array of services, such as letting users make payments through the app and managing customer service chats. These services offer convenience but fewer privacy protections, creating a confusing two-tiered privacy system within the app: the protections of end-to-end encryption are compromised when users employ the service to communicate with these businesses.
COPYRIGHT © 2017 COMPUTING NEWS