WhatsApp can read some messages despite E2E encryption

WhatsApp can read its users' conversations despite end-to-end encryption, albeit in a very limited way and under very specific circumstances: a report by ProPublica, a non-profit investigative newsroom, has shed light on a practice the Facebook-owned messaging platform has been carrying out for some time, but which receives little publicity.

It all comes down to the Report feature, found in the three-vertical-dots menu at the top right of a chat. A message can be reported to WhatsApp because, for example, it is considered spam, hate speech, child pornography content, and so on. WhatsApp then receives the reported message along with the four that preceded it, so that moderators can evaluate the context. Note that those four messages can be of any type: plain text, video, images and other special content.

These messages are sent in the clear, that is, unencrypted. They are first processed by an automatic AI system, whose job is to skim the queue, for example by comparing them against databases of content already known to be prohibited; they are then passed on to the team of human moderators.
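The triage step described above can be sketched in code. This is a minimal, hypothetical illustration only: real systems of this kind use perceptual hashing (so that re-encoded or cropped media still match), not the plain SHA-256 exact matching shown here, and none of the names below reflect WhatsApp's actual implementation.

```python
import hashlib

# Hypothetical database of hashes of content already known to be prohibited.
# Illustrative only: production systems use perceptual hashes, not SHA-256.
KNOWN_PROHIBITED_HASHES = {
    hashlib.sha256(b"known-banned-content").hexdigest(),
}

def triage(reported_messages: list) -> list:
    """Skim the queue: auto-flag exact matches against the known database;
    everything else is forwarded to the human moderation queue."""
    for_humans = []
    for msg in reported_messages:
        digest = hashlib.sha256(msg).hexdigest()
        if digest in KNOWN_PROHIBITED_HASHES:
            # Matched known-prohibited content: handled automatically.
            print("auto-flagged:", digest[:12])
        else:
            for_humans.append(msg)
    return for_humans

queue = [b"known-banned-content", b"needs human review"]
remaining = triage(queue)
```

Here the first item is caught automatically and only the second reaches a human, which is the point of the skimming step: shrinking the queue before the expensive manual review.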

It is unclear how many moderators there are in all, but there are at least a thousand, all employees of Accenture, a huge contractor that works with many of the world's largest companies. On average, each moderator has to assess around 600 reports per eight-hour shift, which leaves less than a minute per report (480 minutes ÷ 600 ≈ 48 seconds).

In addition to the messages, however, the reporting tool sends WhatsApp a large amount of other information (metadata) about everyone in the chat (this also applies to groups), including:

  • Names
  • Profile pictures
  • Phone numbers
  • Status message
  • The battery level of the device
  • Language and time zone
  • IP address
  • Unique identification code of the device
  • Network / Wi-Fi signal strength
  • Device operating system
  • List of electronic devices
  • Details on any Facebook and Instagram accounts connected to the same profile
  • The last time the app was used
  • Any previous violations
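To make the scope of the list above concrete, the bundle can be modeled as a data structure. Every field name below is hypothetical, chosen only to mirror the items in the list; this is not WhatsApp's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ReportedUserMetadata:
    # Hypothetical field names mirroring the metadata list above.
    name: str
    phone_number: str
    status_message: str
    battery_level: int                # percent
    language: str
    timezone: str
    ip_address: str
    device_id: str                    # unique device identifier
    signal_strength: int              # network / Wi-Fi, e.g. in dBm
    operating_system: str
    devices: list = field(default_factory=list)
    linked_accounts: list = field(default_factory=list)  # Facebook / Instagram
    last_app_use: str = ""
    previous_violations: int = 0

@dataclass
class Report:
    reported_message: bytes
    preceding_messages: list          # the four messages before the reported one
    participants: list                # metadata for everyone in the chat, groups included

example = Report(
    reported_message=b"offending message",
    preceding_messages=[b"msg -4", b"msg -3", b"msg -2", b"msg -1"],
    participants=[],
)
```

The key takeaway is that a single report carries per-person metadata for every chat participant, not just for the reported sender.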

Moderators can choose to do nothing, ban the account outright, or keep it active but under watch. There are different categories of prohibited content, for both private users and businesses: for example child pornography, credible terrorist threats, spam, disinformation and sexual blackmail, but also violations of trade laws by businesses.

So far, this is all fairly standard: virtually every online platform has a moderation system that works much as described here. The point is that, unlike the others, WhatsApp neither publicly explains in detail how the process works nor publishes periodic transparency reports.

WhatsApp began hiring external workers for moderation around 2019, when the company was fined $5 billion over the WhatsApp acquisition and for deceiving users on privacy, and when Mark Zuckerberg announced the entire group's change of course in the name of privacy.
