Twitter will use new data from reported tweets to detect platform issues earlier

According to a recent report, Twitter is changing the way users report tweets they believe may violate its rules. Like most social media companies, Twitter has long relied on user reports to flag tweets that may violate policy. But in its new vision for the system, these reports will give the company a richer picture of behavior across the platform, rather than just a way to evaluate individual incidents in isolation.

Twitter is currently testing the new system with a small group of users in the United States and plans to roll it out more broadly starting in 2022. The system replaces the existing prompts users see when they select content to report.

One of the biggest changes is that users will no longer be asked to specify which policy they believe a tweet violates. Instead, they will describe the problem, and Twitter's automated system will suggest which rule the tweet may have broken.

Users reporting tweets can accept this suggestion or flag it as wrong, and Twitter can use that feedback to improve its reporting system. The company likens the approach to visiting a doctor: you describe your symptoms rather than attempting a self-diagnosis.
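The symptoms-first flow described above could be sketched roughly as follows. This is a hypothetical illustration, not Twitter's actual implementation: the rule categories, keyword matching, and function names are all invented for the example.

```python
# Hypothetical sketch of a "symptoms-first" report flow. The rule
# categories and keyword lists below are illustrative only and are
# not Twitter's real classification logic.

RULE_KEYWORDS = {
    "harassment": {"harass", "bully", "attack"},
    "hate_speech": {"slur", "hateful"},
    "misinformation": {"fake", "misleading", "false"},
}

def suggest_rule(description):
    """Guess which rule a reported tweet may violate from the
    reporter's free-text description of the problem."""
    words = set(description.lower().split())
    for rule, keywords in RULE_KEYWORDS.items():
        if words & keywords:
            return rule
    return None  # gray area: no specific rule matched

def handle_report(description, user_accepts):
    """Suggest a rule, then record whether the reporter agreed.
    The accept/reject signal is the feedback loop the article
    describes for improving future suggestions."""
    suggestion = suggest_rule(description)
    accepted = user_accepts(suggestion)
    return {"suggestion": suggestion, "accepted": accepted}

report = handle_report("this account keeps trying to harass me",
                       user_accepts=lambda rule: rule == "harassment")
print(report)  # {'suggestion': 'harassment', 'accepted': True}
```

In a real system the keyword lookup would be replaced by a trained classifier, with the accept/reject feedback used as labeled training data.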

Renna Al-Yassini, a senior user experience manager on the Twitter team, said: "Reporting can be frustrating and complicated. We enforce according to the terms of service defined by the Twitter Rules. Much of the content people report falls into a larger gray area that does not meet the specific criteria for a Twitter violation, but they are still reporting real problems and high levels of distress they have experienced."


Twitter hopes to collect and analyze the reports that fall into this gray area and use them to give the company a snapshot of problematic behavior on the platform. Ideally, Twitter could identify new trends in harassment, misinformation, hate speech, and other problem areas as they emerge, rather than after they have taken root. The company declined to comment on whether these changes will require hiring more human reviewers, saying only that it will combine manual and automated review to process the information.
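One simple way to surface emerging trends from aggregated reports is to compare current report volumes per category against a historical baseline. The sketch below is purely illustrative; the categories, counts, and spike threshold are invented assumptions, not anything Twitter has described.

```python
# Illustrative sketch of spotting emerging trends in gray-area
# reports by comparing current volume to a historical baseline.
# Categories and the spike threshold are made up for this example.
from collections import Counter

def emerging_trends(current_reports, baseline_counts, spike_ratio=2.0):
    """Flag categories whose report volume in the current window is
    at least spike_ratio times their historical baseline."""
    current = Counter(current_reports)
    return [category for category, count in current.items()
            if count >= spike_ratio * baseline_counts.get(category, 1)]

baseline = {"harassment": 10, "misinformation": 3}
this_week = ["misinformation"] * 8 + ["harassment"] * 9
print(emerging_trends(this_week, baseline))  # ['misinformation']
```

Here misinformation reports (8) more than doubled their baseline (3) and get flagged, while harassment reports (9) stayed below twice their baseline (10) and do not.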

The company also hopes to close the loop with users who file reports, making the process feel more meaningful even when a tweet does not prompt enforcement action. The new system is designed to address common complaints the company identified while studying how to make the platform safer.

Fay Johnson, director of product management for Twitter's health team, said: "This helps us solve for the unknowns… We also want to make sure that if new problems arise, ones we may not yet have rules for, there is a way for us to understand them."
