Meta launches AI knowledge tool Sphere for open web content

Facebook parent company Meta today announced a new tool called Sphere, which mines the vast repository of information on the open web to provide a knowledge base for artificial intelligence and other systems.

Sphere’s first user is Wikipedia, which is using it to automatically scan entries and identify when their citations are well supported or unsupported. The research team has open-sourced Sphere, which is currently based on 134 million public web pages.

The idea of using Sphere for Wikipedia is simple: the online encyclopedia has 6.5 million entries and grows by about 17,000 articles per month. The wiki concept means that adding and editing content is crowdsourced, and while a team of editors oversees it, verification is a daunting task that grows day by day, not just because of the site’s size but also because of its mandate.

Meanwhile, the Wikimedia Foundation, which oversees Wikipedia, has been weighing new ways to use all this data. Last month, it announced an enterprise tier and its first two commercial customers, Google and the Internet Archive, which use Wikipedia-based data for their own commercial purposes and will now have a broader and more formal service agreement around it.

As for Meta, the company continues to be weighed down by bad public perception, in part because it has been accused of allowing misinformation and toxic ideas to spread freely. Launching something like Sphere therefore feels a bit like a PR campaign; if it works, though, it could be a useful tool and a sign that someone in the organization is trying to work in good faith.

Today’s announcement of Meta’s partnership with Wikipedia doesn’t mention Wikimedia Enterprise, but more broadly, adding tools that help ensure Wikipedia’s content is verified and accurate is exactly the kind of thing potential enterprise customers would want to see when considering paying for the service.

It’s unclear what the terms of the deal are, or whether Wikipedia is a paying customer of Meta. Meta did note, however, that to train the Sphere model, it created a new dataset (WAFER) comprising 4 million Wikipedia citations, considerably more complex than previous datasets used for this type of research. And just five days ago, Meta announced that Wikipedia editors were also using a new AI-based language-translation tool it built, so there is clearly an ongoing relationship.
