Facebook and Instagram owner Meta has built an AI system designed to tackle harmful content online.
The social media giant claims that its Few-Shot Learner (FSL) technology can adapt to take action on new or evolving types of harmful content within weeks instead of months.
The new system uses “few-shot learning”, meaning it starts with a general understanding of a topic and then needs far fewer labelled examples to learn new tasks.
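As a rough illustration of the idea (a minimal sketch only, not Meta’s FSL, which has not been released: the sentence-transformers library, the model name and the example posts are all assumptions), a few-shot text classifier can be built by averaging the embeddings of a handful of labelled examples into class prototypes and assigning each new post to the nearest prototype:

```python
# Minimal few-shot sketch -- NOT Meta's FSL. A pretrained encoder supplies
# the "general understanding"; a handful of labelled examples defines the task.
# Assumes the sentence-transformers package; the model name is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# The "few shots": a tiny labelled set for a brand-new policy (hypothetical).
examples = {
    "violating": ["This miracle cure means you should refuse the vaccine"],
    "benign": ["Booked my vaccination appointment for Friday"],
}

# Average each class's example embeddings into a single prototype vector.
prototypes = {
    label: encoder.encode(texts).mean(axis=0)
    for label, texts in examples.items()
}

def classify(post: str) -> str:
    """Label a post by cosine similarity to the nearest class prototype."""
    v = encoder.encode([post])[0]
    cosine = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(prototypes, key=lambda label: cosine(v, prototypes[label]))

print(classify("Doctors don't want you to know vaccines are unnecessary"))
```

Because the pretrained encoder already captures general language, a handful of examples is enough to define the new task, which is what lets a system of this kind adapt in weeks rather than months.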
The announcement comes a day after the company attended a US Senate hearing into the impact of Instagram on young people.
On Tuesday Instagram said it was launching a new set of tools and features to keep teenagers safe on the platform.
FSL can be used in more than 100 languages and learns from different kinds of data, such as images and text.
Meta said that the technology will help augment its existing methods of addressing harmful content.
The AI system works across three scenarios, each of which requires a different number of labelled examples. Zero-shot needs only a policy description and no examples; few-shot with demonstration pairs the policy description with a small set of examples; and low-shot with fine-tuning lets machine-learning developers fine-tune the FSL base model on a small number of training examples.
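The three regimes can be sketched with off-the-shelf tools (a hedged illustration only: Meta has not released FSL, so Hugging Face’s public zero-shot pipeline stands in for the base model, and the policy wording and posts are invented):

```python
# Illustrative only -- a public entailment model stands in for Meta's FSL.
# The policy wording, example posts and model choice are all assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

POLICY = "misleading information discouraging Covid-19 vaccination"

# 1. Zero-shot: the policy description alone steers the model.
print(classifier("Vaccines secretly alter your DNA",
                 candidate_labels=[POLICY, "benign content"]))

# 2. Few-shot with demonstration: prepend a small set of labelled examples
#    so the model can condition on them at inference time.
demos = ("Violating: 'Doctors are hiding the real vaccine death toll.'\n"
         "Benign: 'Got my second dose today, arm is a bit sore.'\n")
print(classifier(demos + "Post: Vaccines secretly alter your DNA",
                 candidate_labels=[POLICY, "benign content"]))

# 3. Low-shot with fine-tuning: rather than conditioning at inference time,
#    update the base model's weights on the small labelled set (e.g. with
#    transformers.Trainer); omitted here as it needs a full training loop.
```

Each step down the list uses more labelled data, which in general buys better accuracy on the new policy at the cost of more labelling effort.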
The company has trialled FSL on a number of recent events, including a task to identify content that shares misleading or sensationalised information discouraging Covid-19 vaccinations.
Another task saw the AI improve an existing classifier that flags content that comes close to inciting violence, for example: “Does that guy need all of his teeth?”
The company said that the traditional approach may have missed these types of inflammatory posts because there aren’t many labelled examples that use references to teeth to imply violence.
“We’ve also seen that, in combination with existing classifiers along with efforts to reduce harmful content, ongoing improvements in our technology and changes we made to reduce problematic content in News Feed, FSL has helped reduce the prevalence of other harmful content like hate speech,” wrote the company on Thursday. “We believe that FSL can, over time, enhance the performance of all of our integrity AI systems by letting them leverage a single, shared knowledge base and backbone to deal with many different types of violations.
“There’s a lot more work to be done, but these early production results are an important milestone that signals a shift toward more intelligent, generalized AI systems.”