OpenAI has said that artificial intelligence (AI) tools can be effective in moderating online content posted on social media platforms.
The company claims that large language models (LLMs) such as GPT-4, the technology behind its ChatGPT chatbot, can speed up the process and reduce the “mental burden” on human moderators.
OpenAI said it was also experimenting with ways to detect unknown risks, adding that, as with all AI tools, content moderation would still require human oversight.
“Content moderation demands meticulous effort, sensitivity, a profound understanding of context, as well as quick adaptation to new use cases, making it both time consuming and challenging,” said OpenAI on its website. “Our large language models like GPT-4 can understand and generate natural language, making them applicable to content moderation. The models can make moderation judgments based on policy guidelines provided to them.”
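In practice, providing a policy to the model and asking for a judgment might look something like the sketch below, which uses OpenAI's Python library. The policy text, labels and example post are illustrative placeholders, not OpenAI's actual moderation guidelines:

```python
# A minimal sketch of policy-driven moderation with GPT-4.
# The policy wording and labels below are invented for illustration;
# they are not OpenAI's real moderation guidelines.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """You are a content moderator. Classify the user's post
against this example policy:
- ALLOW: ordinary discussion, criticism, humour
- FLAG: harassment, threats, hate speech, self-harm content
Reply with exactly one word: ALLOW or FLAG."""

def moderate(post: str) -> str:
    """Return the model's moderation judgment for a single post."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep judgments as consistent as possible
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": post},
        ],
    )
    # A human reviewer would still check these judgments,
    # in line with OpenAI's note on oversight.
    return response.choices[0].message.content.strip()

print(moderate("You lot are all idiots and I know where you live."))
# Illustrative output: FLAG
```

Because the policy is supplied as plain text in the prompt, platforms could in principle update their moderation rules without retraining a model, simply by editing the guidelines the model is given.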
The comments follow the news that almost 34,000 online grooming crimes against children have been recorded by UK police since 2017.
The NSPCC has called on MPs and tech firms to back the Online Safety Bill, which would force tech firms to assess their products for the risk of harm.
Earlier this week, an in-depth report suggested OpenAI could go bankrupt within the next 18 months. The company spends an estimated $700,000 per day to run ChatGPT.
While OpenAI is backed by investors including Microsoft, the company is not yet profitable and in May reported annual losses of around $540 million.