Reddit has rolled out a set of new content moderation and analysis tools.
The suite includes a ‘post control’ function that allows users to check whether their content may conflict with a community's rules before publishing it.
The feature, currently being tested on iOS and Android, aims to help users avoid accidentally breaking the rules and having their posts removed.
The platform also offers a ‘post recovery’ prompt, available on desktop and in the Reddit apps, which suggests an alternative subreddit where a post already removed for breaking community rules can be republished.
Posting requirements, such as minimum account age or karma (a score reflecting the upvotes and downvotes a user’s posts and comments have received), will also appear automatically on desktop and in the Reddit apps, with the aim of avoiding confusion and increasing the number of successfully published posts.
The platform also suggests which communities are best suited to the content users wish to post, helping them find a matching audience.
The suite also includes an analytics tool that reports on the performance of users’ posts, including views, shares and more, to help them refine future submissions.
Reddit’s move comes as some social media platforms backtrack on their moderation schemes, with Meta founder Mark Zuckerberg announcing earlier this year that the company would replace its fact-checking process with a community notes model similar to that of X.
Over the past year, Reddit has improved its technology with new search tools to increase engagement and facilitate user retention.
In December 2024, an AI-powered search engine called Reddit Answers was introduced on the platform.
The tool is designed to provide users with curated summaries of discussions, allowing audiences to ask questions and receive AI-generated answers, along with links to related posts and communities.
Earlier this week, the Information Commissioner’s Office (ICO) launched three investigations into how TikTok, Reddit and Imgur protect the privacy of their child users in the UK.
The regulator said that it has concerns about social media and video sharing platforms using data generated by children's online activity in their recommender systems, which could lead to young people being served inappropriate or harmful content.