The government has said that the Online Safety Bill will no longer require technology companies to remove online content that is harmful but legal.
The amendment to the upcoming law will mean that social media platforms no longer need to remove racist, misogynistic, or antisemitic content, or posts that glorify eating disorders, as these do not meet the criminal threshold.
Instead, companies will be forced to offer adults tools to "help them avoid" this kind of harmful content, including human moderation, blocking content flagged by other users, and sensitivity and warning screens.
While companies will still need to remove illegal online content, the new legislation will not define specific types of legal content that companies must address.
The Department for Digital, Culture, Media & Sport (DCMS) said that the move “removes any influence future governments could have on what private companies do about legal speech on their sites, or any risk that companies are motivated to take down legitimate posts to avoid sanctions”.
The "legal but harmful" measures will be replaced with new duties which will "explicitly prohibit them from removing or restricting user-generated content, or suspending or banning users, where this does not breach their terms of service or the law".
“I will bring a strengthened Online Safety Bill back to Parliament which will allow parents to see and act on the dangers sites pose to young people,” said digital secretary Michelle Donelan. “It is also freed from any threat that tech firms or future governments could use the laws as a licence to censor legitimate views.”