Ireland’s media and internet regulator has finalised a new set of rules for tech companies to contain “harmful content” on social media platforms, according to an official statement.
The regulatory framework, titled the “Online Safety Code”, lays out binding rules that will apply to video-sharing platforms whose European Union (EU) headquarters are based in Ireland. The Code will mark a significant departure from “self-regulation”.
The rules will come into force next month, with the Irish watchdog Coimisiún na Meán supervising their implementation. They will apply to ByteDance’s TikTok, Google’s YouTube, and Meta’s Instagram and Facebook.
“The code sets binding rules for video-sharing platforms to follow in order to reduce the harm they can cause to users,” says the watchdog’s online safety commissioner.
“We will work to make sure that people know their rights when they go online and we will hold the platforms to account and take action when platforms don’t live up to their obligations,” the commissioner adds.
The new rules place a specific focus on children, prohibiting “the uploading or sharing of harmful content on their services including cyberbullying, promoting self-harm or suicide and promoting eating disorders”.
Meanwhile, Jeremy Godfrey, executive chairperson of Coimisiún na Meán, says, “Our message to people is clear: if you come across something you think is illegal or against a platform’s own rules for what they allow, you should report it directly to the platform.”
Content promoting eating disorders, along with incitement to hatred or violence, terrorism, child sexual abuse material (CSAM), racism and xenophobia, will also need to be contained.
The Online Safety Code reinforces the need for preventive measures, including age verification and parental controls, for sexually explicit and violent content.
The rules also mandate parental controls for content that “may impair the physical, mental, or moral development of children under 16”.