Italy has approved a new set of rules in an effort to protect children on video-sharing platforms, AGCOM, the country’s communications watchdog, announced last week.
The regulations will target leading video platforms such as YouTube, which is owned by Alphabet; Meta Platforms’ Instagram; and ByteDance’s immensely popular short-video service, TikTok. The new rules will take effect on January 8, 2024.
In a statement (translated online) dated Thursday, December 7, the watchdog said the rules will require tech companies to implement stricter measures against videos deemed threatening to underage users. Content propagating religious and ethnic hatred or promoting racial and sexual attacks online will fall under the new law.
The updated legislation will also cover various other forms of potentially harmful and illegal content, the regulator added.
The rules will empower AGCOM to take strict regulatory action against video platforms in the event of violations, and the law will also apply to digital platforms based in other European Union (EU) countries. Before enforcing the rules against such platforms, however, the Italian regulator will have to consult the national authority of the country concerned.
The authority of the member state in question will have seven days to ask the platform to take down the flagged content. If that authority’s response does not satisfy the Italian regulator, AGCOM will pursue the takedown request with the platform directly.
The development arrives after the EU enforced the Digital Services Act (DSA), which imposes stricter regulations on tech companies to counter harmful content and places increased emphasis on child safety. Since the DSA came into force, regulatory action against tech firms over child protection on social media platforms has accelerated significantly.
The DSA demands more transparency around personalised ads, which are served using personal data that tech giants collect, in the case of children often without parental consent. The legislation also targets social media recommendation algorithms, which suggest content based on a user’s online activity, activity that social media companies frequently track without consumers’ knowledge.
Meta, in particular, has come under intense scrutiny for what regulators describe as its “failures” to protect minors from malicious actors, specifically predators targeting children and steering them into groups rife with material depicting child sexual abuse. TikTok, Snapchat, and YouTube are also under regulatory scrutiny in the region under the DSA, violations of which can draw fines of up to six per cent of a company’s global annual turnover.