Google, Twitter and TikTok have received legal notices from Australian regulatory authorities ordering them to submit information on their efforts to contain child exploitation material.
The notices were issued on Wednesday by Australia’s e-Safety Commissioner, Julie Inman Grant. Google, Twitter and TikTok will have to answer questions about their handling of online child abuse and blackmail. Twitch and Discord have also been served with notices.
The platforms have 35 days to respond. Companies that fail to do so face fines of up to $700,000 a day.
“We’ve been asking a number of these platforms for literally years: what are you doing to proactively detect and remove child sexual abuse material?” said Inman Grant. “And we’ve gotten what I would describe as, you know, not quite radical transparency.”
The move draws particular attention to Twitter’s anti-exploitation policies following billionaire Elon Musk’s $44 billion takeover of the company in October 2022. Musk had pledged to stamp out child exploitation on Twitter, but that commitment was called into question by the sweeping layoffs that followed, which became the subject of intense media scrutiny and expert criticism.
The commissioner said Musk’s Twitter would now have an opportunity to detail its efforts to counter online child abuse. Inman Grant expressed concern about how harmful material is handled on Twitter, given that several safety teams responsible for protecting children have already been laid off.
“Back in November, Twitter boss Elon Musk tweeted that addressing child exploitation was priority No 1 but we have not seen detail on how Twitter is delivering on that commitment,” she said. “We’ve also seen extensive job cuts to key trust and safety personnel across the company – the very people whose job it is to protect children – and we want to know how Twitter will tackle this problem going forward.”
The platforms will have to explain their mechanisms for detecting child exploitation material, including during live streams. They must also clarify how their algorithms could amplify the reach of illegal material, as well as their responses to extortion attempts against children.
“The creation, dissemination and viewing of online child sexual abuse inflicts incalculable trauma and ruins lives. It is also illegal,” the commissioner said. “It is vital that tech companies take all the steps they reasonably can to remove this material from their platforms and services.”
In response to the notice, Samantha Yorke, Google’s senior manager for government affairs and public policy, said that child sexual abuse has no place on Google’s platforms.
“We utilise a range of industry-standard scanning techniques including hash-matching technology and artificial intelligence to identify and remove child sexual abuse material that has been uploaded to our services,” said Yorke.
TikTok said it has a “zero-tolerance approach to predatory behaviour and the dissemination of child sexual abuse material”.
“We have more than 40,000 safety professionals around the world who develop and enforce our policies, and build processes and technologies to detect, remove or restrict violative content at scale.”
Discord also confirmed that it would respond to the e-Safety Commissioner’s demand.
“We have zero tolerance for content that threatens child safety online, and firmly believe this type of content does not have a place on our platform or anywhere in society,” a spokesperson said. “This is an area of critical importance for all of us at Discord, and we share the office’s commitment to creating a safe and positive experience online.”
In August 2022, Australia’s e-Safety Commissioner issued similar notices to Apple, Meta and Microsoft.