Britain Passes Sweeping New Online Safety Law

by Pelican Press


Britain on Tuesday passed a sweeping law to regulate online content, introducing age-verification requirements for pornography sites and other rules to reduce hate speech, harassment and other illicit material.

The Online Safety Bill, which also covers terrorist propaganda, online fraud and child safety, is one of the most far-reaching attempts by a Western democracy to regulate online speech. About 300 pages long, the new rules took more than five years to craft, setting off intense debates about how to balance free expression and privacy against the need to bar harmful content, particularly content aimed at children.

At one point, messaging services including WhatsApp and Signal threatened to abandon the British market altogether until provisions in the bill that were seen as weakening encryption standards were changed.

The British law goes further than efforts elsewhere to regulate online content, forcing companies to proactively screen for objectionable material and to judge whether it is illegal, rather than requiring them to act only after being alerted to illicit content, according to Graham Smith, a London lawyer focused on internet law.

It is part of a wave of rules in Europe aimed at ending an era of self-regulation in which tech companies set their own policies about what content could stay up or be taken down. The Digital Services Act, a European Union law, recently began taking effect and requires companies to more aggressively police their platforms for illicit material.

“The Online Safety Bill is a game-changing piece of legislation,” Michelle Donelan, Britain’s technology secretary, said in a statement. “This government is taking an enormous step forward in our mission to make the U.K. the safest place in the world to be online.”

British political figures have been under pressure to pass the new policy as concerns grew about the mental health effects of internet and social media use among young people. Families who attributed their children’s suicides to social media were among the most aggressive champions of the bill.

Under the new law, content aimed at children that promotes suicide, self-harm and eating disorders must be restricted. Pornography companies, social media platforms and other services will be required to introduce age-verification measures to prevent children from gaining access to pornography, a shift that some groups have said will harm the availability of information online and undercut privacy. The Wikimedia Foundation, the operator of Wikipedia, has said it will be unable to comply with the law and may be blocked as a result.

TikTok, YouTube, Facebook and Instagram will also be required to introduce features that allow users to choose to see less harmful content, such as material promoting eating disorders, self-harm, racism, misogyny or antisemitism.

“At its heart, the bill contains a simple idea: that providers should consider the foreseeable risks to which their services give rise and seek to mitigate — like many other industries already do,” said Lorna Woods, a professor of internet law at the University of Essex, who helped draft the law.

The bill has drawn criticism from tech firms, free speech activists and privacy groups, who say it threatens freedom of expression because it will push companies to err on the side of taking down content.

Questions remain about how the law will be enforced. That responsibility falls to Ofcom, the British regulator in charge of overseeing broadcast television and telecommunications, which now must outline rules for how it will police online safety.

Companies that do not comply will face fines of up to 18 million pounds, or about $22.3 million, or 10 percent of global revenue, whichever is greater. Company executives could face criminal action for not providing information during Ofcom investigations, or if they do not comply with rules related to child safety and child sexual exploitation.


