UK set to impose bad internet law

The UK’s ambitious and controversial plan to regulate the internet began with scribbles on the back of a packet of Brie and cranberry sandwiches from Pret a Manger. The notes, from discussions between academics Lorna Woods and William Perrin on how to hold tech companies accountable for online harm, became an influential white paper in 2019. That paper in turn became the basis for the Online Safety Bill, an ambitious attempt to make the UK “the safest place in the world to be online” by regulating how platforms deal with harmful content, including images of child sexual abuse, cyberbullying and misinformation.

Since then, Britain has seen three prime ministers (and one lettuce), four digital ministers, a pandemic and a difficult exit from the European Union. Successive iterations of the ruling Conservative government have expanded the bill far beyond Woods and Perrin’s paper, transforming it from a genuine attempt to hold tech platforms accountable for hosting harmful content into a reflection of Britain’s political dysfunction after Brexit.

The current government is widely expected to be voted out of power next year, but the bill returns to the House of Commons today, where MPs will have their last chance to debate its contents. “It’s very different from the sandwich packet, not least because there’s no trace of brie on it,” says Woods, a law professor at the University of Essex. More importantly, the successive Conservative administrations have each left their own mark. “I think it may have added to the Baroque ornamentation,” Woods says.

Many others are much less measured in their criticism. The bill as it stands today stretches to more than 260 pages, reflecting how ministers and MPs have bolted on their own concerns, from cancel culture to national security to immigration. Many of the original provisions on disinformation have been removed or watered down. Additions to the bill include a controversial requirement that messaging platforms scan content for images of child sexual abuse, which tech companies and privacy advocates say cannot be done without weakening end-to-end encryption.

Major platforms including WhatsApp and Signal have threatened to pull out of the UK if the law is passed. They are probably not bluffing, and the bill will probably pass.

Previous versions of the bill took a relatively considered approach to tackling dangerous content online. In addition to provisions on preventing the most clearly illegal and harmful content, such as child sexual abuse material (CSAM), it also recognized that legal content can sometimes be harmful because of the way it is amplified or targeted. For example, it may not be illegal to say that vaccines don’t work, but in the context of a deadly pandemic that message could become very dangerous if it were shared widely and then continually pushed by platform algorithms to people likely to believe it. The bill originally sought to prevent or limit offline harm from this “legal but harmful” content, not necessarily by banning it, but rather by limiting how it ends up in users’ feeds or to whom it can be shown. For example, algorithms might have to be changed to stop them recommending posts promoting suicide to people in distress, or posts about extreme weight loss to young users.
