It's getting harder for the government to secretly report your social media posts

The ruling and the law are the first significant disruptions to the cozy relationship between online platforms and the government agencies and other organizations that patrol the web to quietly get unfavorable content removed. But although rights groups that promote free speech have applauded the new interventions, they also warn that IRUs, and the moderation decisions they prompt, will largely be allowed to continue without adequate controls and disclosures.

Shadow moderation

Internet referral units first appeared around 2010 in the United Kingdom, as services such as Facebook and YouTube came under pressure from counterterrorism authorities to better police content generated by violent Islamic extremists. Companies seeking to build better relationships with governments generally agreed to the requests and even designated IRUs as “trusted flaggers,” whose reports of inappropriate content would be reviewed more quickly than those from ordinary users.

The number and activity of IRUs have grown rapidly, and companies have also added civil society organizations as trusted flaggers. Authorities in countries such as Germany and France used the tactic to crack down on far-right political extremism on social media in the late 2010s, and then on health misinformation during the pandemic.

Referral units are not always formal or well-organized entities, and their missions vary, but a common process has taken hold: choose a topic to monitor, such as political disinformation or antisemitism; search for problematic content; then report it to companies through dedicated hotlines, physical letters, personal relationships with insiders, or the “report this” buttons available to all users. Some units report only what appears to be criminal activity, but others flag content that is legal yet prohibited by a platform’s rules, such as nudity or bot accounts.

Most often, experts say, platforms’ compliance is voluntary because the requests are not legally binding, and users are usually not told who reported their content. Rights groups have long worried that IRUs effectively circumvent legal processes, trading transparency and checks on abuses of power for speed and simplicity, while pushing ordinary users’ reports to the back of the line.

Social media companies may feel significant pressure to act on IRUs’ requests, because fighting them could invite regulations that raise the cost of doing business, according to several experts and four former tech company policy staffers who have handled such requests. It is common for politicians and influential groups to ask for direct channels to raise concerns about content, and for platforms to provide them.

Power balances established offline are reflected in these programs. The Palestinian Authority, a quasi-governmental body at odds with Israel, “does not have the influence or relationships with Meta to run an effective IRU,” says Eric Sype of the Palestinian rights group 7amleh. Meta, TikTok, and Twitter did not respond to requests for comment for this story, and YouTube declined to comment.
