“I and others who tried to reach out found ourselves at a dead end,” Benavidez says. “And when we reach out to those who are supposed to still be on Twitter, we just don’t get a response.”
Even when researchers do manage to reach someone at Twitter, responses are slow, sometimes taking more than a day. Jesse Littlewood, vice president of campaigns at the nonprofit Common Cause, says he’s noticed that when his organization flags tweets that clearly violate Twitter’s policies, those posts are now less likely to be removed.
The volume of content that users and watchdogs want to report to Twitter is likely to grow. Many of the employees and contractors laid off in recent weeks worked on teams such as trust and safety, policy, and civic integrity, all of which worked to keep misinformation and hate speech off the platform.
Melissa Ingle was a senior data scientist on Twitter’s civic integrity team until she was laid off, along with 4,400 other contractors, on November 12. She wrote and monitored algorithms used to detect and remove political misinformation on Twitter; most recently, that meant elections in the US and Brazil. Of the 30 people on her team, only 10 remain, and many of the human content moderators, who review tweets and flag those that violate Twitter’s policies, have also been let go. “Machine learning requires constant input and constant care,” she says. “We have to constantly update what we’re looking for because the political discourse is constantly changing.”
Although Ingle’s job didn’t involve interacting with outside activists or researchers, she says members of Twitter’s policy team did. Information from external groups sometimes helped illuminate the terms or content that Ingle and her team were training algorithms to identify. She now worries that, with so many staff and contractors gone, there won’t be enough people left to keep the software accurate.
“With the algorithms no longer being updated and the human moderators gone, there simply aren’t enough people to man the ship,” Ingle says. “My concern is that these filters are going to become more and more porous, and more and more things are going to get through as the algorithms become less and less precise over time. And there are no humans to catch the things that slip through the cracks.”
A day after Musk took over Twitter, Ingle says, internal data showed that the number of abusive tweets reported by users increased by 50%. That initial surge has subsided somewhat, she says, but reports of abusive content have remained about 40% higher than the typical volume before the takeover.
Rebekah Tromble, director of the Institute for Data, Democracy, and Politics at George Washington University, also expects Twitter’s defenses against prohibited content to weaken. “Twitter has historically struggled with this, but a number of talented teams had been making real progress on these problems in recent months. Those teams have now been wiped out.”
These concerns are echoed by a former content moderator who worked as a contractor for Twitter until 2020. The contractor, speaking anonymously to avoid retaliation from his current employer, says that all of the former colleagues he kept in contact with who did similar work have been fired. He expects the platform to become a much less pleasant place. “It will be horrible,” he says. “I actively sought out the worst parts of Twitter: the most racist, horrible, degenerate parts of the platform. That is what will be amplified.”