The same goes for the AI systems companies use to flag potentially harmful or abusive content. Platforms often draw on huge amounts of data to build internal tools that streamline this process, says Louis-Victor de Franssu, cofounder of trust and safety platform Tremau. But many of these companies must rely on commercially available models to build their systems, which can introduce new problems.
“Some companies claim to sell AI, but in reality they bundle together different models,” says de Franssu. That means a single company might combine several machine learning models (for example, one that estimates a user’s age and another that detects nudity, in order to flag potential child sexual abuse material) into one service it offers to customers.
And while that can make services cheaper, it also means that any problem in a model used by a contractor will be replicated across its clients, says Gabe Nicholas, a researcher at the Center for Democracy and Technology. “From a free speech perspective, that means if there’s an error on one platform, you can’t take your speech elsewhere. If there’s an error, that error will spread everywhere.” The problem can be compounded if multiple contractors rely on the same foundational models.
By outsourcing critical functions to third parties, platforms could also make it harder for people to understand where moderation decisions are actually made, and harder for civil society (the think tanks and nonprofits that closely watch the major platforms) to know whom to hold responsible when things go wrong.
“[So many observers] talk as if these big platforms were the ones making the decisions. That’s where so many people in academia, civil society, and government direct their criticism,” Nicholas says. “The thought that we may be directing it at the wrong place is a scary thought.”
Historically, large companies like Telus International, Teleperformance, and Accenture have managed a key part of outsourced trust and safety work: content moderation. That work often resembled call centers, with large numbers of low-paid employees manually reviewing posts to decide whether they violated a platform’s policies against things like hate speech, spam, and nudity. Newer trust and safety startups are leaning more toward automation and artificial intelligence, often specializing in certain types of content or topics (such as terrorism or child sexual abuse) or focusing on one particular medium, such as text or video. Others build tools that let a customer run various trust and safety processes through a single interface.