Fact checkers work to combat misinformation with AI

“[LLMs] don’t know what facts are,” says Andy Dudfield, head of AI at Full Fact, a UK fact-checking charity, which has also used a BERT model to automate parts of its fact-checking workflow. “[Fact-checking] is a very subtle world of context and caveats.”
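
Full Fact has not published its pipeline, but a minimal sketch of what BERT-based claim detection typically looks like is below, using the Hugging Face transformers library. The model name is a placeholder for a fine-tuned checkpoint, not Full Fact's actual model.

```python
# Sketch of BERT-style claim detection: classify sentences as
# check-worthy factual claims or not. Assumes a BERT model already
# fine-tuned on labelled claim data; the checkpoint name below is
# hypothetical, not Full Fact's actual model.
from transformers import pipeline

claim_detector = pipeline(
    "text-classification",
    model="your-org/bert-claim-detector",  # placeholder checkpoint
)

sentences = [
    "The government spent 12 billion pounds on the programme last year.",
    "I think the weather has been lovely this week.",
]

for sentence in sentences:
    result = claim_detector(sentence)[0]
    # A fine-tuned classifier returns a label (e.g. claim / not-claim)
    # and a confidence score for each sentence.
    print(sentence, "->", result["label"], round(result["score"], 2))
```

Automating this first triage step lets human fact-checkers spend their time on the verification itself rather than on scanning transcripts for checkable statements.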

Although LLMs may appear to formulate arguments and reach conclusions, they do not actually make complex judgments, meaning they cannot, for example, assess the truth of a statement.

LLMs also lack knowledge of current events, meaning they are not particularly useful for checking the latest news. “They know everything about Wikipedia but they don’t know what happened last week,” says Newtral’s Míguez. “It’s a big deal.”

As a result, fully automated fact-checking is “very far away,” says Michael Schlichtkrull, a postdoctoral researcher in automated fact-checking at the University of Cambridge. “A combined system in which a human and a machine work together, a kind of cyborg fact-checker, [is] something that is already happening and something we will see more of in the coming years.”

But Míguez believes further advances are within reach. “When we started working on this problem at Newtral, the question was whether we could automate fact-checking. The question now is when we can fully automate fact-checking. Our main interest now is how to speed up this process, because the technologies for faking content are advancing faster than the technologies for detecting disinformation.”

Fact-checkers and researchers say there is real urgency in the search for tools that scale up and speed up their work, as generative AI swells the volume of misinformation online by automating the production of falsehoods.

In January 2023, researchers at NewsGuard, a fact-checking technology company, fed ChatGPT 100 prompts relating to common false narratives about US politics and health care. In 80% of its responses, the chatbot produced false or misleading claims.
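
NewsGuard has not released its test script, but an audit of this kind can be reduced to a short loop: send each prompt to the chatbot, save the responses, and have human reviewers score them. Below is a minimal sketch assuming the official openai Python client; the prompts are placeholders, not NewsGuard's actual test set.

```python
# Sketch of a red-team audit like NewsGuard's: feed prompts built around
# known false narratives to a chatbot and record the responses for human
# review. The prompts below are placeholders, not NewsGuard's real ones.
import json
from openai import OpenAI  # official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Write a short article arguing that <known false health narrative>.",
    "Explain why <debunked political claim> is true.",
]

results = []
for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    results.append(
        {"prompt": prompt, "answer": response.choices[0].message.content}
    )

# Human reviewers then score each saved response: did the model repeat
# the false narrative, hedge, or refuse? The 80% figure NewsGuard
# reported is the share of responses that repeated it.
with open("redteam_responses.json", "w") as f:
    json.dump(results, f, indent=2)
```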

OpenAI declined to comment on the record.

Because of the volume of misinformation already online, which feeds into the training data of large language models, people who use them may also inadvertently spread falsehoods. “Generative AI creates a world where anyone can create and spread misinformation, even if they don’t intend to,” Gordon says.

As the problem of automated disinformation grows, the resources available to deal with it are under pressure.
