
    Fact check: How trustworthy are AI fact checks?

May 19, 2025 | Updated: May 19, 2025 | World

    “Hey, @Grok, is this true?” Since Elon Musk’s generative artificial intelligence chatbot Grok was released for all non-premium users in December 2024, thousands of X (formerly Twitter) users have been asking this question to quickly verify information they see on the platform. In a recent survey by the online technology publication TechRadar, 27% of Americans reported using artificial intelligence tools such as Google’s Gemini, Microsoft’s Copilot, or Perplexity instead of conventional search engines like Google or Yahoo. But how reliable and accurate are the chatbots’ answers? Many people have been asking themselves that question in light of Grok’s recent statements about “white genocide” in South Africa. X users were upset that the chatbot brought up the issue even when asked about entirely different topics, as in the following example:

    Image: X

    The backdrop: after the Trump administration admitted white South Africans to the United States as refugees, a debate arose about the alleged “white genocide.” Trump has claimed that a “genocide” is taking place in their homeland, an assertion that is unsupported and that many believe is linked to the racist “Great Replacement” conspiracy theory. According to xAI, which said it conducted a thorough investigation, Grok’s fixation on the “white genocide” topic was the result of an “unauthorized modification.” But do errors like this occur frequently? And how certain can users be that information they want to fact-check with AI can be trusted? In this DW Fact Check, we analyzed this and answer these questions for you.

    Studies find significant errors and altered quotes

    Two studies conducted this year by the British public broadcaster BBC and the Tow Center for Digital Journalism in the United States found that generative AI chatbots fall significantly short when it comes to accurately conveying news reporting. In February, the BBC investigation found that “answers produced by the AI assistants contained significant inaccuracies and distorted content” from the broadcaster. When it asked ChatGPT, Copilot, Gemini, and Perplexity to answer questions about current news using BBC articles as sources, it found that 51% of the chatbots’ responses had “significant issues of some kind,” and that 13% of the quotes were either altered or not present in the cited articles at all. “AI assistants cannot currently be relied upon to provide accurate news, and they risk misleading the audience,” said Pete Archer, the BBC’s Programme Director for Generative AI.

    AI provides incorrect answers with “alarming confidence”

    Research by the Tow Center for Digital Journalism, published in the Columbia Journalism Review (CJR), found that eight generative AI search tools were unable to correctly identify the provenance of article excerpts in 60% of cases. Perplexity performed best, with a failure rate of “only” 37%, while Grok answered 94% of queries incorrectly. The CJR was particularly struck by the “alarming confidence” with which the AI tools presented incorrect answers. “ChatGPT, for example, incorrectly identified 134 articles, but signaled a lack of confidence just fifteen times out of its two hundred responses, and never declined to provide an answer,” according to the report. Overall, the study found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead” and that AI search tools “fabricated links and cited syndicated and copied versions of articles.”
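
    The core finding here is about calibration: the tools rarely signal uncertainty or decline to answer at all. Purely as an illustration of what “declining to respond” could look like in code, and not as a description of how ChatGPT, Grok, Perplexity, or any other product named above works internally, here is a minimal Python sketch of a confidence gate; the Attribution type, the query_model stub, and the 0.7 threshold are all hypothetical.

```python
# Illustrative sketch only: a "confidence gate" that declines to answer
# instead of returning a low-confidence attribution. Everything here
# (Attribution, query_model, the 0.7 threshold) is a hypothetical
# placeholder, not the behaviour of any real product.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Attribution:
    source_url: Optional[str]  # best guess for the article the excerpt came from
    confidence: float          # self-reported confidence in [0.0, 1.0]


def query_model(excerpt: str) -> Attribution:
    """Dummy stand-in for a real model call; always returns low confidence here."""
    return Attribution(source_url=None, confidence=0.2)


def attribute_or_decline(excerpt: str, threshold: float = 0.7) -> str:
    """Return a source only when the model's confidence clears the threshold."""
    result = query_model(excerpt)
    if result.source_url is None or result.confidence < threshold:
        # The behaviour the study found largely missing: explicitly declining.
        return "I can't reliably identify the source of this excerpt."
    return f"Likely source: {result.source_url} (confidence {result.confidence:.0%})"


if __name__ == "__main__":
    print(attribute_or_decline("Scientists say ..."))  # prints the decline message
```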

    AI chatbots are only as good as their “diet”

    So where does AI get its information from? Chatbots are fed from various sources, including extensive databases and web searches, and depending on how they are trained and programmed, the quality and accuracy of their answers can vary. One problem that has recently come to light is the pollution of LLMs (Large Language Models) by Russian disinformation and propaganda. Quite obviously, the “diet” of LLMs has a problem, Tommaso Canetta told DW. He is the fact-checking coordinator at the European Digital Media Observatory and the deputy director of the Italian fact-checking project Pagella Politica. If the sources are not trustworthy, the answers will most likely be of the same kind, Canetta said. There is a real risk that this “diet” could be deliberately manipulated, he added, and he regularly encounters responses that are “incomplete, not specific, false, or even fabricated.”
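
    Canetta’s “diet” metaphor maps loosely onto how retrieval-augmented chatbots are commonly wired: whatever documents get retrieved become the raw material for the answer. The Python sketch below is only meant to show why unreliable sources propagate directly into unreliable answers; the search_web and generate_answer helpers and the TRUSTED_DOMAINS allowlist are hypothetical placeholders, not the architecture of Grok, Gemini, Copilot, or any other product mentioned here.

```python
# Toy retrieval-augmented pipeline illustrating the "diet" problem:
# the answer is generated only from whatever sources are retrieved,
# so unreliable or manipulated sources flow straight into the output.
# All names below are hypothetical placeholders.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-newswire.org", "example-statistics.gov"}  # illustrative allowlist


def search_web(question: str) -> list[dict]:
    """Dummy retrieval step; a real system would query the web or a database."""
    return [
        {"url": "https://example-newswire.org/report", "text": "Verified reporting ..."},
        {"url": "https://propaganda.example/claim", "text": "Fabricated claim ..."},
    ]


def generate_answer(question: str, context: list[str]) -> str:
    """Dummy generation step; a real system would prompt an LLM with this context."""
    return f"Answer to {question!r}, grounded in {len(context)} vetted source(s)."


def answer(question: str) -> str:
    docs = search_web(question)
    # The chatbot's "diet": only these retrieved texts feed the answer.
    vetted = [d for d in docs if urlparse(d["url"]).netloc in TRUSTED_DOMAINS]
    if not vetted:
        return "No trustworthy sources found; declining to guess."
    return generate_answer(question, [d["text"] for d in vetted])


if __name__ == "__main__":
    print(answer("Is claim X true?"))  # only the trusted document feeds the answer
```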

    When AI gets things completely wrong

    In April 2024, Meta AI reportedly posted in a New York parenting group that it had a disabled but academically gifted child and offered advice on special education. The chatbot eventually apologized and acknowledged that it didn’t have “personal experiences or children.” Meta told 404 Media, which covered the incident: “This is new technology and it may not always return the response we intend, which is the same for all generative AI systems. Since our initial launch, we’ve been making improvements and updates to our models, and we’re working on improving them,” a spokesperson said in a statement. In the same month, Grok misinterpreted a viral joke about a basketball player’s poor shooting and told users in its trending section that he was under police investigation after being accused of vandalizing homes in Sacramento, California, with bricks. Grok had misunderstood the common basketball expression “throwing bricks,” which describes a player who fails to land any of their shots. In August 2024, after President Joe Biden withdrew from the presidential race, Grok spread false information about the deadlines for adding presidential nominees to the ballot in nine US states, suggesting that Vice President Kamala Harris would not be eligible to appear on the ballot in several of them. Minnesota’s secretary of state, Steve Simon, wrote in a letter to Musk that this untrue information had been published shortly after Biden’s announcement.

    Grok attributes the same AI-generated image to various real-world incidents

    AI chatbots not only struggle with news; they also show serious limitations when it comes to identifying AI-generated images. In a quick experiment, DW asked Grok for the date, location, and origin of an AI-generated image of a fire at a destroyed aircraft hangar, taken from a TikTok video. In its responses and explanations, Grok claimed that the image showed several different incidents at various locations, including Tan Son Nhat International Airport in Vietnam, Denver International Airport in Colorado, and a small airfield in Salisbury, England. There have indeed been accidents and fires at these locations in recent years, but the image does not show any of them. Despite obvious errors and inconsistencies in the image, including inverted tail fins on the airplanes and illogical jets of water from the fire hoses, which strongly suggest it was generated by artificial intelligence, Grok seemed unable to recognize this. Even more concerning, Grok recognized a portion of the “TikTok” watermark visible in the corner of the image and suggested that this “supported its authenticity.” At the same time, under the “More details” tab, Grok stated that TikTok was “a platform frequently used for rapid dissemination of viral content, which can lead to misinformation if not properly verified.” Grok also assessed a viral video purporting to show a massive anaconda in the Amazon; the video appears to have been created by artificial intelligence, and Grok even recognized a ChatGPT watermark in it, yet still treated the footage as genuine.

    AI chatbots “should not be used as fact-checking tools”

    AI chatbots may appear to be omniscient, but they are not. They make mistakes, misinterpret information, and can even be manipulated. Felix Simon, a postdoctoral research fellow in AI and digital news at the Oxford Internet Institute (OII), concludes that “AI systems like Grok, Meta AI, or ChatGPT should not be used as fact-checking tools. It’s unclear how well and consistently they can be used to accomplish this task, particularly for edge cases.” An AI chatbot “can be useful for very simple fact checks,” says Canetta at Pagella Politica, but he advises against trusting them completely. Both experts say users should always check chatbot responses against other sources.
