It’s not the robot – it’s the operator

The lack of transparency and understanding around AI isn’t just annoying – it has consequences, both for those using it and for those reading what it produces.

TLDR: A piece appeared in the New York Times with a quote attributed to Pierre Poilievre. The quote was made up by AI, and no one bothered to check it. The piece ran, and the New York Times has now published a correction saying ‘oops – the reporter should have checked’, which I think is a bit weak.

NYTimes Correction, May 2, 2026:  

"An article on April 15 about the success that Mark Carney, the Liberal prime minister of Canada, has had in building cross-party alliances was updated after The Times learned that a remark attributed to Pierre Poilievre, the Conservative leader, was in fact an A.I.-generated summary of his views about Canadian politics that A.I. rendered as a quotation. The reporter should have checked the accuracy of what the A.I. tool returned. The article now accurately quotes from a speech delivered by Mr. Poilievre in April. He said, “My personal opinion is that when a member of Parliament goes back on the word they made to their constituents and switches parties, constituents should be able to petition to throw them out and have a byelection. That would put the people back in charge of our democracy rather than having dirty backroom Liberal deals by Mark Carney determine our destiny.” He did not refer to politicians who changed allegiances as turncoats in that speech."


The NYT correction says “the reporter should have checked the accuracy of what the A.I. tool returned” – which I suppose is their way of painting the incident as a simple verification lapse – but IMO that underplays what happened. There’s a HUGE difference between (a) using AI to summarise background info or to help structure notes and (b) allowing it to attribute a quote. Quotes aren’t just details or info. They are evidence. If the reporter didn’t check, the problem isn’t AI. The first problem is that the reporter outsourced attribution, a foundational part of the job – the part on which public trust is built – to a tool that hallucinates. The second problem is that the reporter didn’t bother to check. The third problem is that the editorial workflow had room for the previous two problems to go unremarked until after publication.

That is all down to people, not the tool.

Of course, when it comes to light, the NYT is momentarily embarrassed, but the ripple effect lives on – it erodes public trust in what for decades was a paper of record. I should probably say ‘erodes even further’, given the decline in quality at the NYT from its once lofty perch of excellence. It also hands every bad-faith actor a glaring, very public, high-profile citation for “even the NYT uses AI to make up quotes.”

We all know that a published article will reach far more people than the correction ever will. In this case, it is entirely possible that the correction will become the story, and that story will have a far longer tail than the original piece. Not because the robot got it wrong, but because the people, the editorial guardrails and the workflow that one might expect to be in place at such an organisation failed – and they seem to have shrugged it off. Maybe a review of their ‘Principles for Using Generative A.I. in The Times’s Newsroom’ page is in order.
