https://pressanddemocracy.substack.com/p/i-am-admitting-my-m...
https://steady.page/en/journalistiekondervuur/posts/dd6e066f...
Source: https://www.linkedin.com/posts/peter-vandermeersch-a4381b30_...
Dutch: “Dat was niet enkel onzorgvuldig, het was fout.”
English: “That was not just careless—it was wrong.”
I’d say the only difference is the em dash.
Whether you consider it proof of AI is up to y’all.
Very similar to what a rector wrote recently after she got busted delivering an AI-generated inaugural speech at her new university job.
None of it is true, of course. These people are just sorry they got caught.
What? Irresistible quotes? This betrays a terrible way of thinking as a journalist. Basically an admission of wanting to fake news that'd sound good. At that point just write fiction.
How did you read that? Something sounding good and making sense and you wanting it to be true doesn't mean you'd fake it.
Journalists have been doing this for decades: stitching and editing words out of context to put words into people's mouths! I will take AI hallucinations over journalist hallucinations any time; at least the machine has no hostile intent and is making a genuine error!
Famous last words. What do you think is the main application for AI? Spreading propaganda.
The most valuable lesson here, by far, is not about other people but about ourselves. This person is trained, takes it seriously, and advocates for making sure the AI is supervised, and got caught in the emotional manipulation of LLM design [0].
We all are at risk. If we look at the other person and mock them, and think we are better than them, we are only exposing ourselves to more risk. If we think - oh my goodness, look what happened, this is perilous - then we gain from what happened and can protect ourselves.
(We might also ask why this valuable tool includes such a manipulative interface. Don't take it for granted; it's not at all necessary for LLMs to work, and they could just as easily sound like a-holes.)
[0] I mean that obviously they are carefully designed to sound appealing
Your defense secretary the other day said, responding to a news article he didn't like: "the sooner Ellison buys it the better" [1].
I don't think this kind of media control happens anywhere in Europe, but prove me wrong.
Unless you're talking about former Soviet countries, which are part of Europe but culturally very different and much more aligned with Russia than all of western Europe.
1: https://www.nytimes.com/2026/03/13/business/media/pete-hegse...
“Here’s a friendly message that will perfectly convey what you want to say”.
A double-PhD friend says she has to talk to ChatGPT for all sorts of advice and can't feel safe not doing it, "because you know I'm single and don't have a companion to spitball my ideas". She let ChatGPT decide which route to take to a certain island, and she got stranded because the suggested service didn't exist.
I have more examples. It’s a fucking mind virus.
Well, now the calculator can tell you the meaning of life, but it'll get 2 + 2 wrong 10% of the time.
LLMs don't do this. They give confident language output, not correct answers.
In some sections of the ecosystem, firms still penalize journalists for errors. In other sections, checking reduces the velocity of attention grabbing headlines. The difference in treatment is… farcical.
We need more good journalists, and more good journalism - but we no longer have ways to subsidize such work. Ads / classifieds are dead, and revenue accrues to only a few.
I have no idea how we square this circle.
Both failed.
I don't think we've gotten to the extent that all popular writing styles (eg. hamburger paragraphs) are considered suspect, but the "it's not just X, it's Y" construction[1] attracts particular scrutiny.
[1] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing#...
[1] https://xcancel.com/maxwelltani/status/2023089526445371777?
That's why I lost trust and faith in people who end up in positions of doctor, lawyer, or judge. When I was young I used to think they must be the smartest, highest-IQ people in society, having read the most books and possessing the best critical thinking and debate skills ever. In fact they were only good at memorizing and regurgitating the information the school required to pass the exam that granted them that prestigious title, and that's it.
Now, in my mid 30s, when I talk to people from these professions over a beer, at a barbecue, or at any other casual gathering, I realize they're really not that sharp or well read, nor immune to propaganda and misinformation, and anyone could be in their place if they had put in the grind work at the right time. It's a miracle our society functions at all.
Look at all the influencers, streamers, and podcasters constantly asking them things and taking the answers as fact, live.
Isn't The Joe Rogan Experience the most watched podcast or something? In every episode I've stumbled upon, he "fact checks" multiple things via their sponsor, which is just an LLM provider specialized in news.
People aren't good at statistics. If something is close enough to the truth enough times, and talks authoritatively on everything in good English... guess what, they're gonna trust it.
One of the various oddities going on with LLMs in particular is them being trained with feedback from users having a chance to upvote or downvote responses, or A/B test which of two is "better". This naturally leads to things which are more convincing, though this only loosely correlates to "more correct".
Edit: though I should be clear: people demonstrably do often learn to discount obviously unreliable sources. Not all the time, but pretty often in the easily verifiable cases, especially where they don't have a major emotional stake.
Though if you have a useful response besides "weather the storm while everyone else learns the hard way", I'm listening.
AI output is like that COVID contamination video: you almost can't avoid it unless you scrupulously check each and every thing presented to you as fact. And absolutely nobody does that.
Pretty close. I only touched ChatGPT a couple times a few years ago, haven't used the others (on purpose at least. Google forces its Gemini summaries on me but I mostly avoid them, because, umm, see above.)
> and do not interact with others.
Most people I interact with are on the same page about AI. But I try to keep my critical thinking online anyway, like I always have. If someone tried to feed me AI slop, I would consider that person to have betrayed my trust and would, to put it gently, try to interact with them less.
Well, thanks, I guess.
"It's fine, the LLM just lied to you, but hallucinations and making claims based off of assumptions is just something they do and always have done!"
People don't like to feel dumb, and they don't want to feel betrayed by the same tool that gave them incredible, factually correct results that one time, only to give them complete and utter bullshit (that sounded legitimate) another time.
Also, yeah, it feels like it's everywhere these days and isn't showing any signs of slowing down (visited my parents, and my dad's using Siri to ask ChatGPT stuff now; URGHHHH), and I really hope we're both wrong.
We do not live in a meritocracy, because society has no means to judge merit. We live in a society ruled by people who crammed before the tests, and who wrote the papers to agree with and flatter the teacher. Now they are the teachers (and bosses), and
1) expect to be flattered (and LLMs have been built as the ultimate flatterers),
2) feel that a good, ambitious student (or subordinate) will not question them and their work, but instead learn to conform to it, and
3) are not particularly interested in the quality of their work as such, but rather in the acceptance of their work. In certain professions, such as judges, doctors, high-level lawyers and engineers, or politicians, they feel (with good reason) that they can demand acceptance of their work, and punish those who don't accept it.
This position is what they worked so hard for as young people. They were not working to become the best at their jobs. They were working to get the most secure jobs. The most secure jobs are the ones that bad or lazy work doesn't endanger.
lol
I’ve run into a similar problem myself - working with a big transcript, I asked an AI to pull out passages that related to a certain topic, and only because of oddities in the timestamps extracted did I realize that most of the quotes did not exist in the source at all.
e.g.: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/g...
Also, quote-presence testing/linking against source would seem to be a trivial layer to build on a chat interface, no LLM required. Just highlight and link the longest common strings.
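A minimal sketch of that idea in Python, using only the stdlib's difflib. The function names and the 80% threshold are my own choices for illustration, not part of any existing tool: a quote counts as "supported" if most of it appears verbatim as a run in the source text.

```python
from difflib import SequenceMatcher

def longest_shared_run(quote: str, source: str) -> str:
    """Return the longest substring of `quote` that also appears in `source`."""
    m = SequenceMatcher(None, quote, source, autojunk=False)
    match = m.find_longest_match(0, len(quote), 0, len(source))
    return quote[match.a : match.a + match.size]

def quote_supported(quote: str, source: str, threshold: float = 0.8) -> bool:
    """Flag a quote as supported if most of it appears verbatim in the source."""
    run = longest_shared_run(quote, source)
    return len(run) >= threshold * len(quote)

source = "The minister said the plan was not just careless, it was wrong."
assert quote_supported("the plan was not just careless", source)
assert not quote_supported("the plan was visionary and bold", source)
```

A chat UI could run this check on anything the model puts between quotation marks and highlight (or link back to) the matched run in the source document, with no LLM in the loop.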