
"By treating ChatGPT and similar LLMs as being in any way concerned with truth, or by speaking metaphorically as if they make mistakes or suffer “hallucinations” in pursuit of true claims, we risk exactly this acceptance of bullshit, and this squandering of meaning."

The entire article, titled ChatGPT is Bullshit and published in Ethics and Information Technology (2024), is extremely quotable.

The word bullshit, used throughout the article, has a very specific meaning, defined by the American philosopher Harry Frankfurt in his essay On Bullshit:

"What bullshit essentially misrepresents is neither the state of affairs to which it refers nor the beliefs of the speaker concerning that state of affairs. Those are what lies misrepresent, by virtue of being false. Since bullshit need not be false, it differs from lies in its misrepresentational intent. The bullshitter may not deceive us, or even intend to do so, either about the facts or about what he takes the facts to be. What he does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to."

The ChatGPT is Bullshit paper then goes on to argue the following:

  1. That ChatGPT is bullshitting.
  2. That various mitigation methods (like hooking the LLM up to an external resource such as Wolfram Alpha, or to a search engine, to correct its wrong answers) fail to produce truthful or factual output - see the sketch after this list.
  3. That anthropomorphising LLMs actively deceives investors, policymakers, and members of the general public.
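
To make point 2 concrete, here is a minimal sketch of what such a retrieval setup looks like. Every name in it (lookup, generate, answer) is a hypothetical stand-in, not any real API; the point is purely structural. The retrieved text only changes what the model conditions on - the final step is still plain text generation, with no truth check anywhere in the pipeline, which is why the paper argues these mitigations fail.

```python
# Hypothetical sketch of retrieval-augmented generation; no real API is used.

def lookup(query: str) -> str:
    """Stand-in for an external source (a database, Wolfram Alpha, a search engine)."""
    return "Paris is the capital of France."  # canned snippet for illustration

def generate(prompt: str) -> str:
    """Stand-in for the LLM: produces fluent text conditioned on the prompt."""
    return "The capital of France is Paris."  # plausible text, never fact-checked

def answer(question: str) -> str:
    context = lookup(question)  # retrieval only changes what the model reads...
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    return generate(prompt)     # ...the final step is still next-token prediction

print(answer("What is the capital of France?"))
```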

This builds very well on the existing work of Emily Bender et al., On the Dangers of Stochastic Parrots, which explains in detail that the output of generative AI is a reproduction of patterns in its training data. These algorithms are built neither to derive meaning nor to follow step-by-step reasoning.

Talking about software such as ChatGPT the way we talk about human beings - calling its bullshit output "a mistake" or its made-up content "a hallucination" - is not an innocent foible. It is a deliberate strategy meant to influence both investors and policymakers into viewing LLM-based generative AI as truth-seeking. That framing is a lie, since the technology aims at neither truth nor factuality.

This lie has criminal consequences: militaries are now using AI to mark human beings for assassination.

And when it comes to using software such as ChatGPT in newsrooms, a Media Diversity Report from 2023 eloquently summed up the effect: "Generated AI is unable to distinguish between fiction and truth and using them as learning tools is dangerous. Moreover, when used in newsrooms, the review carried out by editors is no guarantee of accuracy."
