
Zorina Khan

Professor of Economics, Bowdoin College


AI Hallucinations, or The Half-Lie

December 9, 2025 By Zorina Khan

Like all folk tales, The Arabian Nights, a collection from many centuries ago, holds cautionary lessons for the age of artificial intelligence.  Some of the narratives actually involve robotic entities and mechanical creatures, which some readers feel foreshadow today’s androids and AI.

For me, one of the most thought-provoking stories is the Tale of the Half-Lie.  A merchant goes to market to buy a slave and considers a strong young man called Kafur, but suspects something is amiss because the asking price is far too low.  The seller explains that the price was discounted because the youth tells one lie each year.  The merchant shrugs, reasoning that most people lie far more often, and signs the purchase contract.  During the first six months, Kafur works cheerfully and efficiently, and the merchant congratulates himself on his bargain.

One day, the merchant sets out on a potentially profitable venture to a nearby town.  He sends Kafur back to fetch an item that had been forgotten.  When the servant reaches home, he starts sobbing and wailing that his poor master has died.  He helps the distraught wife smash all the furniture and valuables.  Saying he will collect the body, he sets off for the caravan, and now tells the master that everyone in his household has died.  Leaving his business undone, the master tears his hair and rushes home in great distress.  When he realizes it was all a hoax, he wants to punish the servant, who calmly points out that the contract specified he would tell one lie.  The master takes a deep breath and says that at least he will not have to deal with another lie for a year.  “Oh yes, master, you will,” says Kafur, “that was just a half-lie, and you won’t know when I will tell the next half.”  “What if you told a whole lie,” asks the horrified master, “would even the city be left standing?”

AI suffers from the same half-lie problem, in the form of the disarmingly charming label “hallucinations.”  A database tracking the appearance of AI-generated lies documents a total of 666 cases related to legal issues, which is certainly an underestimate.

At one point, I prompted ChatGPT for lawsuits dealing with AI torts, and it instantly provided four detailed citations and case briefs — none of which existed.  When challenged, it immediately acknowledged, with the same untroubled insouciance as Kafur the slave: “I clearly made multiple failures: 1) Fabricating examples instead of acknowledging the limitations of my knowledge; 2) Providing inaccurate case details and descriptions; 3) Wrongly portraying a case as involving AI when it did not.  I made too many unfounded claims and assumptions in an attempt to fulfill the prompt, instead of owning the gaps in my knowledge; I did not engage in factual verification, and overgeneralized from an existing precedent.  As an AI, I should be more careful about distinguishing certainties from uncertainties in my responses.”

According to IBM, “The term may seem paradoxical, given that hallucinations are typically associated with human or animal brains, not machines.”  However, most of us would be distressed to realize we were hallucinating, whereas an LLM tells lies because it cannot be bothered to find out whether its declarations are true.  The minute you challenge its falsehoods, it will unabashedly own up to the error.  As ChatGPT put it: “AI lacks intrinsic motivation; it isn’t ‘trying’ to grow or avoid mistakes.  It’s maximizing reward.  It doesn’t care if it’s wrong.”  That is more akin to a whole lie than to a “hallucination.”

OpenAI has acknowledged that hallucinations are an inevitable feature of such models.  The problem is not that AI will not, but that it cannot, distinguish between truth and untruth.  As such, AI models offer far greater potential for expected unexpected anarchy than the mendacious slave in the Arabian Nights, while capably doing our bidding for a while, accompanied by smiling emoticons.  When you don’t know which aspect of its assistance is reliable, the entire system unravels: it becomes necessary to assume nothing, trust nothing, and verify everything.  The only way to use AI effectively is to know more than AI and to oversee every step of the process.

The applications of artificial intelligence are rapidly expanding beyond playing Go or doing your homework, to managing autonomous vehicles, weather forecasting, air traffic control, financial advising, healthcare diagnosis, hiring filters and judicial interventions… The Department of Defense was an early investor, and secretive companies like Palantir use artificial intelligence to flag potential terrorists and criminal behaviour with flagrant disregard for legal due process.

If we knew when the half-lie would occur, its effects might be forestalled.  However, because the nature and timing are unknowable, even a half-lie has the potential for catastrophic outcomes.  Indeed, it is precisely when conditions are worst, when controllers must mitigate the harm of aircraft with sudden mechanical problems or deal with security threats and infrastructure failures, that AI will be revealed as an unreliable substitute for human oversight.  Controllers have to work with a far-flung team to manage unexpected chaos, in a manner that AI cannot replicate.  The only way to eliminate the devastation of a half-lie in this context is to remove AI from the entire process, since its failures are unknowable ex ante.

For many, “free” use of LLMs is a source of magic and wonder, rather like rubbing a magic lamp.  But, as the Arabian Nights stories foretell, lamps found in shadowy bazaars should be treated with caution: djinns tend to bring about unexpected tragedies even while granting our dearest wishes…





Copyright© 2026 · research.bowdoin.edu