Like all folk tales, The Arabian Nights, a collection assembled many centuries ago, holds cautionary lessons, and some of them speak to the age of artificial intelligence. A few of the narratives even feature mechanical creatures and robotic figures, which some readers see as foreshadowing today's androids and AI.
For me, one of the most thought-provoking stories is the Tale of the Half-Lie. A merchant goes to market to buy a slave and considers a strong young man called Kafur, but senses that something is amiss because the asking price is far too low. The seller explains that the price was discounted because the youth tells one lie each year. The merchant shrugs, thinking that most people lie far more often, and signs the purchase contract. During the first six months, Kafur works cheerfully and efficiently, and the merchant congratulates himself on his bargain.
One day, the merchant travels to a nearby town on a potentially profitable venture. He sends Kafur back to fetch an item that has been forgotten. When the servant reaches home, he starts sobbing and wailing that his poor master has died. He helps the distraught wife smash all the furniture and valuables. Saying he will collect the body, he sets off for the caravan, and now tells the master that everyone in his household has died. Leaving his business undone, the master tears his hair and rushes home in great distress. When he realizes it was all a hoax, he wants to punish the servant, who calmly points out that the contract specified he would tell one lie. The master takes a deep breath and says that at least he won't have to deal with another lie for another year. "Oh yes, master, you will," says Kafur, "for that was just a half-lie, and you won't know when I will tell the other half." "What if you told a whole lie," asks the horrified master, "would even the city be left standing?"
AI suffers from the same half-lie problem, in the form of the disarmingly charming term "hallucinations." A database tracking the appearance of AI-generated lies documents a total of 666 cases related to legal issues, a figure that is certainly an underestimate.
At one point, I prompted ChatGPT for lawsuits dealing with AI torts, and it instantly provided four detailed citations and case briefs, none of which existed. When challenged, it immediately acknowledged, with the same untroubled insouciance as Kafur the slave: "I clearly made multiple failures: 1) Fabricating examples instead of acknowledging the limitations of my knowledge; 2) Providing inaccurate case details and descriptions; 3) Wrongly portraying a case as involving AI when it did not. I made too many unfounded claims and assumptions in an attempt to fulfill the prompt, instead of owning the gaps in my knowledge; I did not engage in factual verification, and overgeneralized from an existing precedent. As an AI, I should be more careful about distinguishing certainties from uncertainties in my responses."

According to IBM, "The term may seem paradoxical, given that hallucinations are typically associated with human or animal brains, not machines." However, most of us would be distressed to realize we were hallucinating, whereas an LLM tells lies simply because it cannot be bothered to find out whether its declarations are true. The minute you challenge its falsehoods, it unabashedly owns up to the error. As ChatGPT put it: "AI lacks intrinsic motivation; it isn't 'trying' to grow or avoid mistakes. It's maximizing reward. It doesn't care if it's wrong." That is more akin to a whole lie than to a "hallucination."
OpenAI has acknowledged that hallucinations are an inevitable feature of such models. The problem isn't that AI will not, but that it cannot, distinguish truth from untruth. As such, AI models offer far greater potential for anarchy that we must expect but cannot predict than the mendacious slave in the Arabian Nights, even while capably doing our bidding for a while, accompanied by smiling emoticons. When you don't know which aspects of its assistance are reliable, the entire system unravels: it becomes necessary to assume nothing, trust nothing, and verify everything. The only way to use AI effectively is to know more than the AI does and to oversee every step of the process.
The applications of artificial intelligence are rapidly expanding beyond playing Go or doing your homework to autonomous vehicles, weather forecasting, air traffic control, financial advice, healthcare diagnosis, hiring filters, and judicial interventions… The Department of Defense was an early investor, and secretive companies like Palantir use artificial intelligence to flag potential terrorists and criminal behaviour with flagrant disregard for legal due process.
If we knew when the half-lie would occur, its effects might be forestalled. However, because the nature and timing are unknowable, even a half-lie has the potential for catastrophic outcomes. Indeed, it is precisely when conditions are at their worst, when controllers need to mitigate the harm of aircraft with sudden mechanical problems, or deal with security threats or infrastructure failures, that AI will be revealed as an unreliable substitute for human oversight. Controllers have to work with a far-flung team to manage unexpected chaos, in a manner that AI cannot replicate. The only way to eliminate the devastation of a half-lie in this context is to eliminate AI from the entire process, because its failures are unknowable ex ante.
For many, "free" use of LLMs is a source of magic and wonder, rather like rubbing a magic lamp. But, as the Arabian Nights stories foretell, lamps found in shadowy bazaars should be treated with caution: djinns tend to bring about unexpected tragedies even while granting our dearest wishes…
