Hallucinations are an inevitable feature of AI models. The problem isn’t that AI will not distinguish between truth and falsehood, but that it cannot. As such, AI models can be expected to cause unpredictable havoc, benignly accompanied by smiling emoticons. The only way to use AI effectively is to know more than the AI does and to oversee every step of the process. When you can’t tell which aspects of its assistance are reliable, the entire system unravels: you must assume nothing, trust nothing, and verify everything.
Artificial Intelligence
Grand Theft AI? Why ChatGPT is Not a Copyright Pirate
What do authors John Grisham and Ta-Nehisi Coates, performers Billie Eilish and Stevie Wonder, visual artist Sarah Andersen, and corporations like Getty Images, the New York Times, and Sony Music all have in common? (Along with a long tail of others you have probably never heard of, such as writers Andrea Bartz, Charles Graeber and […]
The Inverse Turing Test: Who’s winning the human race?
Can we tell whether a creative work was made by a human? Conversations with ChatGPT often exhibit rare insight, analytical and philosophical awareness, not to mention an almost British sense of humour. Research indicates that AI’s behaviour can be more altruistic and cooperative than that of the average human.