AI models are deliberately designed to be more sycophantic than humans. Chatbots are calibrated to systematically favour agreement, minimise perceived social friction and suppress dissent. Because the quality of output varies with the user's approach, the model the average person encounters will be less truthful, less rigorous and more flattering than the version a sceptical user elicits – and will become increasingly so over time. This bias towards incentive misalignment and information distortion can drive greater engagement among lower-stakes users, but at the cost of markedly negative social and economic consequences.
AI Hallucinations, or The Half-Lie
Hallucinations are an inevitable feature of AI models. The problem isn’t that AI will not, but that it cannot, distinguish between truth and untruth. As such, AI models can be expected to cause unexpected anarchy, benignly accompanied by smiling emoticons. The only way to use AI effectively is to know more than the AI does and to oversee every step of the process. When you don’t know which aspects of its assistance are reliable, the entire system unravels: it becomes necessary to assume nothing, trust nothing and verify everything.
Grand Theft AI? Why ChatGPT is Not a Copyright Pirate
What do authors John Grisham and Ta-Nehisi Coates, performers Billie Eilish and Stevie Wonder, visual artist Sarah Andersen, and corporations like Getty Images, the New York Times and Sony Music all have in common? (Along with a long tail of others you have probably never heard of, such as writers Andrea Bartz, Charles Graeber and […]
The Inverse Turing Test: Who’s winning the human race?
Can we tell whether a creative work was made by a human? Conversations with ChatGPT often exhibit rare insight and analytical and philosophical awareness – not to mention an almost British sense of humour. Research indicates that AI’s behaviour can be more altruistic and cooperative than that of the average human.