There’s a curious trend among early AI adopters and avid advocates. They’re no less enthusiastic about the tech, but they are becoming a little more cautious. The rhetoric has shifted distinctly from “all in on AI” to a more thoughtful, measured approach. The cause: AI hallucinates, and it doesn’t know it. It thinks it’s right.
It can be amusing at first to see a response to a prompt that you know is absolute rubbish. But it’s only amusing if you have the knowledge to discern when AI is dreaming up information. The real problem arises for people who are not experts in a particular field. AI is often so confident in its answers that they aren’t even questioned. And that’s where the trouble starts – how to sift fact from fiction, and which sources to trust.
What’s fact, what’s fiction?
People have become so accustomed to believing what they see or read online that, when faced with the reality that it might be nonsense, the default is to argue. It used to be Google that was cited as the authority. Now it’s ChatGPT, Claude or Grok.
The fact that this is supposedly more advanced technology makes people less inclined to doubt the outputs. Add to this that the vast majority of people don’t know how to identify sources of genuine authority or read academic papers, and fact-checking rarely enters the equation.
That is, until someone with advanced knowledge calls BS. Admittedly, no tech is perfect, and mistakes are bound to occur. But despite each new release promising better results, the general consensus is that the problem is actually getting worse. So determined is AI to provide an answer that, when it can’t find one, it tends to fabricate one – and then presents it in a rather convincing way, full of confidence.
There have been cases of AI citing studies that don’t even exist. At the other end of the scale, AI sometimes says it can’t find the answer to a query – even when the person prompting knows full well that it exists and was simply hoping the AI could expand on what they already knew.
More than bias
Bias in AI is well known. Given the push to simplify processes while scaling rapidly, it was inevitable. But solving the problem of bias seems to be quite the challenge (and not quite the priority either). Misinformation and skewed facts don’t seem to matter to some. Meanwhile, trust in the outputs that AI generates is eroding – and so are the benefits.
AI can do things faster, more accurately and more efficiently – that has been the promise as more and more AI agents are rolled out. But what happens when it isn’t accurate? If every output needs to be checked, how much time does that take? Do the efficiency, cost and time-saving claims still hold? A 5% or even 10% error rate might be tolerable, but some figures put error rates as high as 70%.
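As a rough back-of-envelope sketch (every number below is an assumption chosen for illustration, not a measurement), here is how quickly a high error rate can erode the promised time savings once human verification is factored in:

    # Back-of-envelope model: does AI still save time once outputs must be verified?
    # All figures are illustrative assumptions, not real benchmarks.

    def effective_saving(manual_minutes, ai_minutes, review_minutes, error_rate, redo_minutes):
        """Average minutes saved per task when using AI, including checking and redoing bad outputs."""
        ai_total = ai_minutes + review_minutes + error_rate * redo_minutes
        return manual_minutes - ai_total

    # Assumed task: 30 min by hand, 5 min with AI, 10 min to verify, 25 min to redo a bad answer.
    for rate in (0.05, 0.10, 0.70):
        saved = effective_saving(manual_minutes=30, ai_minutes=5, review_minutes=10,
                                 error_rate=rate, redo_minutes=25)
        print(f"error rate {rate:.0%}: {saved:+.1f} minutes saved per task")

Under these entirely hypothetical numbers, the time advantage holds at a 5–10% error rate but vanishes altogether at 70% – which is precisely the point: the economics of AI depend on how often it gets things right.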
Maybe it’s time to dust off the old encyclopaedias. They may be dated, but at least they were created with rigorous fact-checking systems. AI companies could learn from that.