

Can Elon Musk Hallucinate?
Or why we should revisit the Turing test as a measure of true Artificial Intelligence
There has been a lot of talk about LLMs, and ChatGPT in particular, hallucinating. But what if hallucination is actually the true measure of intelligence?
Learning to Hallucinate
Imagine a machine going through a learning process. At first it has very little data and its predictions are all inaccurate. But then it acquires a bit more data, and while the majority of its predictions are still inaccurate, it gets rewarded for some of the ones that are correct. An external group of customers and scientists rewards this machine with tangible “goodies”.
As an example, this machine is totally off on subjects related to foreign languages and cultures, but is spot on with everything related to hard sciences and math. So our machine gets rewards in those areas. And as it digs deeper into them, analyzing more data and making predictions about the world, it gets better and better. It becomes so good in the area of science that when asked about its confidence, it states that it is absolutely confident in its responses (98% confident).
But then we ask this machine about its confidence in subjects related to foreign cultures or languages, and it, too, claims its confidence is 98% - because, well, its confidence is uniform, even though it consistently produces errors in those areas.
The machine does not start to exhibit less confidence just because it is probed on another subject area. Its confidence is a measure of how it is evaluated and rewarded. And thus far, its rewards have all been associated with how well it excels in sciences and math - its core subject areas.
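To make this toy model concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption - the domain names, the accuracy numbers, and the update rule - but it captures the mechanism: a single, domain-blind confidence score that is updated only by rewards from the machine’s core subjects, and then reported everywhere.

```python
import random

random.seed(42)

# Hidden, true accuracy per domain (illustrative numbers):
# strong in math and science, weak in languages and cultures.
true_accuracy = {"math": 0.95, "science": 0.95,
                 "languages": 0.40, "cultures": 0.35}

confidence = 0.5  # the machine starts out unsure

# Training: external evaluators only probe (and reward) the core domains,
# and each outcome nudges the single global confidence toward the reward.
for _ in range(1000):
    domain = random.choice(["math", "science"])
    reward = 1.0 if random.random() < true_accuracy[domain] else 0.0
    confidence += 0.05 * (reward - confidence)

# Probing: the machine reports the same confidence in every domain.
for domain, acc in true_accuracy.items():
    print(f"{domain:>10}: stated confidence {confidence:.0%}, "
          f"actual accuracy {acc:.0%}")
```

Run it and the stated confidence settles near the core-domain accuracy, while the actual accuracy in “languages” and “cultures” stays far below it. That gap - high, uniform confidence paired with low out-of-domain accuracy - is the hallucination.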
And thus an ability to hallucinate is born!
Hallucination - a key “Human” trait?
But imagine, for a minute, that we replace foreign cultures with “all-things” Government, and the machine with Elon Musk.
He is an incredibly intelligent man who, in his life, has made a lot of accurate predictions in areas related to technology and entrepreneurship. He has also made many more inaccurate predictions, but all while getting rewarded for the correct ones. And, yes, that affected his confidence. He has grown to be very confident about everything (98% confident).
And then, suddenly, he is put on the spot, sharing his opinions in mini-tweets about topics he, maybe, has little knowledge about. And what do you know - he responds with something that he sincerely believes he knows for sure (he is 98% confident in everything, after all), but which can be entirely untrue.
[Embedded tweet from Elon Musk about George Soros]
Whatever your opinion of George Soros, there is so much wrong with the above tweet that it is worth pointing out: this is indeed a hallucination.
Yes, Soros’ position is something you, I, Elon, can disagree with. We might want to be harsher on crime, to see cities such as San Francisco flourish again.
And, yes, apparently Soros funds DAs. But that claim is an exaggeration in both relative and absolute terms. Politico estimates this funding at $3 million (compare that to the $400 million donated by Mark Zuckerberg to various government-related non-profits).
Soros is a major activist financier. And that alone puts him in a negative light - right there with Bear Stearns, Lehman Brothers, etc. In general, it is hard to find a financier who is going to be popular unless they wear a hoodie and call themselves a “Founder”.
Others have further pointed out that Soros actually has a track record of supporting civilization - directly, with funding for education, of which, in full disclosure, yours truly is a direct beneficiary.
So, yeah, the facts (i.e. Soros has an active civic position, which he backs with less than 1% of his total wealth) just don’t support the claim that “Soros wants to erode the very fabric of civilization”.
It is simply NOT accurate and NOT correct to single out George Soros like that in front of an audience of millions of people. Period.
Turing Test
For ages we have relied on Alan Turing’s test to assess whether a machine or system mimics human intelligence. But I want to suggest that the ability to hallucinate is perhaps a better measure.
Elon Musk, of course, is not the only one who hallucinates. A car driver who tells a police officer that he was driving below the speed limit could be hallucinating, truly believing his or her own words despite contrary evidence. And likewise, a police officer who wrongfully stops a driver could be hallucinating in his or her claim that the driver was exceeding the speed limit.
Hallucination is such a key human trait that one could argue it is a key survival feature, without which we would not have any ability to continue taking risks and making mistakes.
And those of us who take more risks and make more mistakes - people like Elon Musk - perhaps have to be able to hallucinate more on average in order to continue learning and surviving despite all the mistakes.
Which brings us to an important point 👇🏽
If hallucination is what helps humans learn - should this not be the measure of Artificial Intelligence?
The concept of hallucination as a measure of artificial intelligence raises interesting questions about the nature of learning and intelligence. What if the test for true AI was never the Turing test but whether AI can hallucinate?
Hallucination is a byproduct of the learning process, where an entity, be it a machine or a human, receives rewards and recognition for correct predictions in specific domains. This reinforcement leads to an inflated sense of confidence, causing the entity to extend its predictions beyond its area of expertise, effectively hallucinating.
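If we wanted to turn this into an actual test - a hypothetical sketch, not an established benchmark - it could look like a calibration check: compare the confidence a system states with the accuracy it actually achieves, domain by domain, and score it by how far the confidence overshoots outside its home turf. The function name and the numbers below are assumptions for illustration.

```python
def hallucination_score(stated_confidence, accuracy_by_domain):
    """Largest gap between the confidence a system states and its measured
    accuracy in any domain. Near zero means well calibrated everywhere;
    large values mean it confidently overreaches beyond its expertise."""
    return max(stated_confidence - acc for acc in accuracy_by_domain.values())

# Hypothetical numbers echoing the example above.
score = hallucination_score(0.98, {"math": 0.95, "science": 0.95,
                                   "languages": 0.40, "cultures": 0.35})
print(f"{score:.2f}")  # 0.63: highly confident, badly wrong outside its core subjects
```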
Hallucination, by definition, involves perceiving things that are not based on reality or evidence - a key trait for someone or something taking large risks, with the potential for novel insights. To use the example of Elon Musk: the man demonstrates unwavering confidence in his opinions despite lacking expertise in many subjects. This suggests that hallucination could be a characteristic that helps us learn and survive.
Something to think about… Happy Friday!
About the Author
In my former life I was a Data Scientist. I studied Computer Science and Economics as an undergrad and then Statistics in graduate school, ending up at MIT and, among other places, Looker (joined in 2014), a business intelligence company that Google acquired for its Google Cloud division.