Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say their main LLM seems to respond sarcastically.
- Sadness and hope: 11% say the main model they use seems to express sadness, while 24% say it expresses hope.
As far as I can tell from the article, the definition of “smarter” was left to the respondents, and “answers as if it knows many things that I don’t know” is certainly a reasonable definition – even if you understand that, technically speaking, an LLM doesn’t know anything.
As an example, I used ChatGPT just now to help me compose this post, and the answer it gave me seemed pretty “smart.”
The poll is interesting for the other stats it provides, but all the snark about these people being dumber than LLMs is just silly.