There is a reason why they’re called LLM - large language models
They don’t understand anything, they’re just picking the statistically most likely result based on their training data (roughly like the toy sketch below).
At least AI isn’t just LLMs, so the technology isn’t dead, but LLMs are just word generators and can’t really reason about or understand what they’re trying to say.
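To be clear, this is just a toy sketch of the statistical idea, not how any real model is implemented; the vocabulary and counts are completely made up:

```python
import random

# Toy illustration: pick the next word purely from frequency statistics
# over an imaginary "training corpus". Counts are invented for the example.
next_word_counts = {
    "two plus two equals": {"four": 90, "five": 6, "twenty-two": 4},
}

def sample_next_word(context: str) -> str:
    counts = next_word_counts[context]
    words = list(counts)
    weights = [counts[w] for w in words]
    # No notion of arithmetic here; it just samples whichever continuation
    # was statistically most common in the (made-up) training data.
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word("two plus two equals"))  # usually "four", but not always
```

Real models do something far more sophisticated than a lookup table, but the output is still a probability distribution over the next token, not a calculation.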
Didn’t AI literally have one job when it started out?
How the hell does it fail at math? How is that even possible lmao
It feels a bit like modern tarot, to be honest
Vibe mathsing.
Because they’re not large math models, they’re large language models. Language is always ambiguous.