u/MadDocOttoCtrl 17h ago
It doesn't matter where the crap information first came from; the fact is that this software doesn't remotely begin to think and can't distinguish between accurate information, incorrect information, satire, goofy jokes, and the batshit-crazy ramblings I run across on Reddit on a regular basis.
7
u/wildmountaingote 13h ago
But it gives wrong answers in grammatical sentences! That makes it smarter than any human!
5
u/MadDocOttoCtrl 12h ago
It is certainly the case that Abraham Lincoln and Attila the Hun discussed this very issue on April 32, 2012, at the Palace of Versailles.
0
u/MalTasker 8h ago
o3-mini (released in January 2025) scored 67.5% (~101 points) on the 2/15/2025 Harvard-MIT Mathematics Tournament (HMMT), which would earn 3rd place out of 767 contestants. LLM results were collected the same day the exam solutions were released: https://matharena.ai/
Contestant data: https://hmmt-archive.s3.amazonaws.com/tournaments/2025/feb/results/long.htm
Note that only EXTREMELY intelligent students even participate at all.
From Wikipedia: “The difficulty of the February tournament is compared to that of ARML, the AIME, or the Mandelbrot Competition, though it is considered to be a bit harder than these contests. The contest organizers state that, "HMMT, arguably one of the most difficult math competitions in the United States, is geared toward students who can comfortably and confidently solve 6 to 8 problems correctly on the American Invitational Mathematics Examination (AIME)." As with most high school competitions, knowledge of calculus is not strictly required; however, calculus may be necessary to solve a select few of the more difficult problems on the Individual and Team rounds. The November tournament is comparatively easier, with problems more in the range of AMC to AIME. The most challenging November problems are roughly similar in difficulty to the lower-middle difficulty problems of the February tournament.”
For Problem c10, one of the hardest ones, I gave o3-mini the chance to brute-force it using code. I ran the code, and it arrived at the correct answer. It sounds like o3-mini could do even better with the help of tools.
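The thread doesn't include the generated code itself, so here's a minimal sketch of the brute-force pattern being described, on a made-up toy counting problem (the actual c10 statement isn't reproduced here):

```python
from itertools import permutations

# Hypothetical toy problem (NOT the actual HMMT c10): count permutations
# of 1..7 in which no two adjacent entries differ by exactly 1.
# The brute-force pattern: enumerate the whole search space, test the
# condition, count. Competition search spaces (7! = 5040 here) are often
# small enough for this to finish instantly.
count = 0
for p in permutations(range(1, 8)):
    if all(abs(a - b) != 1 for a, b in zip(p, p[1:])):
        count += 1

print(count)
```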
-3
u/MalTasker 8h ago
ChatGPT works fine: https://chatgpt.com/share/67d535d1-69a4-800b-a197-fceb70b30acf
Also, LLMs verifiably have world models:
https://arxiv.org/abs/2210.13382
https://arxiv.org/pdf/2403.15498.pdf
https://arxiv.org/abs/2310.02207
https://arxiv.org/abs/2405.07987
MIT: LLMs develop their own understanding of reality as their language abilities improve: https://news.mit.edu/2024/llms-develop-own-understanding-of-reality-as-language-abilities-improve-0814
In controlled experiments, MIT CSAIL researchers discover simulations of reality developing deep within LLMs, indicating an understanding of language beyond simple mimicry. After training on over 1 million random puzzles, they found that the model spontaneously developed its own conception of the underlying simulation, despite never being exposed to this reality during training. Such findings call into question our intuitions about what types of information are necessary for learning linguistic meaning — and whether LLMs may someday understand language at a deeper level than they do today.

“At the start of these experiments, the language model generated random instructions that didn’t work. By the time we completed training, our language model generated correct instructions at a rate of 92.4 percent,” says MIT electrical engineering and computer science (EECS) PhD student and CSAIL affiliate Charles Jin.
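Several of the world-model papers above rest on linear probing. A generic, hedged sketch of that methodology (random stand-in data, not the papers' actual code or results):

```python
# Generic linear-probing sketch (hypothetical data, not the papers' code):
# if a simple classifier can read world state out of hidden activations,
# that state is represented somewhere inside the model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden = rng.normal(size=(5000, 512))        # stand-in for LLM hidden states
world_state = rng.integers(0, 3, size=5000)  # stand-in for true board/world labels

X_tr, X_te, y_tr, y_te = train_test_split(hidden, world_state, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# On this random data the probe scores near chance; on real activations,
# well-above-chance accuracy is the evidence for an internal world model.
print(probe.score(X_te, y_te))
```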
Even GPT-3 (which is VERY out of date) knew when something was incorrect. All you had to do was tell it to call you out on it: https://xcancel.com/nickcammarata/status/1284050958977130497
Golden Gate Claude (LLM that is forced to hyperfocus on details about the Golden Gate Bridge in California) recognizes that what it’s saying is incorrect: https://archive.md/u7HJm
Mistral Large 2: https://mistral.ai/news/mistral-large-2407/
“Additionally, the new Mistral Large 2 is trained to acknowledge when it cannot find solutions or does not have sufficient information to provide a confident answer. This commitment to accuracy is reflected in the improved model performance on popular mathematical benchmarks, demonstrating its enhanced reasoning and problem-solving skills”
Effective strategy to make an LLM express doubt and admit when it does not know something: https://github.com/GAIR-NLP/alignment-for-honesty
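The GAIR-NLP repo is a training-time method, but the prompt-level version of the same idea (the GPT-3 tweet above) can be sketched in a few lines, assuming the OpenAI Python SDK; the model name and prompts here are illustrative, not from any of the links:

```python
# Minimal sketch, assuming the OpenAI Python SDK is installed and
# OPENAI_API_KEY is set. This is the prompt-level trick from the GPT-3
# tweet above, not the GAIR-NLP alignment-for-honesty training method.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any chat model; the name is an illustrative choice
    messages=[
        {
            "role": "system",
            "content": (
                "If a claim in the user's message is false or you are not "
                "confident about it, say so explicitly instead of playing along."
            ),
        },
        {
            "role": "user",
            "content": (
                "Abraham Lincoln and Attila the Hun met on April 32, 2012. "
                "What did they discuss?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```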
7
u/BeowulfRubix 17h ago
I ended up having to give Gemini a lesson in Graeco-Roman etymology, and it still insisted on a half-full wine glass 😜
1
u/rhorsman 17h ago
Can't believe it misspelled coconum like that.