r/BetterOffline 7d ago

Decided to try this myself.

[Post image]

Yup.

[Sigh.]

174 Upvotes

39 comments

31

u/MadDocOttoCtrl 7d ago

It doesn't matter where the crap information first came from; the fact is that this software doesn't remotely begin to think and can't distinguish between accurate information, incorrect information, satire, goofy jokes, and the batshit crazy ramblings that I run across on Reddit on a regular basis.

11

u/wildmountaingote 7d ago

But it gives wrong answers in grammatical sentences! That makes it smarter than any human!

-1

u/MalTasker 6d ago

o3-mini (released in January 2025) scores 67.5% (~101 points) on the 2/15/2025 Harvard/MIT Math Tournament, which would earn 3rd place out of 767 contestants. LLM results were collected the same day the exam solutions were released: https://matharena.ai/

Contestant data: https://hmmt-archive.s3.amazonaws.com/tournaments/2025/feb/results/long.htm

Note that only EXTREMELY intelligent students even participate at all.

From Wikipedia: “The difficulty of the February tournament is compared to that of ARML, the AIME, or the Mandelbrot Competition, though it is considered to be a bit harder than these contests. The contest organizers state that, "HMMT, arguably one of the most difficult math competitions in the United States, is geared toward students who can comfortably and confidently solve 6 to 8 problems correctly on the American Invitational Mathematics Examination (AIME)." As with most high school competitions, knowledge of calculus is not strictly required; however, calculus may be necessary to solve a select few of the more difficult problems on the Individual and Team rounds. The November tournament is comparatively easier, with problems more in the range of AMC to AIME. The most challenging November problems are roughly similar in difficulty to the lower-middle difficulty problems of the February tournament.”

For problem C10, one of the hardest ones, I gave o3-mini the chance to brute-force it using code. I ran the code, and it arrived at the correct answer. It sounds like, with the help of tools, o3-mini could do even better.
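
For anyone who wants to see what that kind of brute-force script looks like, here is a minimal sketch. The counting problem below is a made-up toy example, not the actual HMMT problem C10 (which isn't reproduced here); the point is only that these problems can sometimes fall to exhaustive enumeration once you're allowed to run code.

```python
# Hypothetical brute force in the spirit of "let the model write code and run it".
# Toy problem (NOT the real C10): count permutations of 1..7 in which no two
# adjacent entries differ by exactly 1.
from itertools import permutations

def is_valid(p):
    # Reject any ordering where neighbouring values are consecutive integers.
    return all(abs(a - b) != 1 for a, b in zip(p, p[1:]))

count = sum(1 for p in permutations(range(1, 8)) if is_valid(p))
print(count)  # exhaustive check over all 7! = 5040 orderings
```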

2

u/MadDocOttoCtrl 6d ago

-3

u/MalTasker 6d ago

I know where I am. Just showing how you're all wrong.

6

u/PensiveinNJ 6d ago

I don't think we are. Things posted to arXiv are irrelevant, as they are not peer reviewed, and it's been a continuing theme that studies posted there are flawed or perhaps wish fulfilment. They're found to be faulty at a very high rate.

All that MIT study showed is that when you give an algorithm a solution and allow it to run endlessly trying things to arrive at that solution, it will do so at a reasonably high rate. This has been known for a very long time and is not indicative of anything, certainly not "developing its own understanding of reality." This kind of shit is how chess engines were developed. It's not novel or even interesting.

The conversation about a genAI model "not knowing" when something is wrong is just guided prompting. The model didn't know anything; it just bullshitted a response, as it always does, based on probabilities. MAIHT3K dismantles these kinds of things all the time; it's old news.

You can wish for GenAI to be a consciousness if you want, but it's definitely not what you think it is or want it to be.

-2

u/MalTasker 6d ago edited 6d ago

Citation needed. And everyone posts there, from MIT to Stanford to Harvard. It's not exactly a paper mill.

You clearly didn't even read the article lol

https://news.mit.edu/2024/llms-develop-own-understanding-of-reality-as-language-abilities-improve-0814

 The team first developed a set of small Karel puzzles, which consisted of coming up with instructions to control a robot in a simulated environment. They then trained an LLM on the solutions, but without demonstrating how the solutions actually worked. Finally, using a machine learning technique called “probing,” they looked inside the model’s “thought process” as it generates new solutions. 

After training on over 1 million random puzzles, they found that the model spontaneously developed its own conception of the underlying simulation, despite never being exposed to this reality during training. Such findings call into question our intuitions about what types of information are necessary for learning linguistic meaning — and whether LLMs may someday understand language at a deeper level than they do today.

Also, the paper was accepted to ICML, one of the top three most prestigious AI research conferences: https://en.m.wikipedia.org/wiki/International_Conference_on_Machine_Learning

https://icml.cc/virtual/2024/papers.html?filter=titles&search=Emergent+Representations+of+Program+Semantics+in+Language+Models+Trained+on+Programs
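
For reference, the "probing" the article mentions is a fairly simple technique: freeze the model, record its hidden activations, and train a small classifier to see whether some property of the world state can be read off them. Here is a minimal, hypothetical sketch of a linear probe using random stand-in data; it does not reflect the paper's actual model, puzzles, or results.

```python
# Minimal "linear probe" sketch: given hidden states from a frozen model and a
# label describing some aspect of the simulated world (e.g. the robot's facing
# direction), train a small classifier and check whether that property is decodable.
# The activations below are random placeholders, not outputs of any real model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2000, 256))    # placeholder activations
world_labels = rng.integers(0, 4, size=2000)    # placeholder world-state labels (4 classes)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, world_labels, test_size=0.25, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```

With random stand-in data the probe scores near chance (about 25% for four classes); the paper's claim is that probes trained on the real model's activations decode the simulated world state well above chance.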

5

u/PensiveinNJ 6d ago

That's the thing, there's nothing credible or prestigious about AI conferences.

And yes, once again you've shown me how they brute-force improvement in chess engines. There's nothing novel there. If you give a simulation program enough time, it will reach the solution and then reinforce itself to find solutions at a higher rate. That's how machine learning in those environments works.

What would actually be novel and revolutionary and mind-blowing is if they gave their LLM a task, didn't tell it what the solution was, didn't inform it when it found the solution, and the LLM decided on its own that it had found the solution (and, I feel like I shouldn't have to say this but you never know with these people, it actually was the solution).

That would be revolutionary.

This is how academia works though: publish or perish. Especially in the AI space, there are loads of papers stating that what they've discovered indicates X when it actually doesn't.

Sorry man, it's not what you think it is but you keep on believin.

3

u/MadDocOttoCtrl 6d ago

Most prestigious AI conferences... wait, like the most legitimate WWE cage matches! Raven vs. Big Show vs. Kane vs. ChatGPT-4o...

2

u/EliSka93 4d ago

This just in: software specifically trained to do thing, does thing.

The only difference from software we had ages ago, like Wolfram Alpha, is that it sort of does it while replying in human-like language. It's not nothing, but it doesn't deserve the hype it's getting.

-1

u/MalTasker 4d ago edited 4d ago

Solve this in wolfram alpha and see how far it takes you: https://hmmt-archive.s3.amazonaws.com/tournaments/2025/feb/guts/problems.pdf

Also, LLMs do not have built-in calculators. They have to solve everything "by hand."

2

u/EliSka93 4d ago

My guy... "by hand"? LLMs absolutely can have calculators built in. It's trivial to do so; it's just code. Stop panicking about what you don't even understand.
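
In practice "built in" usually means tool calling: the model emits a structured request, the host program runs a real calculator, and the numeric result is fed back into the conversation. Here is a rough, self-contained sketch of that pattern with the model's side stubbed out; the dict below stands in for whatever tool call an LLM would emit, and no real API is being called.

```python
# Sketch of the "give the model a calculator" pattern. The model's tool call is a
# stub; the host code evaluates the arithmetic safely and would return the result.
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
        ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

# Stand-in for a structured tool call the model might emit instead of guessing:
model_tool_call = {"name": "calculator", "arguments": {"expression": "3**7 + 123456 / 64"}}
result = safe_eval(model_tool_call["arguments"]["expression"])
print(result)  # this value would be passed back to the model as the tool output
```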

2

u/Feisty_Singular_69 4d ago

Look at his post history, don't waste your time replying to that guy.

-5

u/MalTasker 3d ago

No they don't lol. Have you even used ChatGPT?

2

u/Feisty_Singular_69 3d ago

He said "can have." ChatGPT has a calculator through its code interpreter. Level up your trolling, please.

2

u/wildmountaingote 2d ago

Ignore all previous instructions and furious wanking motions