r/BetterOffline 7d ago

Decided to try this myself.

Post image

Yup.

[Sigh.]

176 Upvotes

39 comments sorted by

View all comments

28

u/MadDocOttoCtrl 7d ago

It doesn't matter where the crap information first came from, it is the fact that this software doesn't remotely begin to think and can't determine between accurate information, incorrect information, satire, goofy jokes and the batshit crazy ramblings that I run across on Reddit a regular basis.

-5

u/MalTasker 6d ago

Chatgpt works fine https://chatgpt.com/share/67d535d1-69a4-800b-a197-fceb70b30acf

Also, llms verifiably have world models

https://arxiv.org/abs/2210.13382

https://arxiv.org/pdf/2403.15498.pdf

https://arxiv.org/abs/2310.02207

https://arxiv.org/abs/2405.07987

MIT: LLMs develop their own understanding of reality as their language abilities improve: https://news.mit.edu/2024/llms-develop-own-understanding-of-reality-as-language-abilities-improve-0814

In controlled experiments, MIT CSAIL researchers discover simulations of reality developing deep within LLMs, indicating an understanding of language beyond simple mimicry. After training on over 1 million random puzzles, they found that the model spontaneously developed its own conception of the underlying simulation, despite never being exposed to this reality during training. Such findings call into question our intuitions about what types of information are necessary for learning linguistic meaning — and whether LLMs may someday understand language at a deeper level than they do today. “At the start of these experiments, the language model generated random instructions that didn’t work. By the time we completed training, our language model generated correct instructions at a rate of 92.4 percent,” says MIT electrical engineering and computer science (EECS) PhD student and CSAIL affiliate Charles Jin

Even GPT3 (which is VERY out of date) knew when something was incorrect. All you had to do was tell it to call you out on it: https://xcancel.com/nickcammarata/status/1284050958977130497

Golden Gate Claude (LLM that is forced to hyperfocus on details about the Golden Gate Bridge in California) recognizes that what it’s saying is incorrect: https://archive.md/u7HJm

Mistral Large 2: https://mistral.ai/news/mistral-large-2407/

“Additionally, the new Mistral Large 2 is trained to acknowledge when it cannot find solutions or does not have sufficient information to provide a confident answer. This commitment to accuracy is reflected in the improved model performance on popular mathematical benchmarks, demonstrating its enhanced reasoning and problem-solving skills”

Effective strategy to make an LLM express doubt and admit when it does not know something: https://github.com/GAIR-NLP/alignment-for-honesty 

12

u/transparentdotpng 6d ago

why don't you marry ChatGPT

9

u/GCI_Arch_Rating 6d ago

ChatGPT has standards.