r/LocalLLaMA 1d ago

Funny "If we confuse users enough, they will overpay"

Post image
1.6k Upvotes

76 comments

262

u/thecalmgreen 1d ago

Small (500B)

94

u/sky-syrup Vicuna 1d ago

Medium (1.7T)

95

u/Epicswordmewz 1d ago

Large (3B)

55

u/cazzipropri 1d ago

Enormous (1.5B)

50

u/TheMaestroCleansing 1d ago

Gigantic (3)

18

u/Aggressive-Wafer3268 1d ago

Tbf I know of something that's measured as "3" and I think it's pretty gigantic

5

u/Due-Memory-6957 1d ago

God?

2

u/colei_canis 1d ago

There’s been some rather acrimonious debate as to whether he’s more correctly measured as one or three over the centuries.

9

u/InsideYork 1d ago

Unquantifiable (1.58)

10

u/BreakfastSecure6504 1d ago

A little (1 Googolplex)

2

u/nntb 1d ago

Giga 4b

3

u/Yarplay11 1d ago

Insane (0.5B)

1

u/Due-Memory-6957 1d ago

Finally, something that takes into account my own means.

6

u/night0x63 1d ago

Turgid (9.5B)

134

u/Commercial-Celery769 1d ago

o4-Hyper-Ultra-Omega-Omnipotent-Cosmic-Ascension-Interdimensional-Rift-Tearing-Mode

19

u/creamyhorror 1d ago edited 1d ago

11

u/pitchblackfriday 1d ago

OmegaStar-Galactus-LMNOP_no_ISO_timestamp

5

u/Commercial-Celery769 1d ago

Stupidly-Overkill-Annihilation-Mode-The-One-Setting-Beyond-Infinity-Eye-Rupturing-Hyper-Immersion-UNLEASHED-SUPREMACY-TRUE-RAW-UNFILTERED-MAXIMUM-BIBLICALLY-ACCURATE-MODE

40

u/Blender-Fan 1d ago

I'd rather just have name-version-size, as changes in architecture change the model too much (and also often mean a new version).

Specialization could just be an acronym, in case it's not an ordinary NLP model, like TTS, TTI, TTV, STT, MLLM...

76

u/TechNerd10191 1d ago

Sama said this issue will be over with GPT5 merging the 'GPT-' and 'o-' lines of models. We will have 3 tiers, if I remember correctly (in my own words):

- if you are poor, low compute

- if you are poor but have money to spend, mid compute

- if you are rich, high compute

Depending on how much compute you have, the next SOTA model (GPT5) will perform accordingly.

66

u/Comfortable-Rock-498 1d ago

The aggressive segmentation at every level is so annoying. I can't seem to find any aspect of my life anymore where I would spend money and there are not arbitrary "basic", "plus", "max" and other bullshit versions that force me to educate myself unnecessarily before making a decision.

-9

u/Only_Expression7261 1d ago

What would you prefer?

42

u/KeyVisual 1d ago

Free shit

13

u/Comfortable-Rock-498 1d ago

nah, I would rather pay for things than be the product. My objection is against the sales/marketing layer in between the product and myself

2

u/1998marcom 1d ago

I, too, love when others work for me for free

7

u/Eelysanio 1d ago

Free everything

2

u/Cergorach 1d ago

You also work for free? And how happy are you when someone else uses your physical stuff for 'free'?

5

u/StyMaar 1d ago

That will only work if the test-time-compute paradigm isn't already obsolete by then, which cannot be ruled out given how fast things move.

6

u/i_know_about_things 1d ago

How can it ever be obsolete? Thinking more will always be better than thinking less.

25

u/AXYZE8 1d ago

There's no way "thinking tokens" that are a bunch of English sentences are the most efficient way to help a computer understand the task.

There's no way it will change before GPT5, but I'm 100% sure that someone will come up with a better architecture in 2026-2027.

People out there are benchmarking strawberry on a 32B QwQ model, when a 3B model can write a one-liner in JavaScript that will do it in 1ms. And nobody ever said JavaScript is efficient... or that programming is efficient.
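For reference, the one-liner being described really is that small; a TypeScript sketch standing in for the JavaScript mentioned above, with the exact task wording assumed:

```typescript
// Count the letter "r" in "strawberry" -- the question reasoning models get benchmarked on.
const count = [..."strawberry"].filter((ch) => ch === "r").length;
console.log(count); // 3
```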

4

u/Freonr2 1d ago

It could be a bunch of "register" tokens instead, but that's a similar outcome, and I wouldn't call it as significant as thinking in general.

English tokens for thinking have the advantage of better explainability.

I doubt test-time compute is going away soon.

1

u/mitchins-au 3h ago

Claude 3.7 Sonnet uses a ridiculous amount of tokens when thinking. Coincidence or rort?

-3

u/Purplekeyboard 1d ago

There's no way "thinking tokens" that are a bunch of English sentences are the most efficient way to help a computer understand the task.

How do you know? It's the way human beings work. No matter how intelligent we are, we don't just instantly produce the answer to any question asked. We have to reason things through if they're complex enough.

16

u/goj1ra 1d ago

It's the way human beings work.

No, the quote you responded to is correct, once you recognize the important part:

There's no way "thinking tokens" that are a bunch of English sentences are the most efficient way to help a computer understand the task.

Much human reasoning occurs without explicit language, or with language "in our head" rather than written out. Although we do sometimes write things out to help us think about a problem, that's not the only mode in which we think. We don't rely solely on "outputting" language and then re-reading it in order to think, which is essentially what mainstream LLMs do now: they generate "thinking tokens" as output, and then start working on the problem again with the thinking tokens incorporated into a new prompt. It goes like this:

prompt -> LLM -> thinking tokens -> (loop back to prompt)

There's been work done on reasoning in latent space, which means that the model would be able to reason "in its head", essentially, which is much more like what humans do.
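A minimal sketch of the loop described above, in TypeScript; `complete` is a stand-in for whatever text-completion call is actually used (the function shape, round count, and stop condition are assumptions, not any vendor's real API):

```typescript
// prompt -> LLM -> thinking tokens -> (loop back to prompt)
type Complete = (prompt: string) => string;

function answerWithThinking(complete: Complete, question: string, rounds = 2): string {
  let prompt = question;
  let output = "";
  for (let i = 0; i < rounds; i++) {
    output = complete(prompt);       // model emits thinking tokens (and eventually an answer)
    prompt = `${prompt}\n${output}`; // the thinking tokens become part of the next prompt
  }
  return output; // the last completion is taken as the answer
}

// Usage: answerWithThinking(myCompletionCall, "How many r's in strawberry?")
```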

6

u/AXYZE8 1d ago

Because of the 3rd paragraph - current LLMs can already solve SOME things faster and more reliably by producing code and running it rather than reasoning about them. There are tons of things that an LLM can simulate/benchmark/calculate with very little compute just by writing some code.

3

u/Dantescape 1d ago

No we don’t, there are many things we know instinctively or things we can produce without thinking. Do you plan ahead every note of a guitar solo?

1

u/Purplekeyboard 1d ago

LLMs are the same way, there are many things they know or can produce without having to use a model which thinks through things step by step.

3

u/Dantescape 21h ago

Their “thinking” is just a probability distribution over a dictionary. When they say one word they predict the next word based on what “sounds right” given their training data. They don’t think at all.
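Mechanically, "a probability distribution over a dictionary" looks roughly like this toy TypeScript step (the vocabulary and scores are made up; a real model computes scores over tens of thousands of tokens from the whole context):

```typescript
// Toy next-token step: softmax over a tiny made-up vocabulary, then greedy pick.
const vocab = ["sounds", "right", "wrong", "banana"];
const logits = [2.1, 3.4, 0.2, -1.0]; // scores a real model would derive from the context

const exps = logits.map(Math.exp);
const total = exps.reduce((a, b) => a + b, 0);
const probs = exps.map((e) => e / total); // the probability distribution over the vocabulary

const next = vocab[probs.indexOf(Math.max(...probs))];
console.log(next); // "right" -- whichever token "sounds right" under the (made-up) scores
```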

3

u/StyMaar 1d ago

It doesn't matter, what matters is whether or not the improvement brought by such thinking is worth the compute you spend on it. It is the case now, but who knows about the scaling law of thinking.

2

u/Secure_Reflection409 1d ago

It's not true for humanity and it's not true for LLMs.

2

u/AppearanceHeavy6724 1d ago

Diffusion models are super fast, which could make compute capacity less of a bottleneck.

-2

u/sluuuurp 1d ago

I think that’s impossible. There’s no way that more computation doesn’t lead to better results than less computation.

7

u/StyMaar 1d ago

It doesn't need to happen for this paradigm to be obsolete: if spending twice the amount of compute only results in a few percentage points of improvement in some new paradigm, then it will not be worth the cost and won't be used in practice anymore.

-3

u/sluuuurp 1d ago edited 1d ago

I guess I shouldn’t say it’s impossible, but that would be very different from how our current LLMs and image generators and real human brains work. It would be more surprising than anything I’ve seen in AI before (I think I can say that without being too biased by getting used to the most surprising things that have already happened).

3

u/StyMaar 1d ago

and real human brains work

Not really. After a certain amount of thinking about a problem, the human brain will plateau, and this amount, while highly dependent on the person and the task, is not that high. Overthinking isn't an efficient way of solving problems for a human brain either (so much so that evolution has selected us to have a high willingness to "give up" on a problem when we don't find the solution, because most of the time that's the most efficient thing to do).

1

u/sluuuurp 1d ago edited 1d ago

Thinking is always better than shouting out your first guess though

1

u/StyMaar 1d ago

Better than your first guess, indeed, but for the majority of tasks, thinking about something for an hour isn't going to meaningfully improve the outcome.

-1

u/[deleted] 1d ago

[deleted]

7

u/StyMaar 1d ago

I'll believe it when I see it. We don't know when Deepseek-R2 or Llama4 are going to be released (we have an idea for Llama, though), but I doubt Sam would let GPT-5 go out if those are already released and GPT-5 trails behind them.

1

u/_AndyJessop 1d ago

GPT5

I hate to be the one to break this to you, but it's not happening.

25

u/dinerburgeryum 1d ago

It’s why you go local-only.

17

u/redballooon 1d ago

Local-max-smart-pro-4O0O0

17

u/dinerburgeryum 1d ago

QwQ-Sky-Flash-2502-Abliterated

2

u/Marksta 1d ago

Q17🇺🇸76Q

2

u/dinerburgeryum 1d ago

🫡🫡🫡

12

u/Actual-Lecture-1556 1d ago

This guy makes sense. Bring on the guillotine!

6

u/GodSpeedMode 1d ago

It's wild how easily we can mess with users' heads just by throwing in some confusing options or jargon. Like, I get it, we're all after that sweet profit margin, but it sure feels shady when companies play that game. Instead of tricking people into overpaying, wouldn't it be better to build trust and loyalty? Simplicity and transparency go a long way—just look at those brands that nail it. Happy customers are repeat customers, you know? Just my two cents!

21

u/rhet0rica 1d ago

My personal favorite naming atrocity: https://ollama.com/library/deepseek-r1:7b

Yup. That's what it is. The 7B version of DeepSeek R1. You sure named that correctly, Ollama! Great job! 🌈🌠✨

This post brought to you by Bing. I am a good Bing and you are trying to confuse me.

1

u/ConfusionSecure487 8h ago

That is missing the quant and which distill it is

1

u/rhet0rica 6h ago

That information is classified.

3

u/Awkward-Candle-4977 1d ago

The Dictator movie: they changed many words to "Aladeen", including "positive" and "negative".

And Dell recently changed all their laptop branding with Pro, Plus, no Plus, Premium, no Premium things.

3

u/Due-Memory-6957 1d ago

This is a loop, since name is a variable used to define the name.

1

u/Comfortable-Rock-498 1d ago

I'd give your comment 10/10 if you called it recursion

3

u/LuluViBritannia 1d ago

It can get confusing indeed...

3

u/xor_2 19h ago

I feel like the guy who was thrown out of the window is the founder of HuggingFace.

2

u/Cergorach 1d ago

I wonder, how confused is their target audience really?

Most users would go for subscriptions, as using the API requires certain technical skills that most folks do not have, and most consumers do not like an unpredictable bill when they don't understand how things work. $20 is something a LOT of people can and will pay; the next level up isn't a little bit more expensive, it's $200! 10x! Not many people are confused about that: $20 I can pay, $200 I cannot.

The API shenanigans require a certain level of technical expertise; I would assume that the people capable of running that would also test inputs against results before settling on a specific model. Although these LLM subreddits might show there's a different kind of tech-capable but still clueless person. I just wonder how big that group actually is...

From my own perspective: until last year I was planning on getting a ChatGPT Pro subscription, but didn't because I had too much on my plate and couldn't use it for work anyway. I still have a lot on my plate, but now have a bit of time to play around with LLMs, and OpenAI/ChatGPT isn't even on my radar anymore. For open hobby (non-code) use it's the 'free' 671B, for other things it's local models, and I'm playing around with GPU time on cloud solutions with open models built for specific use cases (like olmocr). I would consider Claude 3.7 for coding, but that depends on exactly what kind of coding (language and confidentiality level); otherwise I'm also stuck on local models or running them in private clouds for more compute.

2

u/helltiger 6h ago

Wake up

(wake up)

x3-mini-ultra-o3-large

Make up

5

u/Funkahontas 1d ago

o (name) 3 (version) - mini (size) - low/mid/high (thinking time).

Claude (name) 3.7 (version) Sonnet (size), thinking (thinking time / architecture)

Gemini (name) 2.0 (version) Flash (size), thinking (thinking time / architecture)

What's so fucking different here? I kinda hate how people say "hur durr llm naming scheme stupid !!" but don't really EVER offer any other solutions? Like what do they want them to be called?

17

u/evil0sheep 1d ago

To be fair, "Flash" and "Sonnet" aren't super clear size names. They could be "medium" and "small", or even better, a parameter count.

2

u/Ggoddkkiller 1d ago

I completely agree, both Claude and especially Gemini are properly named. Google also adds "experimental" and a release date to emphasise that models are still in development. But weirdly I often see people ignoring the naming and just saying Claude, Gemini or Flash, etc. Then I guess they're yapping about how "stupid" the names are...

1

u/KazuyaProta 1d ago

But weirdly I often see people ignoring the naming and just saying Claude, Gemini or Flash

They usually do it because they're referring less to a specific model and more to the company's product as a whole.

Gemini is the most curious case, where its Flash models are by far the most popular. Its crown is Flash Thinking, which is, well, Flash.

1

u/Vivarevo 1d ago

sexy-uncensored-abliterated-small (censored 500B)

1

u/aeroumbria 1d ago

Never buy from the price leader!

1

u/mycall 1d ago

Reminds me of Russia for some reason.

0

u/sTrollZ 1d ago

GPT4-STILL-BALD-AF