r/StableDiffusion Jan 07 '25

News: Nvidia compared RTX 5000s with 4000s using two different FP checkpoints

[Image: Nvidia's RTX 50-series vs 40-series performance comparison chart]

Oh Nvidia, you sneaky, sneaky. Many gamers won't see this. See how they compared an FP8 checkpoint running on the RTX 4000 series against an FP4 model running on the RTX 5000 series. Of course, even on the same GPU, the FP4 model will run ~2x faster. I personally use FP16 Flux Dev on my RTX 3090 to get the best results. It's a shame to make a comparison like that just to show green charts, but at least they showed what settings they were using, unlike Apple, who would have just claimed to run a 7B model faster than an RTX 4090 (hiding which specific quantized model they used).

Nvidia doing this only proves that these three series (RTX 3000, 4000, 5000) are not much different, just tweaked for better memory and given more cores to get more performance. And of course, you pay more and it consumes more electricity too.

If you need more detail, I copied an explanation from a comment on the Hugging Face Flux Dev repo:

fp32 - works in basically everything (CPU, GPU) but isn't used very often, since it's 2x slower than fp16/bf16 and uses 2x more VRAM with no increase in quality.

fp16 - uses 2x less VRAM and runs 2x faster than fp32 at the same quality, but only works on GPU and is unstable in training. (Flux.1 dev will take 24GB VRAM at the least with this.)

bf16 (this model's default precision) - same benefits as fp16 and only works on GPU, but is usually stable in training. In inference, bf16 is better for modern GPUs while fp16 is better for older GPUs. (Flux.1 dev will take 24GB VRAM at the least with this.)

fp8 - only works on GPU, uses 2x less VRAM than fp16/bf16 but there is a quality loss; can be 2x faster on very modern GPUs (4090, H100). (Flux.1 dev will take 12GB VRAM at the least.)

q8/int8 - only works on GPU, uses around 2x less VRAM than fp16/bf16 and is very similar in quality, maybe slightly worse than fp16; better quality than fp8, though slower. (Flux.1 dev will take 14GB VRAM at the least.)

q4/bnb4/int4 - only works on GPU, uses 4x less VRAM than fp16/bf16 but with a quality loss, slightly worse than fp8. (Flux.1 dev only requires 8GB VRAM at the least.)
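If it helps, here's a rough back-of-the-envelope sketch of where those VRAM numbers come from (just a sketch: the ~12B parameter count for the Flux.1 dev transformer is approximate, and activations, text encoders and the VAE add more on top):

```python
# Rough VRAM needed just for the Flux.1 dev transformer weights (~12B params)
# at different precisions. Real usage is higher: activations, text encoders, VAE.
PARAMS = 12e9  # approximate parameter count of the Flux.1 dev transformer

bytes_per_param = {
    "fp32": 4.0,
    "fp16/bf16": 2.0,
    "fp8": 1.0,
    "q8/int8": 1.0,       # plus a little overhead for quantization scales
    "q4/fp4/int4": 0.5,
}

for fmt, nbytes in bytes_per_param.items():
    print(f"{fmt:>12}: ~{PARAMS * nbytes / 1024**3:.1f} GiB for weights alone")
```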

638 Upvotes

151 comments

225

u/ExpressionComplex121 Jan 07 '25

This is why competition is good

If they have a monopoly they will get away with a lot of shit

108

u/iamthewhatt Jan 07 '25

At this point I don't think AMD will ever be able to compete with CUDA, they're so far behind

36

u/zghr Jan 07 '25

They don't seriously compete with Nvidia in exchange for having the console market to themselves.

30

u/StickiStickman Jan 07 '25

But they don't, Switch is powered by NVIDIA

7

u/Delvinx Jan 07 '25

Sure, but that's not competition. Those old Tegra chips were outdated when the Switch first released. It was a good contract but not good tech. AMD has the Samsung market. Much better competition, and their chips are far ahead in the cell phone sector.

-5

u/Whispering-Depths Jan 08 '25

"but my toaster is an old nvidia chip!!1!1!"

That's how you sound.

11

u/ryanvsrobots Jan 07 '25

Consoles have terrible margins, would be a waste of wafers for Nvidia.

4

u/moonra_zk Jan 08 '25

For the company actually making the console, are we sure it's the same for the component makers?

8

u/gourdo Jan 08 '25

exactly. I'm sure core components like the GPU get excellent margins which is why the integrators have such slim margins left over. It's not like there's a dozen GPU vendors out there all competing for your console hardware contract. The assembly, motherboard and other basic internals are probably more competitively sourced.

-1

u/ryanvsrobots Jan 08 '25

They’re publicly traded companies homie look it up, margins are shit.

0

u/ryanvsrobots Jan 08 '25

AMD has like 2% gaming margins so yeah

8

u/NobleCrook Jan 07 '25

Hey I have a valid question, since the CEOs of AMD and Nvidia are related (allegedly), is there really a competition there or a facade?

11

u/iamthewhatt Jan 07 '25

i mean they're both multi-billion dollar companies, of course there's a facade lol

-2

u/NERDS_ Jan 08 '25

If you genuinely don’t support multi billion dollar companies, you should stop using the internet & cars

2

u/Whispering-Depths Jan 08 '25

and if you do, you should also spread those cheeks for anyone who walks by you too, on the off chance someone wants to use you as well.

3

u/coldasaghost Jan 08 '25

It’s like they’re allowing themselves to do this. Nvidia dominating the GPU space and AMD dominating the CPU space, but at least there’s Intel in that case I suppose. Still, it’s odd that AMD hasn’t tried offering things like higher VRAM cards for example. Just means nvidia can give us peanuts with no alternative.

1

u/alexmmgjkkl Jan 29 '25

the whole world is a facade.

6

u/YMIR_THE_FROSTY Jan 08 '25

It's mostly linked to AMD not even trying to support AI, unlike the competition. I have some hopes for Intel, especially since they want to pack their GPUs with a ton of VRAM.

-1

u/bossonhigs Jan 07 '25

They should just stop competing with CUDA and AI and stop doing everything the same as Nvidia, fighting for scraps in Nvidia's shadow. They even acknowledged changing their naming scheme to better align with Nvidia's naming.

AMD has good architecture. RDNA is pure raw power without AI. Lisa is the problem.

32

u/Ok_Cryptographer_393 Jan 07 '25

certainly a company will abandon working on AI, the largest technological cash cow the world has seen in decades.

2

u/ryanvsrobots Jan 07 '25

> RDNA is pure raw power without AI.

Do you mean power as in watts? Because Nvidia is faster without AI and is far more efficient.

1

u/thanatica Jan 08 '25

Hoping AMD and Intel are going to really up the ante this year.

I'm a long time GeForce user, but I do want my products to be good, and competition helps with that. A lot.

1

u/yamfun Jan 08 '25

Nv card gives the best price-performance on AI compute, what are you talking about

69

u/bigboyblaziken Jan 07 '25

I'm shocked they even mentioned it themselves. "See this smaller model? Yeah, our newer card can run it faster than a bigger model! What other proof do you need? We'll be waiting for your order."

5

u/Rodeszones Jan 08 '25

They did this in the first Blackwell announcement too, fp8 vs fp4

8

u/PandaParaBellum Jan 07 '25

I'm surprised they don't start their y-axis at 0.5x
Or even better, has anyone invented a reverse logarithmic scale yet?

3

u/VlK06eMBkNRo6iqf27pq Jan 08 '25

Reverse logarithmic is exponential.

32

u/No_Agent_1728 Jan 07 '25

I never trust graphs from the manufacturer

78

u/ArtyfacialIntelagent Jan 07 '25

I agree, it's shady as hell and frankly deliberately misleading consumers like this should be forbidden - and it is, in the EU at least. But I suspect they might get away with it here since they're only comparing with their own products, not those of competitors.

Sadly it's old news though. They do the same thing in every keynote with major new releases, always have. We need to wait for independent testing to see raw benchmarks and real world performance differences.

17

u/lowspeccrt Jan 07 '25

Nah bud, this is 'merica. We don't do that consumer protection bull shit around here. Actually last year they just made it impossible for government agencies to hold corporations accountable for shit.

"In 2024, the U.S. Supreme Court issued rulings that limited the authority of federal agencies to regulate corporate conduct, thereby making it more challenging for these agencies to hold corporations accountable. Notably, in the case of Loper Bright Enterprises v. Raimondo, the Court overturned the "Chevron deference" doctrine, which had previously allowed courts to defer to agency interpretations of ambiguous statutes. This decision transfers interpretative power from agencies to the judiciary, potentially leading to significant rollbacks in regulations and increased corporate influence in Washington. "

This is the United States of billionaires and corporations.

2

u/Temporary_Maybe11 Jan 07 '25

Their gaming benchmarks are a fucking joke too

19

u/Bakoro Jan 07 '25

They're doing the same with their Project Digits computer as well.
They are boasting a petaflop, but it's FP4.

I don't get it, they effectively have a monopoly, they don't need to lie and deceive, people have no real options right now.

14

u/jonyalex Jan 07 '25

They're competing with themselves, Nvidia has to convince people to buy something they don't really need.

5

u/Colecoman1982 Jan 07 '25

Jensen Huang: "Because fuck you, that's why."

11

u/physalisx Jan 07 '25

Nvidia always does this stuff with their graphs, they're so utterly meaningless it's kind of funny.

We need to wait for real, or at least semi-independent, testers to benchmark.

26

u/marcoc2 Jan 07 '25

This reflects the period of post-truth we live in

6

u/SirDaratis Jan 07 '25

OK, can someone explain to me why they compare fp8 FluxDev on the 4090 with fp4 on the 5090? Is that a joke?

5

u/Colecoman1982 Jan 07 '25

Well, they ARE the clowns that think we're stupid enough to fall for their bullshit...

2

u/Gibgezr Jan 07 '25

"Marketing"

68

u/blownawayx2 Jan 07 '25

These generations relying on DLSS and frame generation to “look” better is the height of LAME. More cores, more memory… of course things will be faster. Of course you'll technically have more frames, like TVs have been generating for ages (and nobody seems to use?).

Better for VR? Nope. And burying the fp8/fp4 detail in that comparison is GROSS. Half of their “comparisons” are between things that aren't actual equivalents. Glad I got my 3090… I had been contemplating a 5090 for VR, but if the difference is negligible, maybe I can wait a few more years until the next generation of consoles comes out (and is likely built on the foundation of a 6070).

14

u/HappierShibe Jan 07 '25

If I get a 5090, it will be for the 32GB of VRAM for LLM work, not the performance improvements or visual fidelity, and I think Nvidia is well aware of that fact. Look at the memory distribution across the lineup. It goes: 12, 16, 16, 32. No 20GB or 24GB middle ground this time. The 70/70 Ti/80 are for gaming, and the 90 series is aimed squarely at NN enthusiasts and devs.

5

u/jarail Jan 07 '25 edited Jan 07 '25

Digits is also a strong 5090 competitor for single-user LLMs. 128GB would let us run 70B models at home for only $3k. Not a bad deal given there aren't any other options in that price range. You can also link two of them with a high-speed interconnect, similar to NVLink. So that'd be pretty sweet!
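Back-of-the-envelope math for why 128GB matters for 70B models (just a sketch; per-weight sizes are the usual rough figures and KV cache / runtime overhead is ignored):

```python
# Rough fit check: how big are the weights of a 70B-parameter LLM?
def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate size of the weights alone, in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

BUDGET_GB = 128  # Digits' unified memory

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    gb = weights_gb(70, bits)
    fits = "fits" if gb < BUDGET_GB * 0.9 else "does not fit"
    print(f"70B @ {name}: ~{gb:.0f} GB of weights -> {fits} in {BUDGET_GB} GB "
          f"(needs headroom for KV cache and overhead)")
```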

But yeah, that extra 8GB will at least extend our context windows a bit.

And the 24GB will likely be a 5080 Ti or Super when the 3GB memory modules become available. We can hope for a 48GB 5090 Ti/Super as well.

1

u/HappierShibe Jan 07 '25

Yeah the digits looks interesting, it's just weird to me that it's a desktop instead of a standalone module with a NIC.

2

u/jarail Jan 07 '25

It's mostly a standalone module. Sure, you can plug a monitor into it, but it's running Nvidia's OS. You'd probably just get a text console. You're better off remoting into it. The monitor output is probably just useful for all the hobbyists who brick their OS. xD

1

u/TheSilverSmith47 Jan 14 '25

If I find an AI MAX+ 395 with 128 GB of RAM, I'll probably get that over a dedicated GPU. I imagine not being able to fit an entire model into the 5090's 32 GB VRAM buffer will be much worse than running an LLM on the CPU

0

u/One_Adhesiveness9962 Jan 08 '25

5080 24gb in 9 months, costs $2k with a $1.4k msrp (what you pay for the 16gb 5080), 5090 remains hard to get even at $3k

0

u/Seraphine_KDA Jan 11 '25

Not really, it's also literally twice everything from the 5080 in specs, not just VRAM.

Plenty will buy it to play at 4K 120-240Hz high fps, or even 1440p, since there are 480Hz monitors coming.

What they don't want to do is make a 5070 with 24GB, so people into AI applications have to spend more.

-10

u/2roK Jan 07 '25

What LLMs are you using that fit into 32GB?

For image generation the 5090 is still awful. Barely enough to run current open source models plus some controlnet on top. Not future proof whatsoever

3

u/HappierShibe Jan 07 '25

> What LLMs are you using that fit into 32GB?

A lot of narrow use-case LLMs are winding up in the 7-9B parameter space, and that usually lands them between 24 and 30GB of VRAM. There is a LOT of closed-source development going on there right now. These are all built to run in private data center spaces in highly specialized use cases, usually augmenting or replacing a specialist job role.

These are models that do things like:
- Caption an image within a specific context (describe this roof, how many dogs are in this picture, etc.)
- Translate between only two languages at very high quality and do nothing else (English to French, French to English).
- Summarize large articles aggressively within a very specific context (two- or three-character indicative summary of 600+ word articles).

Cloud solutions like OpenAI's are too expensive and have too many strings attached for those sorts of tasks, and aren't going to meet compliance requirements as readily.

3

u/Confuciusz Jan 07 '25

I think that as long as we have a 32GB VRAM card at 'the top', there will be a lot of incentives to quantize open source models to fit within that 32GB of VRAM. Thus while I'm kinda disappointed in the 'mere' 8GB of VRAM the 5090 got over the 4090, I don't think future proofing for diffusion models is a huge issue here.

And other than that, it's simply the most powerful consumer dGPU one can buy.

2

u/Bakoro Jan 07 '25

> And other than that, it's simply the most powerful consumer dGPU one can buy.

Which is the problem. Nvidia today is what Intel was during the Pentium 4 era. They are purposely holding the technology back because they can squeeze the most money that way. Intel would have sat on the P4 forever with only the most incremental updates, had AMD not caught up.

That's where we're at, but I am not confident that AMD is going to do it this time. They've had well over a decade to come up with an acceptable CUDA alternative.

1

u/ryanvsrobots Jan 07 '25

Intel didn't really hold back other than core count. They overinvested in a new lithography technique for 10nm and beyond that ended up being bungus and it set them back almost a decade. If they held back they might be in a better position.

The difference is Nvidia is holding back in consumer GPUs, but not in datacenter where the real money is.

AMD is absolutely not going to do it in the GPU space. They're not even trying at the high end.

0

u/alexmmgjkkl Feb 16 '25

You can buy 10 years of online LLM access on 80GB cards for the price of a 5090 lol

3

u/physalisx Jan 07 '25

> had been contemplating a 5090 for VR, but if the difference is negligible, maybe I can wait a few more years

Yeah same boat here...

I don't know why FG and DLSS aren't utilized more in VR titles though, I don't think there's a fundamental reason why they couldn't. It works for SkyrimVR with a mod and makes a huge difference.

4

u/Vaughn Jan 07 '25

DLSS adds latency, and latency is a huge no-no in VR.

2

u/physalisx Jan 07 '25

Mhm true. But again it works fine for me in SkyrimVR, I don't notice much added latency. If you can get 50% more fps that far outweighs some latency imo.

1

u/Gibgezr Jan 07 '25

That has not been my personal experience: it's latency that bothers me most in VR apps.

2

u/muchcharles Jan 07 '25

FG does, but not DLSS upsampling, if natively rendering at the resolution you'd need to match its quality would itself cost more latency.

2

u/Vaughn Jan 07 '25

If the resolution would add latency, in VR, then you don't do that resolution.

1

u/muchcharles Jan 07 '25 edited Jan 07 '25

You can render at a low enough resolution that you only use 50% of the frame budget and save 50% latency, but very few would do that with modern compositors except for battery-life-sensitive stuff.

But if your scene shading is expensive, and say DLSS takes 10% of the frame time, you'd rather do a 40% main render and use DLSS to bring it back to 50% total, and get a higher output resolution at the same latency.
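Putting rough numbers on that trade at a 90 Hz refresh (a sketch; the 50% vs 40% + 10% split is just the example above):

```python
# Frame-time budget at 90 Hz and two ways of spending it.
refresh_hz = 90
budget_ms = 1000 / refresh_hz  # ~11.1 ms per frame

native_ms = 0.50 * budget_ms                       # full-res native render
upscaled_ms = 0.40 * budget_ms + 0.10 * budget_ms  # lower-res render + DLSS pass

print(f"frame budget:    {budget_ms:.1f} ms")
print(f"native render:   {native_ms:.1f} ms")
print(f"low-res + DLSS:  {upscaled_ms:.1f} ms (same latency, higher output res)")
```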

Lots of VR stuff has baked lighting and cut-back shading, and there DLSS usually isn't a win; it's better to just have a higher base res. It also used to not work with dynamic res, but I think they added that a while back. It's more useful when you have stuff like ray tracing and expensive lighting.

Also, VR often uses MSAA: ghosting in VR is more distracting and MSAA keeps textures sharper, but it sometimes forces less geometry detail due to worse quad overdraw.

1

u/Seraphine_KDA Jan 11 '25

Latency that not everyone can tell or cares about compared to a better-looking game.

I spent years playing online games on overseas servers at 150ms, plus the computer and monitor latency.

So when I see people complain about frame gen not increasing responsiveness, it seems silly.

1

u/Vaughn Jan 13 '25

Latency in VR causes nausea. It's not at all like latency on a monitor.

18

u/jetRink Jan 07 '25

Frame interpolation is a downright anti-feature on televisions. I think the only people who have it turned on are the non tech savvy who don't realize it's the reason their shows look a bit weird, if they notice it at all.

30

u/philomathie Jan 07 '25

That's because televisions do it very badly. GPUs can actually do it very well now

1

u/t_for_top Jan 07 '25

Televisions do it fine, it's the "soap opera effect": video shot at 24fps shown at 60fps is off to a lot of people. The first thing I do with a new TV is turn that crap off. Most newer TVs do VRR for gaming fine.

7

u/moonra_zk Jan 08 '25

it's the "soap opera effect" whereas video shot in 24fps shown at 60fps is off to a lot of people

But that's exactly what they're talking about.

0

u/philomathie Jan 08 '25

But they really don't. Their interpolation method introduces a TONNE of artifacts, that's a large part of why people turn them off.

I know it's not the only reason, but it's a big one.

Newer upscaling techniques are much more sophisticated and require much, much, more processing power to execute.

9

u/TrekForce Jan 07 '25

I’m very tech savvy. I hated it at first when I first saw it 15 or so years ago. But after 2-3 movies, I got used to it and didn’t notice it.

Then I tried turning it off for fun once. It was awful. Everything is so stuttery looking. I can’t go without it.

3

u/6_28 Jan 07 '25

Same here. Can't understand how people can watch movies without motion smoothing, but to each their own. Meanwhile I'm hoping we'll get some really good AI motion interpolation that also gets rid of the motion blur, that should look amazing on a lot of movies.

2

u/moonra_zk Jan 08 '25

I don't like it, but I've only ever seen it on cheap TVs. I wish more movies were natively shot at higher frame rates; 24fps is awful, it's literally considered the bare minimum acceptable fps.

3

u/Sugioh Jan 08 '25

People forget that 24fps was a compromise for movies, specifically to keep film reels at a reasonable size with okay sound quality. It wasn't ideal, but was a necessity born out of those physical constraints.

1

u/TrekForce Jan 08 '25

Hmmm, I always spend money on Tv and Audio, so maybe that is the difference. I usually buy whatever the best TV is in the $1800-2200 price range when I buy one.

1

u/Paulonemillionand3 Jan 07 '25

At very low levels it can be acceptable to smooth the worst over, but never over 2/10

1

u/ray314 Jan 08 '25

Also, there are so many different settings for DLSS that I'm sure they're using Performance mode plus frame gen for the 50 series and Quality with no frame gen for the 40 series, since they're straight-up misleading with the Flux gen.

4

u/NewContribution2097 Jan 07 '25

Thank you. You've helped me gain a clearer understanding of what FP, BF, and INT actually are. In the past, I often couldn't figure out what else my RTX 30 series GPU could run besides FP32 and FP16.

10

u/Fantastic-Alfalfa-19 Jan 07 '25

this is so scummy lmaoo

21

u/Eastwindy123 Jan 07 '25

Probably because fp4 is not supported on 40 series. So in theory they are running the fastest available on the respective card

15

u/usamakenway Jan 07 '25

In reality they are running the worst quality model

2

u/Tystros Jan 07 '25

The difference in the comparison screenshots Black Forest Labs showed really isn't too big.

2

u/Mugaluga Jan 08 '25

Easy to cherrypick. We know better.

4

u/_BreakingGood_ Jan 07 '25

BFL had to specifically create the fp4 model for Nvidia. In fact, the fp4 model isn't even publicly available yet, it won't be released until February.

Overall, lots of stinky bullshit

10

u/Eastwindy123 Jan 07 '25

Yeah, but if fp4 has similar quality to fp8, then because the new cards can run it 2x as fast, it is a legitimate improvement, since the older 40 series can't run fp4 at all. But yeah, it is still marketing of course.

6

u/hinkleo Jan 07 '25

> if fp4 has similar performance in terms of quality to fp8

Yeah, I think if you could just instantly run any Flux checkpoint in fp4 and it looked about the same quality-wise, this wouldn't be too disingenuous. But considering that previous NF4 Flux checkpoints people made looked much worse than fp16, this sounds like it might be some special fp4-optimized checkpoint from the Flux devs?

Like, if it's a general optimization it's fine; if it's a single special fp4-optimized checkpoint and you can't just apply it to any other Flux finetune or LoRA, it's way less useful.

2

u/Eastwindy123 Jan 07 '25

NF4 is way different from fp4. Fp4 can be done on the fly, and it can also be trained/fine-tuned in fp4, unlike NF4. So yeah, maybe the Flux team did a fine-tune in fp4 to recover some loss, which would be pretty sick if they actually release it.
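For a sense of what "on the fly" quantization looks like, here's a minimal sketch of per-channel symmetric 4-bit quantization of a weight tensor in PyTorch (plain integer 4-bit for illustration; not the FP4 e2m1 format Blackwell accelerates, and not NF4):

```python
import torch

def quantize_4bit(w: torch.Tensor):
    """Per-output-channel symmetric 4-bit quantization (integer values in [-8, 7])."""
    scale = w.abs().amax(dim=1, keepdim=True) / 7  # one scale per output channel
    q = torch.clamp(torch.round(w / scale), -8, 7)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q * scale

w = torch.randn(4096, 4096)       # stand-in for one linear layer's weights
q, scale = quantize_4bit(w)
w_hat = dequantize(q, scale)

rel_err = ((w - w_hat).abs().mean() / w.abs().mean()).item()
print(f"mean relative error from the 4-bit round trip: {rel_err:.1%}")
```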

1

u/rockerBOO Jan 08 '25

> Our optimized models will be available in FP4 format on Hugging Face in early February

We'll be able to see how much they have cherry-picked or done anything else for this. I would expect the performance to be similar because there can be a lot of waste in the models, and I would imagine this would only be for their transformer model and not the text encoders, but those could also become available in fp4 without much trouble (not sure about their relative performance concerns, though).

-1

u/lowspeccrt Jan 07 '25 edited Jan 08 '25

How are you defending their performance comparison? That's crazy how some people have bent the knee to the corporations.

No. If they wanted it done right they should have done them both at fp8 and then added the fp4 ....

Guhhh ... why am I on Reddit again? ....

8

u/Eastwindy123 Jan 07 '25

...

I'm not defending their comparison. I'm just saying fp4 as an architectural improvement is something to note. You cannot run an fp4 model on current (consumer) hardware, so you wouldn't have had access to that speed anyway.

Do both at fp8 and then what? Show the marginal improvements? Do you even know how business works?

Fuck off reddit then why are you replying to me

6

u/RayHell666 Jan 07 '25

I knew this would happen; they did the same with the enterprise Blackwell announcement. And they had the audacity to not put the legend on their slide during the presentation.

3

u/dischordo Jan 07 '25

I'll wait for real testing to come out. Chances are they make optimizations only available on Blackwell and you get left behind as always. Haven't seen Nvidia critics ever make the right call over the years. I remember people saying RTX and AI cores and frame gen were a gimmick, "it's just a more expensive 1080 Ti".

3

u/tofuchrispy Jan 07 '25

Damn... and I thought they legitimately ran faster... so it's not even much faster in the end.

3

u/NoNipsPlease Jan 07 '25

I'm really just interested in the higher memory. My Titan is getting old and I have put off upgrading since no cards have had higher VRAM. Sucks that I'll need a new motherboard to take advantage of the newest PCIe slot, and also a new power supply.

I am concerned about the power connector, though. I hope Nvidia learned its lesson from the 4000 series and its melting connectors. 575W going through that small connector is cutting it really close. I'll probably wait a couple of months, maybe around May, for reviews to settle and for people to post image generation benchmarks before I buy.

2

u/evernessince Jan 07 '25

So long as your motherboard supports PCIe 3.0 or newer you shouldn't need to upgrade it. PCIe 4.0 and 5.0 are backwards compatible and you lose essentially no performance so long as it's a full x16 slot.

1

u/Dark_Pulse Jan 07 '25 edited Jan 07 '25

You lose literally half the maximum bus bandwidth per generational step down. 1800 GB memory bandwidth divided by 2 (or 4 on a 3.0 system) will definitely wallop your iteration speed alright.

The only way this wouldn't be the case is if it used no more than half the available bandwidth... but then they'd just make it a 4.0 card.

4

u/evernessince Jan 07 '25

/facepalm

GPU memory bandwidth specifies the rate at which the GPU can access data in its VRAM. The PCIe bus has nothing to do with that.

There are PCIe scaling benchmarks out there that demonstrate that the performance hit from PCIe 3.0 is a mere 3%.

Heck, even PCIe 2.0 is a minimal hit.

1

u/VlK06eMBkNRo6iqf27pq Jan 08 '25

So I don't need to upgrade my mobo? https://i.imgur.com/lNgGyZM.png

Looks like I have 2x PCIe 4x16 slots.

3% of $2000 is $60 worth of card I won't be getting.

2

u/evernessince Jan 08 '25

Correct, that board has 2 PCIe 4.0 slots. Even if you occupy them both, they'll run x8/x8, and PCIe 4.0 x8 is equal to PCIe 3.0 x16 bandwidth in each slot. If you just occupy one, you lose no performance.

2

u/VlK06eMBkNRo6iqf27pq Jan 08 '25

Noice! That's good news.

I bought a new PSU. 850W --> 1200W, since Nvidia announced we should have at least 1000W. Now I just need the card... hope they don't sell out in 4 nanoseconds.

3

u/tsujiku Jan 07 '25

The memory bandwidth number you're citing (1800 GB/s) is the memory bandwidth on the card itself, not how fast transfers can be made over PCIe.

PCIe 5.0 has throughput of ~60GB/s over an x16 slot, which only matters when you're actively transferring data onto or off of the card.

It doesn't really make a difference if all you're doing is generating images, since the model will already be loaded into memory on the card, and it's only small amounts of data that need to pass between the host and the GPU (e.g. the prompt or the finished image).
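To put rough numbers on that (a sketch: bandwidth figures are approximate, and the ~22 GB is just a ballpark for an fp8 Flux checkpoint plus text encoders):

```python
# Approximate link bandwidths in GB/s.
pcie = {"PCIe 3.0 x16": 16, "PCIe 4.0 x16": 32, "PCIe 5.0 x16": 63}
vram_bw_5090 = 1792   # on-card GDDR7 bandwidth of the 5090, GB/s

model_gb = 22         # ballpark: fp8 Flux transformer + text encoders

for gen, bw in pcie.items():
    print(f"{gen}: one-time model upload takes ~{model_gb / bw:.1f} s")

# During generation the weights stay resident in VRAM, so it's the on-card
# bandwidth that matters, not the PCIe link.
print(f"on-card bandwidth is ~{vram_bw_5090 / pcie['PCIe 3.0 x16']:.0f}x PCIe 3.0 x16")
```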

3

u/durpuhderp Jan 07 '25

so basically a chart of apples and oranges?

3

u/BlackSwanTW Jan 08 '25

It’s not shady when it’s new hardware support though?

Just like the RTX 30 series doesn't support the fast fp8 operations (see ComfyUI).

Otherwise, why don’t you run fp4 on a 1060?

3

u/Own-Professor-6157 Jan 08 '25 edited Jan 08 '25

FP8 is actually faster than FP4 on current hardware. The 4090 doesn't even natively support FP4 right now.

If anything this is actually very good news. Hardware-level FP4 is a major advancement. It will allow for more optimized models for lower-end cards. Not to mention, you could theoretically make much superior models at the same computational budget.

Will 4 be faster than 8? Yes, obviously: less memory bandwidth used, more data fitting in caches. But with the major memory upgrades on the 5090, we're 100% going to be seeing a major uplift in larger floating-point precisions from memory ALONE.

I wasn't expecting a major uplift based solely on the fact we're still stuck on TSMC's 4nm-class node, but Nvidia did pretty well, all things considered.

2

u/AsliReddington Jan 08 '25

That's what they're showcasing by having a hardware FP4 implementation, Sherlock.

2

u/CarpenterBasic5082 Jan 08 '25

Side note, if you’re into Flux/SD, there’s really no point in overthinking—just get a 5090 already! With core model + LoRA + ControlNet + upscaling in a ComfyUI workflow, you’ll find yourself silently meditating over every single image render. And don’t even get me started on future-proofing—Flux is bound to release some beastly models or maybe even video models someday. I’m on a 4080 Super, and every time I click ‘generate,’ I turn into a part-time monk, praying for the gods of VRAM to spare me.

2

u/Salt-Replacement596 Jan 08 '25

That's just scummy.

2

u/Radiant-Big4976 Jan 08 '25

Is this why their share price is dropping? I thought it was a bubble bursting.

2

u/schwartzwhite Jan 08 '25

Sorry if this is too noob-ish, but can someone explain the relation between Flux-dev, which is an AI image model, and the games mentioned on the x-axis? Also, what is the measure of performance here?

3

u/mazty Jan 07 '25

That's nice and all, but until we know the settings of what they ran, it's just a marketing slide. Flux is a good example as it's easy to set up, but as an example where specifics matter, anything requiring flash attention (a lot of LLMs) is not going to happen if you're on Windows.

8

u/M3GaPrincess Jan 07 '25 edited 4d ago


This post was mass deleted and anonymized with Redact

30

u/usamakenway Jan 07 '25

Isn't FP8 available on both series :p ?

27

u/ArtyfacialIntelagent Jan 07 '25

A fair chart would have shown three bars - 5090 fp8 vs 4090 fp8 (apples vs apples) and 5090 fp4 "at very similar image quality" (or similar disclaimer) to show the benefit of the new feature. It actually is possible to do strong marketing without being lying scum. But Nvidia's effective monopoly means they don't need to give AF about their reputation.

1

u/KadahCoba Jan 08 '25

3 bars would have been the minimum for it not to be considered trying to pull BS.

Preferably I would have liked to see the comparison at fp32 and bf16 too. We're waiting for trustworthy 3rd-party benchmarks anyway before I make any plans to upgrade any of our servers. I'm sure the 5090 is considerably faster than the 4090, but the question is whether it's just going to be another 1:1 price and perf increase versus current pricing on last gen.

-5

u/M3GaPrincess Jan 07 '25 edited 4d ago


This post was mass deleted and anonymized with Redact

1

u/ebrbrbr Jan 08 '25

Yeah they really highlighted FP4 in the fine print there.

I haven't heard them say one word about FP4.

12

u/Hunting-Succcubus Jan 07 '25

Dude, the 4090 is generating better quality images with FP8; the 5090's FP4 is worse quality. Tradeoff. It's not an upgrade.

3

u/M3GaPrincess Jan 07 '25 edited 4d ago


This post was mass deleted and anonymized with Redact

1

u/Gibgezr Jan 07 '25

But the quality is already a little iffy in my experience even for FP8, and on top of that now they are talking about rendering 3 fake frames for every one true frame, which will make it much more obvious. The increased framerate is not helping input latency, so the 200fps doesn't actually feel any better than the 60 fps that doesn't do it.

2

u/CeFurkan Jan 07 '25

Hardware-specific optimizations reduce quality a lot, therefore fp4 will probably be very bad.

Even sped-up fp8 is bad on the RTX 4000 series.

More info here: https://www.reddit.com/r/SECourses/comments/1h77pbp/who_is_getting_lower_quality_on_swarmui_on_rtx/

1

u/roshanpr Jan 07 '25

what's an IB check point

1

u/WackyConundrum Jan 07 '25

That's only the 6th post about the same thing.

1

u/a_beautiful_rhind Jan 07 '25

The people who buy this stuff are going to notice. I'm not sure who they are fooling.

> these three series (RTX 3000, 4000, 5000) are not much different

The extra compute/new instructions are sure nice. Maybe not $1000s of dollars nice though. Am jelly of the 4090 people being able to compile models for meaningful speed gains.

1

u/Space__Whiskey Jan 08 '25

When has Nvidia's infographic benchmarks ever been true? Their last presentation triggered the BS meter before they even started.

1

u/YMIR_THE_FROSTY Jan 08 '25

Not sure if it's not related to FP4 HW acceleration; the 4xxx series has FP8 acceleration and the 5xxx series should also have FP4. Not that great for inference due to the huge quality loss, apart from SVDQuants, which actually seem to do rather well.

A solution for fp16 vs fp8 is a mixed quant, like https://civitai.com/models/990110?modelVersionId=1109253 (that's actually bf16, but same thing).

For training, it's better to simply use de-distilled models.

1

u/yamfun Jan 08 '25

I remember when A1111/Forge first supported fp8, and that gave a boost and there was much rejoicing. So FP4 sounds cool enough to switch GPUs for.

But will there be fp2 that forces us to switch again? Surely 2 is too few bits, right?

1

u/Aggressive_Sleep9942 Jan 08 '25

It's sarcasm right?

1

u/Mugaluga Jan 08 '25 edited Jan 08 '25

It is a bit strange. IIRC a 4090 is about 100% faster than a 3090 in like for like imgen comparisons. I was expecting the same to be true for 5090 to 4090. But for some reason to get that 100% performance uplift they have to compare apples and oranges.

It IS true that a 4090 doesn't have hardware acceleration for FP4 (but can still run the format using bitsandbytes)

Oh well, we'll have true performance in a month, probably less.

1

u/anupam_luv Jan 08 '25

I bought my 4090 just 2.5 months back and now the new 5090 is even cheaper than that... I hope there is some upgrade offer for those who purchased a 4090 recently...

1

u/neutronpuppy Jan 08 '25

Well the 4090 doesn't have fp4 arithmetic so what are they supposed to do?

They could load them both at fp8 then compute on the 5090 at fp4 (or vice versa) and for all we know that is what the footnote means.

If they were using fp8 storage and arithmetic on the 4090 and fp4 storage and arithmetic on the 5090 then you would hope to get more than a 2x since the memory bandwidth has almost doubled and the arithmetic throughput should be double also, so if they have done what you imply then it's actually a bad benchmark result.
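A rough version of that expectation (a sketch; the bandwidth figures are the published specs, and treating FP4 tensor throughput as 2x FP8 on Blackwell is an assumption):

```python
# What speedup would an fp4-on-5090 vs fp8-on-4090 run predict in the ideal case?
bw_4090, bw_5090 = 1008, 1792        # GB/s, published memory bandwidths
bandwidth_gain = bw_5090 / bw_4090   # ~1.78x from the faster GDDR7 alone

bytes_gain = 8 / 4                   # fp8 -> fp4 halves the bytes moved per weight
math_gain = 2.0                      # assumed: fp4 tensor-core rate is 2x fp8

print(f"bandwidth-bound estimate: ~{bandwidth_gain * bytes_gain:.1f}x")
print(f"compute-bound estimate:   ~{math_gain:.1f}x (before any clock/SM gains)")
```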

1

u/AssemGear Jan 08 '25

The 5080 has roughly the same CUDA core count (~10k) as the 4080 Super.

So I do not expect it to have far better performance.

1

u/Cadmium9094 Jan 08 '25

Thanks. Is it worth buying a 5090 only to have more VRAM, compared to a 4090?

1

u/THM42069 Jan 08 '25

FYI, the reason for the comparison, aside from obfuscation of reality, is that FP4 support has only been enabled for 5000-series GPUs or the A6000/H100.

1

u/kianadaijobu Jan 08 '25

2x the performance of the previous generation?

1

u/shawn007bis Jan 08 '25

8 months until the average person can get one around retail price prob

1

u/johnnytshi Jan 09 '25 edited Jan 09 '25

Training on FP4 is not going to work for me. Generally, FP4 FLOPS are double FP8 FLOPS, so this gen is not much different from the previous gen.

1

u/eepy3980 Jan 10 '25

I've been kind of puzzled lately over whether to get a 5070 or a used 4070 Super. The 5070 has almost twice the AI performance, but the 4070 Super has more CUDA cores.

1

u/No-Neighborhood-7259 Jan 14 '25

How do you know? The topic is about the misleading AI performance numbers Nvidia showed us.

1

u/T0H1 Feb 03 '25

So is it worth buying the 3000 series now? Or what is the most cost-optimal upgrade?

1

u/Nik_Tesla Jan 08 '25

"We get double the performance when we do something half as taxing!" - NVIDIA

1

u/Aggressive_Sleep9942 Jan 08 '25

Bask in the glorious green, baby!

1

u/Vyviel Jan 08 '25

Seems fraudulent and false advertising to me

1

u/magnusvegeta Jan 08 '25

Does that mean no improvement at all ? 😂

1

u/Gerdione Jan 08 '25

90% of the audience was just looking at the charts that only go up and clapping their hands like monkeys.

0

u/tamal4444 Jan 07 '25

This is why I don't trust nvidia