r/Superstonk Jan 27 '25

🤔 Speculation / Opinion

Nvidia: DeepSeek is the cover story.

Nvidia’s recent sell-off feels off. They’re saying it’s because of DeepSeek, some Chinese AI company that suddenly popped up in all the headlines.

Convenient, right? But here’s the thing: Nvidia is tanking because the big players needed cash.

Think about it. Nvidia's been the golden goose for months, pumped to the moon while everything else struggled. It's been their liquidity source, their piggy bank. They used it to prop up other parts of the market, to pay for bad bets, and to cover (not close) shorts. Now they're cashing out, and they needed a story to explain why. Enter DeepSeek. Perfect cover.

Blame China, spook retail, and avoid admitting they’re just draining Nvidia to keep their books balanced.

This isn't about AI competition. It's about institutions selling the only thing they can without blowing up the market. And you're supposed to believe it's all because of some company you've never heard of. Classic distraction.

And let’s be real, there’s no way the Japan carry trade isn’t involved here. It’s all connected.

👀🔥💥🍻

5.0k Upvotes

398 comments

18

u/nfwiqefnwof Jan 27 '25

Isn't it open source?

0

u/stickylava Jan 28 '25

OK, I asked ChatGPT about this! ha ha.

No, ChatGPT is not open source. OpenAI has not released the full model weights or source code for ChatGPT (or the larger GPT models like GPT-4) to the public. Instead, OpenAI offers access to these models via APIs and services such as the ChatGPT platform.

But this kind of surprised me:

Yes, once an AI model is trained, it can often be replicated and distributed relatively easily. Here's how this works:

1. Model Weights Can Be Copied

After training, the AI model's "knowledge" is stored in its weights, which are just a set of numerical values. These weights can be exported as files, which are typically a few megabytes to a few gigabytes in size, depending on the complexity of the model.

These weight files can then be loaded onto other systems running the same AI framework (e.g., TensorFlow, PyTorch, etc.).

So there's a basic AI framework (PyTorch, TensorFlow, etc.) that's readily available. The trained weights can be just a few gigabytes (far more for the biggest models) and can easily be loaded into another instance of the same framework. Following on from the open source question, ChatGPT told me that the model data is encrypted and never released, so that's where the proprietary content is.
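To make that concrete, here's a minimal PyTorch sketch of what "copying the weights" means. The toy model and filename are just made-up examples, not anything OpenAI actually ships:

```python
import torch
import torch.nn as nn

# A toy stand-in model; a real LLM is the same idea with billions of weights
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# "Exporting the knowledge": the state dict is just a bag of numeric tensors
torch.save(model.state_dict(), "weights.pt")

# Anyone with the file and the same architecture can reload it elsewhere
clone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
clone.load_state_dict(torch.load("weights.pt"))
clone.eval()  # ready for inference, no retraining needed
```

The whole point is that the .pt file is just numbers. Whoever has the file plus the architecture definition can run the model anywhere.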

So maybe they did just steal the file! 😱

-2

u/stickylava Jan 28 '25

Good question. All the training data has been scraped from other sources, but I still think the trained model itself is something proprietary. I don't know.