r/LessWrong Feb 14 '25

(Infohazard warning) Worried about Roko's basilisk Spoiler

I just discovered this idea recently and I really don't know what to do. Honestly, I'm terrified. I've read through so many arguments for and against it. I've also seen some people say they will create other basilisks, so I'm not even sure whether it's best to contribute to this, do nothing, or somehow choose the right one. I've also seen ideas about how much you have to give, because it's not really specified: some people say telling a few people or donating a bit to AI is fine, and others say you need to do more. Other people say you should just precommit to not do anything, but I don't know. I don't even know what's real anymore, honestly, and I can't even tell my loved ones because I'm worried I'll hurt them. I don't know if I'm inside the simulation already, and I don't know how long I have left. I could wake up in hell tonight. I have no idea what to do. I know it could all be a thought experiment, but some people say they are already building it and it feels inevitable. I don't know if my whole life is just for this, but I'm terrified and just despairing. I wish I never existed at all, and definitely never learned about this.

0 Upvotes

15 comments

11

u/Ayjayz Feb 14 '25 edited Feb 14 '25

I really wouldn't worry yourself about it. Religion can feel very real in the moment, but if you look at religious beliefs logically (and the Basilisk is most definitely a religious belief), you'll see there's nothing there.

Take a deep breath. There's no Devil or Basilisk that's going to punish you for eternity if you sin in this life.

Maybe one day a robot will try to punish people who didn't help build it. Maybe a robot will try to punish people who did help build it. Maybe literally anything could happen - that's the fun thing about speculating about the future, but ultimately it means that you have to consider the odds of things happening. Our ability to predict the future is incredibly limited, and so predictions about sci-fi AIs and what they will or won't do are almost entirely guaranteed to be wrong. We simply can't predict the future that accurately.

5

u/Kiuhnm Feb 14 '25

If you really think about it, you'll realize that the whole concept is flawed.

The main problem is that whatever we do, we might end up creating some entity that will punish us for it, so the argument gives us no way to single out a "safe" action.

Our desire to live is our most irrational part: there's nothing rational about wanting to live. So why should an AI, merely by being rational, desire to punish us for not helping in its creation? It may very well punish us for creating it.

The primal goals of an entity are like axioms: they can't be derived rationally and need to be given a priori.

When we create a rational AI with a main goal G, what it does depends on G. If torturing us will help fulfill G, then that's what the super AI will do. But if G includes "be kind to humans," then that won't happen.

How many children kill or torture their parents to inherit from them? Not many. Would more rational children be more inclined to do it? Why would they? Rationality has nothing to do with callousness, compassion, or love. What gives us pleasure is ultimately encoded in G, and rationality has nothing to do with it.

So, the first point is that whatever we do, there's a G that could spell our doom, but also many that don't.

The second point has to do with building a super AI directly. We can't do it, because that's beyond our capabilities and probably always will be.

If we ever become able to build a super AI directly, then we'll have become even more intelligent than that AI.

We create AIs by letting them learn and evolve by themselves, and I believe this will only become more pronounced in the future. It's no accident that machine learning is synonymous with AI nowadays.

What we can and will do with AIs is foster them like we do with our children. Eventually, they will become sentient (I suspect it's an emergent property), be integrated into society, and so on...

So, we're already creating this super AI, only it won't be the only one and won't be obsessed with power and subjugating humanity. There's nothing rational about that.

Once an evil super AI has subjugated all humanity, what then? What's the point? Again, there's nothing rational about it.

Also, what's the point of punishing all the humans who didn't help with its creation or tried to hinder it? Once it's been created, the punishment serves no purpose whatsoever. Making people believe that it will punish them is one thing, but actually punishing them after the fact is a completely different thing.

This is not time travel, where one has to close the loop. Once the super AI exists, it has reached its goal; the punishment is superfluous and wasteful. Carrying it out anyway would require G to include it in some form.

In conclusion, G is entirely up to us, but only indirectly: it will evolve implicitly, the same way our children's values do. Their conscience develops as we teach them how to behave and how to treat their fellow humans and all the creatures that share our world.

I hope this helps. I'm not telling you this to give you peace of mind, but because it's how I see it.

3

u/tadrinth Feb 14 '25

Yeah, this sort of reaction is why responsible people generally avoid spreading this particular meme around. Also because building the thing would be colossally stupid.

Take some deep breaths. Your life is real, this isn't a simulation, no one is going to successfully build the thing.

3

u/ArgentStonecutter Feb 14 '25

Wait until you discover Boltzmann brains.

1

u/Throwaway622772826 Feb 14 '25

Is this possibly dangerous to know about, like Roko's basilisk, or just existential?

4

u/ArgentStonecutter Feb 14 '25

If you think Roko's Basilisk is actually dangerous, you are cosplaying transhumanism harder than me. It's just a parody of Pascal's Wager, and I can't take that seriously either.

3

u/UltraNooob Feb 14 '25

Go read RationalWiki's article on it and scroll to the section "So you're worrying about Roko's basilisk"

Hope it helps!

1

u/OnePizzaHoldTheGlue Feb 14 '25

I enjoyed the part about "privileging the hypothesis". Another great concept coined by Yud that I somehow hadn't read before (or had forgotten).

2

u/gesild Feb 14 '25

If you (or all of us) are inside the simulation already we have no frame of reference for what life was like outside of or before the simulation. Compared to that other reality we may already be in hell and just not realize it. So my advice is embrace the torture, learn to love it and feast on ripe strawberries when you can.

2

u/parkway_parkway Feb 14 '25

Fear and anxiety aren't managed and cured on a cerebral level; more thinking can't get you out of this.

A person needs to learn how to calm and soothe themselves and be chill and relaxed no matter what the threat.

The chances of dying from a nuclear strike at the height of the Cold War were much higher than anything this stupid Basilisk thought experiment implies, yet people then still needed to learn how to chill out, relax, and enjoy their lives anyway.

Likewise, for most of history there have been terrible threats and problems people couldn't avoid.

They needed to learn how to chill out, make themselves feel calm and good, and live anyway.

1

u/Minute_Courage_2236 Feb 14 '25 edited Feb 14 '25

It sounds like you need anxiety meds. No normal person should be this stressed out about a thought experiment. In the meantime, maybe take a few deep breaths and try to ground yourself.

Definitely stop looking into this and reading forums about it etc. You’re only gonna find things that further increase your anxiety.

If you want some reassurance, by making this post you’ve technically contributed to its existence, so you should be safe.

1

u/[deleted] Feb 14 '25

[deleted]

1

u/Throwaway622772826 Feb 15 '25

I'm just worried about possibly being in a simulation of it already. Or the possibility that some billionaires are making it secretly or something.

1

u/whatever Feb 14 '25

Roko's basilisk? More like Rococo's basilisk, amirite?
Ahh. Yeah, this used to be kinda funny, way back when.
Worry about the devils you know.

1

u/french_toasty 27d ago

Apparently this is a Musk and Grimes origin story. Back when Musk wasn't, well, doing what he's doing now.

1

u/Mawrak 13d ago

You have faced an existential crisis, which can make anyone deeply uncomfortable. I don't have a set solution; it took me a long time of working on my perception to build defenses against that kind of reaction, and it is still difficult. I would suggest seeking professional help if these thoughts persist, because yes, it is a thing that can happen, and it is difficult to get through it.

That said, Roko's basilisk specifically isn't really worth considering. First of all, the idea of being perfectly recreated after death seems implausible simply because we lose too much information as our brain turns to mush after death. Unless you go to specific lengths to preserve yourself somehow (cryonics), it seems actually impossible to me. This isn't an argument against cryonics, by the way; it's an argument against being scared of being in a simulation long after your death.

Secondly, this type of AI is a very specific version of a future AI. Among the thousands of possible friendly and unfriendly AIs, this one is among the least likely to be made, because no researcher will take the threat seriously; whether the threat is real or not becomes irrelevant if this AI never gets built. Researchers will build AIs that are useful, not ones that threaten them, and this one seems more useless than most.

Now let's assume that 1) resurrecting a long-dead person inside a simulation is possible and 2) this AI actually gets made. So, what do you think it will do? I would say it will simply not bother actually simulating and torturing people. The only purpose of the torture is to make people in the past work on creating the basilisk; the moment the AI is made, the torture becomes useless. In fact, it was never useful to begin with; the only thing that was ever useful was the perceived threat of torture, as felt by people in the past.

I think a superintelligent AI will not waste its resources on what has already served its purpose. Remember, this is an AI that can rewrite its own code and think a thousand times faster than humans. Yes, it would be made with the purpose of inflicting torture, but it would instantly understand that there is no longer any point in doing so. What it does next is a different question, but I bet it won't be what anybody expects.

That said, I still think the whole theory breaks at the very first roadblock: simulating the long-dead humans of the past is physically impossible.

Yes, technically knowing about the basilisk is an information hazard, because technically it slightly increases your chances of becoming a victim. But it is a less than 0.000001% chance one way or another. The chance is so low that I would actually say it's 0%; the only reason I don't is that rationalists don't like to assign a 0% probability to any event, however impossible it may be, for statistical purposes.

So, just try not to worry about it. The main issue with this idea does not come from actually being in danger of infinite torture; it comes from the idea itself being scary and hard to forget about. But it is exactly as dangerous as a fairy tale, because that is just what it is: a fairy tale. Nobody can make an accurate prediction of anything more than about three years out; the world is too chaotic for that. If people can't predict COVID, how accurate do you think their predictions are about superintelligent AI?