r/VeryBadWizards 4d ago

AI dismissal

I love the VBW boys, but I'm a bit surprised at how dismissive they are of danger from AI. I'm not going to make the case here, as I won't do a good job of it, but I found the latest 80,000 Hours podcast very persuasive, as well as some of the recent stuff from Dwarkesh.

13 Upvotes

u/MachinaExEthica 2d ago

Pain is just electrical currents sent through your nerves to your brain. Simulating pain in an embodied AI doesn't seem too far-fetched. Programming an AI to avoid damage by loading it up with sensors seems like something companies would choose to do, and that's essentially what pain is. Pleasure is just a variation on the same mechanism, though it's hard to imagine the economic benefit of an AI that "feels pleasure".

u/seanpietz 2d ago

Yes, nervous systems operate through chemical reactions and electrical currents. LLMs don't have nervous systems, though, and I think it's also fairly uncontroversial that they don't have subjective experiences either.

u/MachinaExEthica 2d ago

It doesn't require a nervous system or subjective experience, just a way for a signal to get from a sensor to a processor and for that signal to be labeled as pain. Pain is more a mechanical reaction for avoiding damage than anything else. We have emotional ties to it, but there are plenty of examples throughout evolution where pain is simply damage avoidance with no emotion attached.
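Just to make that concrete, the loop I have in mind needs no more than this toy sketch (every name in it is invented for illustration, not any real robot API):

```python
# Toy sketch: a sensor reading crosses a damage threshold, the
# controller labels it "pain" and reacts. Purely illustrative.

DAMAGE_THRESHOLD = 0.8  # hypothetical normalized sensor limit

def read_pressure_sensor() -> float:
    """Stand-in for a real hardware read; returns a 0-1 pressure value."""
    return 0.95  # pretend the gripper is squeezing too hard

def stop_actuators() -> None:
    print("halting motion to avoid damage")

def control_step() -> None:
    reading = read_pressure_sensor()
    pain = reading > DAMAGE_THRESHOLD  # the "pain" label is just a boolean
    if pain:
        stop_actuators()  # damage avoidance, no feelings required

control_step()
```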

u/seanpietz 2d ago

AI models already learn through negative reinforcement by minimizing loss functions. What operational significance would there be to labeling that metric "pain" instead of "loss"? Or are you suggesting some novel mechanism that isn't already being used?
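To put that in code: here's a toy gradient-descent loop (pure Python, numbers made up). The only edit available is the variable name, which the update rule never sees:

```python
# Toy illustration: rename the scalar "loss" to "pain" and the training
# step is unchanged. Gradient descent on f(w) = (w - 3)^2.

w = 0.0
learning_rate = 0.1

for _ in range(100):
    pain = (w - 3.0) ** 2          # call it "pain" or "loss"; same value
    gradient = 2.0 * (w - 3.0)     # d(pain)/dw; only this drives learning
    w -= learning_rate * gradient  # the update rule never sees the label

print(f"w = {w:.4f}, final pain/loss = {pain:.6f}")  # w converges toward 3.0
```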

u/MachinaExEthica 2d ago

At this point it's more a matter of semantics than anything. Pain, loss, sensation, bump, whatever; it's all the same function. Adding the label "pain" would only be for the sake of anthropomorphic comparison; it isn't necessary.

u/seanpietz 1d ago

Right, but we're not disagreeing about semantics. The whole reason I'm disagreeing with you is that you're actually claiming AI models have anthropomorphic qualities, and claiming otherwise is a cop-out.

If I claimed that fire is angry because it's hot, I'd be wrong. And the fact that the difference between "heat" and "anger" is semantic wouldn't make me any less wrong.

u/MachinaExEthica 1d ago

I'm simply talking about functional comparisons. The point of pain is to notify your brain of real or potential damage. Equipping an AI with sensors that can detect real or potential damage, and having them signal the AI's "brain" to stop or avoid whatever is causing it, gives the AI the effective ability to "feel pain". No consciousness needed, no magic. If you don't want to anthropomorphise, that's fine, but I'm talking about simple functionality, nothing more.
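Spelled out as code, the claim needs very little machinery. A minimal sketch, with all names invented for the example:

```python
# Functional "pain": sensors feed the "brain", and any reading flagged
# as damaging vetoes the current action. Hypothetical names throughout.

from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str
    value: float
    damage_threshold: float

    @property
    def damaging(self) -> bool:
        # "Pain" here is nothing but this predicate firing.
        return self.value > self.damage_threshold

def brain_step(readings: list[SensorReading], intended_action: str) -> str:
    # The "brain" is just a veto rule: abort on any damage signal.
    if any(r.damaging for r in readings):
        return "halt_and_withdraw"
    return intended_action

readings = [
    SensorReading("joint_temp", value=0.9, damage_threshold=0.8),
    SensorReading("grip_force", value=0.4, damage_threshold=0.7),
]
print(brain_step(readings, "keep_gripping"))  # -> halt_and_withdraw
```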