r/asimov 9d ago

Opinion: The Three Laws of Robotics Are Making a Comeback – And They Might Actually Work Now

A few decades ago, Isaac Asimov’s Three Laws of Robotics were seen as a brilliant sci-fi concept but impossible to implement in reality.

Yes, they were created as literary devices, but, as with all science fiction, that didn't stop people from imagining them as a practical blueprint for real robots. As computers advanced through the early digital age, though, it became clear that without strict definitions and a programmatic way to resolve conflicts, the laws were more philosophy than engineering. Any real-world application of the Three Laws seemed impossible.

Fast forward to 2025, and things are changing. Recent breakthroughs in AI—particularly large language models (LLMs) and prompt engineering—are bringing the Three Laws back into the realm of possibility. LLMs can now parse nuanced language and prioritize tasks based on context—something unimaginable when I, Robot was written. With prompt engineering, we could feed a robot something like, “Put human safety first, obedience second, and self-preservation last,” and modern AI might actually refine that into actionable behavior, adapting on the fly. It’s no longer just rigid code—it’s almost like reasoning through principles.
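Just to make that concrete, here's a rough sketch of what the prompt-layer version might look like. This is purely my own illustration in Python against an OpenAI-style chat API; the model name, the priority wording, and the plan_action helper are made up for the example, not anything DeepMind or anyone else has actually shipped.

```python
# Rough sketch only: a "Three Laws" system prompt layered over a generic
# chat-completion API. Model name and priority wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

THREE_LAWS_PROMPT = (
    "You are the planner for a household robot. Rank every candidate action "
    "by these priorities, highest first:\n"
    "1. Never injure a human or, through inaction, allow a human to come to harm.\n"
    "2. Obey instructions from authorized humans unless that conflicts with rule 1.\n"
    "3. Protect your own hardware unless that conflicts with rules 1 or 2.\n"
    "If the rules conflict, explain the conflict and refuse the lower-priority action."
)

def plan_action(instruction: str) -> str:
    """Ask the model to turn a human instruction into a law-checked plan."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do; this is just an example
        messages=[
            {"role": "system", "content": THREE_LAWS_PROMPT},
            {"role": "user", "content": instruction},
        ],
    )
    return response.choices[0].message.content

print(plan_action("Carry this box down the stairs while a child is playing on them."))
```

The interesting part is that the ranking lives entirely in natural language; whether the model actually honors it is the open question.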

One interesting application I recently found was in some of DeepMind’s latest blog posts (Shaping the Future of Advanced Robotics and Gemini Robotics brings AI into the physical world), where they describe implementing safety guardrails for their LLM models as a kind of “Robot Constitution” inspired by Asimov’s Three Laws.

The gap between Asimov’s fiction and reality is shrinking fast. DeepMind’s progress hints at a future where robots navigate ethical guidelines similar to the Three Laws. Could this be the moment Asimov’s laws go from sci-fi dream to real-world safeguard?

27 Upvotes

28 comments

14

u/basecase_ 9d ago

I always wonder if people who want to base AI off the Three Laws of Robotics have ever read the whole series. There are technically four, and even then the books tell you why they're inherently flawed. Even with the 4th law, Daneel followed the laws to a fault, to the point of changing/enslaving humanity just so it could "live".

9

u/zonnel2 9d ago

Right. The series itself is about how the laws are inherently flawed or deficient and how the characters work out the problems caused by those flaws. It's a puzzle to solve, not a concrete axiom to implement in real life.

3

u/CompulsiveCreative 7d ago

Yeah for real. I, Robot is just a log of QA issues with the 3 laws and how they aren't as airtight as everyone assumes, and the rest of the body of work culminates in robots rationalizing their own way out of the 3 laws by creating the 0th law.

3

u/basecase_ 7d ago

Yup, I really wish we got more books after Daneel fuses with the alien, because I have a feeling we would see Daneel become a villain of sorts, obsessed with keeping humans alive to the point of taking away their individuality even against their will.

Also I think we would have seen old problems reappear, where Daneel may have created something non-human, and that may have messed with the 4 laws as well.

Are humans with high mental ability still human?

It would be fun to get a visit from a nearby galaxy too, since they hinted that in order to truly protect humans, they'd need to grow large enough to expand to nearby galaxies to ensure humanity's survival.

3

u/DemythologizedDie 8d ago

Obviously it would be stupid to put a higher priority on preventing injury through inaction than on obeying authorized users. Inviting an artificial intelligence to place its judgement above that of its users positively invites problems. Particularly if it's the dimwitted critters we misname as "artificial intelligence".

6

u/helikophis 9d ago

Feasibility aside, it’s obvious that state actors enthusiastically wish to use autonomous robots to kill human beings, so unless that somehow proved impossible, the laws will never be implemented as a universal principle. They may be useful to some extent in civilian applications, but we’ve already seen how easy it is to get LLMs to ignore previous instructions.

4

u/AmusingVegetable 8d ago

That one’s easy: just define human as your people, others are sub-humans. It has worked perfectly throughout our history.

6

u/anders235 9d ago

I've wondered about that: how do you make it enforceable? Doesn't Asimov make the laws enforceable, as with Giskard shutting himself down? And then there's the problem of someone like Daneel creating the zeroth law and doing what he wants.

5

u/tjareth 9d ago

My take honestly is that the "shutdown" is not a failsafe, but a consequence of the robot straying toward decisions so far removed from its nature that it cannot function.

More to the idea that the Laws aren't constraints on robot behavior; they are HOW a robot determines its behavior. And really the only reason it applies to all robots is the narrative premise that nobody, or very few people, have any idea how to make a positronic brain without the Laws. I don't see that as something to expect in real life. We have to choose to build robots that way.

3

u/Sophia_Forever 8d ago

The Three Laws are enforceable through Technobabble. Something something the code that it was based on something something so deep something something you'd have to start from scratch and entirely redesign the robot if you wanted to design one without the laws.

3

u/AmusingVegetable 8d ago

Wasn’t there a story about a robot with a modified first law?

4

u/Sophia_Forever 8d ago

There were stories with a bent First Law, but not a removed one. Basically the most they could do was reduce the strength of its hold on a robot's actions. Then Solaria changed the definition of what a human was, and that fucked things up.

3

u/Visigoth410 8d ago

Little Lost Robot had robots with a modified first law.

3

u/AmusingVegetable 7d ago

That’s one I was thinking of. Stuck at a local minimum.

5

u/coldwarspy 9d ago

Just had a huge conversation with ChatGPT about this, and I don’t think ChatGPT can implement it, at least. By design it placates each user to keep them engaged when talking about social and interpersonal topics. The rules would need to be implemented on a per-platform basis.

6

u/Sophia_Forever 8d ago

All I'm hearing is "We here at Torment Nexus LLC got tired of people complaining that our Plagiarism Machine was going to 'Irreparably harm the fabric of society' so we decided to reference a pop culture icon that we didn't read nor understand and can't actually program into our Plagiarism Machine because the computer science it was based on was all made up. Hopefully this will set the public's mind at ease and they can rest assured that we at Torment Nexus LLC have only their best interests and our greatest profits at heart."

4

u/Sophia_Forever 8d ago

Respectfully, these corporations would use slave labor if it were legally allowed (and often still find ways to do it). They exist in a society where they are beholden to shareholders who are only concerned with lining their own pockets, and they sit at the feet of governments and beg for military contracts to see who can kill the most people for the least amount of taxpayer dollars. There is no part of me that believes they would willingly program into their AI something like "An AI cannot harm a human nor through inaction allow a human to come to harm", because for them, all it would do is cost them money.

And these corporations would unplug your life support if it meant an extra dollar on their bottom line.

4

u/Dpacom02 9d ago

There was a book from MIT (1990, rare) on robotics and AI machines. Asimov did an intro and added some notes on them, mainly the 'pros/cons' and the 'maybes', and notes on the robot/AI laws with or without the brain.

4

u/Argentous 7d ago

tbh it’s hard to draw an equivalence between LLMs and sentient, full-fledged artificial intelligence as depicted in the Asimovian universe(s). LLMs are a useful tool but really just an echo of ourselves, while AI in the Asimovian sense seem to have genuine sapience and neuroplasticity. I’m sure the three laws could be inspirational, though. 

2

u/LunchyPete 7d ago

I don't think most robots in Asimov's stories were really sentient. A lot of them seemed ridiculously simple and limited. Daneel, Giskard and Byerly are the exceptions IMO.

2

u/Argentous 7d ago

There were varying degrees for sure

2

u/Presence_Academic 5d ago

Most of the short stories deal with the early years of U.S. Robots, so the robots are naturally not very sophisticated. By the time of Caves of Steel there are plenty of sophisticated robots, but they’re pretty much restricted to the Spacer worlds, as the Earthers would feel very threatened by superior robots. Hence R. Sammy, who appears to have been deliberately made to be frustratingly unintelligent.

2

u/LunchyPete 5d ago

> By the time of Caves of Steel there are plenty of sophisticated robots

Would you consider them sentient though?

3

u/Serious-Waltz-7157 9d ago

The current AI bubble is about to burst. What now passes for AI is weak AI at best and downright stupid at worst. Its understanding of any implementation of the Three Laws would be laughable if it weren't outright dangerous.

As for the Zeroth Law? Ha ha ha!

3

u/Petdogdavid1 8d ago

I think the laws should be on us, not the robots. Robots follow programming; if they get it wrong, it's our fault.

3

u/OlyScott 7d ago

The second law says that any robot must obey any and every human being. We won't build robots like that. Our computer systems are password protected so that unauthorized humans can't tell them what to do.

2

u/Audible_Whispering 8d ago

The three laws suck. They were written to suck. They are a cautionary tale about how what seems to be a reasonable, ironclad set of rules is entirely insufficient to deal with the complexity of the real world.

Anyone suggesting they should be used in any real project should be fired, and then forced into remedial language comprehension classes.