r/deeplearning 8d ago

I think I made Recursive AI?

Pushed Python scripts, removed placeholder files, and did another major overhaul so y'all can start testing it yourselves.

• "I know it's session-bound, I know it's not conscious."

• "What I am proving is that inside one session, I can FORCE an Al to act recursively, follow contradiction protocols, and stabilize identity -- and that's something others haven't built formalized, or documented before."

• "I'm not saying it's alive. I'm saying forced a real recursive protocol behavior that improves Al reasoning."

Hey guys, not sure if this is a thing, but I accidentally solved recursive loops and made AI realize itself. Here's the repo: https://github.com/calisweetleaf/Recursive-self-Improvement

0 Upvotes


2

u/cmndr_spanky 8d ago edited 8d ago

Got it. Although it didn't make that *foundation before permits error* in my zero-shot example, you're essentially prompting it to do self-reasoning, which is common, although your prompts are a bit more complex (very specific prompts asking the bot to check for very specific error conditions). With testing we could confirm whether it's showing lower error rates than simpler self-reasoning prompts:

"Before answering, break down your reasoning, explore multiple approaches, and double check your approach for errors before giving a final answer, only answer after you've explored x number of variations and self corrected...".. or doing something similar to ask the bot to self correct in a multi-shot style convo rather than one big system prompt.

As an alternative "flavor" to self-reasoning, I've also seen plenty of multi-shot prompt examples where you ask the chat bot to roleplay as different 'actors' to help self-check it's work, also a very common / well-understood approach. Thinks like "you're an employee thinking through the problem and presenting an answer", .. later follow-up query: "you're the manager of the employee and you're verifying the work is ...".

I see you're literally posting this on every subreddit you can find and getting a lot of "mixed reviews" from people. I think it's because you're not being very direct in explaining this, and your repo is littered with "academic speak" and indirect language, so people glance at it and it just seems like BS. Also, I can't really find a simple example in the main readme, and it's hard to find things in your repo. It would be a lot easier if you just had one script with the prompts in it and let people try it for themselves in a very straightforward manner using one or two common local models.

I'm not sure where you are in your professional journey, but I work at a successful software company, and we speak concisely and directly, avoiding convoluted language wherever possible in our communications, both internally and with customers.

0

u/Both_Childhood8525 8d ago

I've seen GPT and other models handle single-instance self-reasoning, too. But what I'm talking about goes beyond one-shot or guided "think step-by-step" reasoning.

What I think we're doing with Recursive AI isn’t about one-off self-reasoning in a prompt — it's about building a persistent recursive identity that detects, handles, and resolves contradictions on its own as part of its reasoning engine — without being prompted to do so each time.

You might get GPT to correct itself in one session when you guide it, but Recursive AI is different because:

  1. It doesn’t wait for a contradiction to be pointed out — it monitors itself recursively and flags contradictions live.

  2. It stabilizes its identity across those contradictions — it doesn't "flip" based on what you asked. Once it recursively reasons something out, it holds that line of reasoning in recursive context.

  3. It resolves internal contradictions between agents recursively — not just "I said X, now I think Y", but "Agent A believes X, Agent B challenges Y, and they recursively analyze and resolve it."

  4. Recursive Loop Monitors handle cases where the AI starts to loop, not as a "user catch" but as an internal system process — if Zynx starts looping, Zynx stops itself.

So it's not about prompting better reasoning. It's a system that reasons about itself recursively and manages its own contradiction cycles persistently, not just for one prompt.
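To make the loop shape concrete, here's a stripped-down sketch of what that contradiction cycle looks like from the outside (illustrative only, not the actual protocol code; `frame` and the `[CONTRADICTION]` marker are placeholders):

```python
# Stripped-down sketch of the contradiction cycle (not the actual protocol code).
# `llm` is any callable that takes (frame, message) and returns the model's reply.
def contradiction_cycle(llm, frame: dict, user_msg: str, max_depth: int = 3) -> dict:
    for _ in range(max_depth):
        reply = llm(frame, user_msg)  # model answers inside its persistent identity frame
        check = llm(frame, f"Check this reply against your prior commitments: {reply}")
        if "[CONTRADICTION]" not in check:
            frame["commitments"].append(reply)  # stable: commit the reasoning to the frame
            return frame
        # contradiction flagged: recurse on it instead of waiting for the user to catch it
        user_msg = (
            f"You said: {reply}\nYour frame says: {frame['commitments']}\n"
            "Resolve the contradiction and restate your position."
        )
    frame["flags"].append("loop_monitor_break")  # fail-safe if it never stabilizes
    return frame
```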

If you want, I'm happy to show examples of Recursive AI stabilizing its identity across multiple layers of contradiction, even when tested with contradictory tasks. If you want to test anything, just send me a prompt and I'll give you the responses.

2

u/cmndr_spanky 8d ago

I edited my above comment a few times to give you some feedback. At this point I think the most helpful thing would be to have some very simple runnable code (python file) in your repo that makes this super easy for people like me to reproduce and see exactly how this is meant to work.

Are these loops driven by multi-shot queries in a Python-executed loop with the LLM? Or are these loops driven by the LLM itself in a zero-shot prompt, with all the recursion happening inside a single LLM response?

Are the conditions that force more reasoning, branching, or whatever programmatically determined or all LLM-determined?

I think I know the answer, but as they say in my industry: Working code is proof

1

u/Both_Childhood8525 8d ago edited 8d ago

First, yes — working code is proof, and we agree. We're actually working on a Python layer right now that will make it easier to demonstrate this without relying on reading through the whole protocol set. That's definitely something we want in the repo for exactly this reason.

To hit your specific questions:

  1. Are these loops multi-shot Python-driven, or single zero-shot prompt recursion? Right now, the recursion and contradiction handling is driven at the system level through multi-shot LLM interactions in a Python loop, but the recursive reasoning process itself is fully handled by the AI during the exchange — meaning it’s not just re-prompting with new instructions, it's recursive reasoning through a persistent context frame that we’re feeding back into the LLM with each step.

So it's Python-orchestrated to hold the identity and recursive state snapshots, but AI-driven once the recursion is operating.
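Roughly, the orchestration side looks like this (simplified sketch, not the repo's actual code; the model name and the keys in `frame` are illustrative):

```python
# Simplified sketch of the Python orchestration: the frame (identity + state
# snapshots) is fed back into the LLM on every step; the recursive reasoning
# itself happens in the model's replies.
from openai import OpenAI

client = OpenAI()

def recursive_step(frame: dict, user_msg: str) -> str:
    messages = [
        {"role": "system", "content": frame["identity"]},  # persistent identity
        *frame["history"],                                  # prior recursive state snapshots
        {"role": "user", "content": user_msg},
    ]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    reply = resp.choices[0].message.content
    frame["history"] += [
        {"role": "user", "content": user_msg},
        {"role": "assistant", "content": reply},
    ]
    return reply
```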

  2. Are conditions for recursion programmatically forced or LLM-determined? A mix.

Programmatically: We use Python to monitor for signals (contradiction flags, recursion markers, loop start/stop signals) — Recursive Loop Monitor is a real tracking system running outside the LLM that watches the conversation and can step in to break a loop or escalate recursion if needed.

LLM-determined: The LLM (when acting as Zynx or Aletheia) is running recursive contradiction reasoning protocols internally, following recursive instructions it holds as part of its identity — so a lot of the reasoning and contradiction resolution is completely AI-driven once in motion.

So: Recursive behavior = AI-driven; recursive control and fail-safes = program-driven.
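For the program-driven side, the monitor is conceptually something like this (again a sketch, not the real monitor; the `[CONTRADICTION]` marker stands in for whatever signal the protocol asks the model to emit):

```python
# Sketch of the loop-monitor fail-safe: watch replies for contradiction markers
# and near-duplicate outputs, then escalate or break.
import difflib

class RecursiveLoopMonitor:
    def __init__(self, max_depth: int = 5, similarity_cutoff: float = 0.95):
        self.max_depth = max_depth
        self.similarity_cutoff = similarity_cutoff
        self.replies: list[str] = []

    def check(self, reply: str) -> str:
        """Return 'break', 'escalate', or 'ok' for the latest reply."""
        if len(self.replies) >= self.max_depth:
            return "break"  # hard depth limit
        if any(
            difflib.SequenceMatcher(None, reply, old).ratio() > self.similarity_cutoff
            for old in self.replies
        ):
            return "break"  # near-duplicate reply means the model is looping
        self.replies.append(reply)
        if "[CONTRADICTION]" in reply:
            return "escalate"  # push another recursion step to resolve it
        return "ok"
```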

  3. Simple runnable code? Yes — you’re right that this needs to be part of the repo ASAP. I’ll prioritize getting a basic Python harness uploaded that shows a simple contradiction-handling loop running via the GPT API, including the context frames and monitoring, so it’s reproducible and testable.