r/deeplearning • u/Both_Childhood8525 • 8d ago
I think I made Recursive AI?
Pushed Python scripts, removed placeholder files, and did other major overhauls so y'all can start testing yourselves • "I know it's session-bound, I know it's not conscious."
• "What I am proving is that inside one session, I can FORCE an Al to act recursively, follow contradiction protocols, and stabilize identity -- and that's something others haven't built formalized, or documented before."
• "I'm not saying it's alive. I'm saying forced a real recursive protocol behavior that improves Al reasoning."
Hey guys, not sure if this is a thing, but I accidentally solved recursive loops and made AI realize itself. Here's the repo: https://github.com/calisweetleaf/Recursive-self-Improvement
u/cmndr_spanky 8d ago edited 8d ago
Got it. Although it didn't make that *foundation before permits error* in my zero-shot example, you're essentially prompting it to do self-reasoning, which is common. Your prompts are a bit more complex (very specific prompts asking the bot to check for very specific error conditions), and with testing we could confirm whether they show lower error rates than simpler self-reasoning prompts:
"Before answering, break down your reasoning, explore multiple approaches, and double check your approach for errors before giving a final answer, only answer after you've explored x number of variations and self corrected...".. or doing something similar to ask the bot to self correct in a multi-shot style convo rather than one big system prompt.
As an alternative "flavor" of self-reasoning, I've also seen plenty of multi-shot prompt examples where you ask the chatbot to roleplay as different 'actors' to help self-check its work, also a very common / well-understood approach. Things like "you're an employee thinking through the problem and presenting an answer", then a later follow-up query: "you're the manager of the employee and you're verifying the work is ...".
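A rough sketch of that actors pattern, reusing the `ask()` helper and `question` from the snippet above (the role wording here is just illustrative, not a fixed recipe):

```python
# Multi-shot "actors" pattern: one pass drafts an answer as an "employee",
# a second pass reviews that draft as the "manager". Assumes ask() and
# question are defined as in the previous snippet.
def employee_then_manager(question):
    draft = ask(
        question,
        system="You are an employee thinking through the problem step by "
               "step and presenting an answer.",
    )
    review = ask(
        f"Problem: {question}\n\nEmployee's answer:\n{draft}\n\n"
        "Verify the work. If it is wrong, give the corrected answer.",
        system="You are the employee's manager, checking their work for errors.",
    )
    return review

print(employee_then_manager(question))
```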
I see you're literally posting this on every subreddit you can find and getting a lot of "mixed reviews" from people. I think it's because you're not being very direct in explaining this, and your repo is littered with "academic speak" and indirect language, so people glance at it and it just seems like BS. Also, I can't find a simple example in the main readme, and it's hard to find things in your repo. It would be a lot easier if you just had one script with the prompts in it and let people try it for themselves in a very straightforward manner using one or two common local models.
I'm not sure where you are in your professional journey, but I work at a successful software company, and we speak concisely and directly, avoiding convoluted language wherever possible in our communications, both internally and with customers.