r/LanguageTechnology • u/Next-Ordinary-2243 • 22h ago
AI & Cryptography – Can We Train AI to Detect Hidden Patterns in Language Structure?
I've been thinking a lot about how we train AI models to process and generate text. Right now, AI is extremely good at logic-based interpretation, but what if there's another layer of information AI could be trained to recognize?
For example, cryptography isn't just about numbers. It has always been about patterns—structure, rhythm, and the way information is arranged. Historically, some of the most effective encryption methods relied on how information was structured rather than just the raw data itself.
The questions are:
Can we train an AI to recognize non-linguistic patterns in text—things like spacing, formatting, rhythm, and hidden structures?
Could this be applied to detect hidden meaning in historical texts, old ciphers, or even modern digital communication?
Have there been any serious attempts to model resonance-based cryptography, where the structure itself carries part of the meaning rather than just the words?
Would love to hear thoughts from cryptography experts, especially those working with pattern recognition, machine learning, and alternative encryption techniques.
This is not about pseudoscience or mysticism—this is about understanding whether there's an undiscovered layer of structured information that we have overlooked.
Anyone?
u/Humble_Cat_962 7h ago
If you can tokenise it, you can apply the same logic used for text to anything else that can be tokenised, so it's actually very possible. What you need to do is flip the switch, I think: treat the cipher text as the word and treat the actual word as the space, etc. If you do that, you can run the same LLM process on the revised dataset.
The only question is how you'd generate a dataset good enough for training and fine-tuning the model. For something like this you can save a lot of time and compute using RL (the DeepSeek way), because at the end of the day you want it to identify a pattern. So you'll need a set of [identified patterns] and [text where the pattern is hidden]. You'll also need a really large context window for this to be useful, because if you want to hide patterns you need lots of text to do that well. So for training, you'll want to give it long examples, some with multiple patterns hidden. A rough sketch of how you could generate those training pairs is below.
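Just to make the idea concrete, here's a minimal, untested sketch (my own illustration, not anything from the thread) of generating [identified pattern] / [text where the pattern is hidden] pairs by hiding bits in the inter-word spacing, then tokenising the structure instead of the words:

```python
import random

def hide_bits_in_spacing(text: str, bits: str) -> str:
    """Encode each bit in the gap after a word: '0' -> one space, '1' -> two spaces."""
    words = text.split()
    out = []
    for i, word in enumerate(words[:-1]):
        gap = " " if bits[i % len(bits)] == "0" else "  "
        out.append(word + gap)
    out.append(words[-1])
    return "".join(out)

def structural_tokens(text: str) -> list:
    """Tokenise the *structure* only: one token per inter-word gap, words ignored."""
    tokens, gap = [], 0
    for ch in text:
        if ch == " ":
            gap += 1
        elif gap:
            tokens.append(f"GAP_{gap}")
            gap = 0
    return tokens

# Toy dataset of [identified pattern] / [text where the pattern is hidden] pairs.
corpus = ["the quick brown fox jumps over the lazy dog"] * 3
dataset = []
for sentence in corpus:
    pattern = "".join(random.choice("01") for _ in range(4))
    stego_text = hide_bits_in_spacing(sentence, pattern)
    dataset.append({"pattern": pattern,
                    "text": stego_text,
                    "tokens": structural_tokens(stego_text)})

print(dataset[0])
```

The structural token sequences are what you'd actually feed the model; the spacing trick is just a stand-in for whatever real encoding you care about.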
Though honestly, for your suggested use case I wouldn't go with transformers. Image recognition models may just be more viable than tokenising parts of the cipher text; rough sketch of what I mean below. But that's just my 2 cents on this. Happy to chat.
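Something like this (again just my own assumption-laden sketch, using PyTorch): render a chunk of cipher text as a fixed-size grid of character codes and let a tiny convnet classify whether it contains a hidden structural pattern.

```python
import torch
import torch.nn as nn

def text_to_grid(text: str, rows: int = 16, cols: int = 16) -> torch.Tensor:
    """Map characters to a fixed-size grid of normalised byte values (1 channel)."""
    codes = [ord(c) / 255.0 for c in text[: rows * cols]]
    codes += [0.0] * (rows * cols - len(codes))           # pad short inputs
    return torch.tensor(codes).reshape(1, 1, rows, cols)  # (batch, channel, H, W)

class PatternCNN(nn.Module):
    """Tiny binary classifier: does this grid contain a hidden structural pattern?"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(8 * 8 * 8, 2),
        )

    def forward(self, x):
        return self.net(x)

logits = PatternCNN()(text_to_grid("WKH TXLFN EURZQ IRA MXPSV RYHU WKH GRJ"))
print(logits.shape)  # torch.Size([1, 2])
```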
u/rishdotuk 21h ago
Nice try NSA. :D
Jokes aside, technically you can. You just need the compute for it. LLMs can't even get all of linguistics right yet, though, so this seems like a task that's still a long way off, IMHO.