This is important for cybersecurity purposes, as it means you can predict patterns in certain vibe-coded “software” and in system-related decisions.
If you’re lazy enough not to do it the proper way, you’re lazy enough to use the wrong randomizer.
An LLM is a statistical model that predicts what makes sense to come next. If its training data usually indicate that certain tokens should follow, those are the tokens it emits.
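A toy sketch of that idea (the scores here are made up, not from any real model): sampling from a softmax over candidate tokens means the continuation the training data favored will dominate the output, even though the process is technically random.

```python
import math
import random

# Hypothetical scores for candidate "random number" tokens; the point is
# only that sampling from a skewed distribution over-produces the favorite.
logits = {"7": 2.5, "3": 1.8, "4": 0.5, "10": 0.1}

def softmax_sample(logits, rng):
    """Sample one token in proportion to exp(score)."""
    weights = {tok: math.exp(v) for tok, v in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

rng = random.Random(0)
counts = {}
for _ in range(10_000):
    tok = softmax_sample(logits, rng)
    counts[tok] = counts.get(tok, 0) + 1

# "7" comes out far more often than the others, mirroring how a model
# over-produces whatever its training data said usually comes next.
```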
I have trouble finding these studies again, but I saw one with some nice graphs illustrating that human and animal behavior is generally very divergent: with more people and more systems in place to accomplish goals, humans automatically branch out into different areas.
Another study showed that as training continues, LLMs and other neural networks become convergent, and that this accelerates as constraints are added. More evidence on the pile that they can never approach any real AI with the current models.
To be fair, humans tend to pick similarly. They avoid round numbers and pick numbers like 7 or 3 because those feel most random.
True randomness is jarring to people, as iTunes found out. People complained about the “random” shuffle function because it sometimes played a certain artist or song multiple times in a row. So Apple made an algorithm that took into account what had already been played, to make it feel more random.
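A minimal sketch of that kind of “feels more random” shuffle (not Apple’s actual algorithm, which was never published): do a plain random shuffle, then greedily avoid playing the same artist twice in a row.

```python
import random

def spaced_shuffle(tracks, rng):
    """Shuffle (artist, title) pairs, then greedily prefer a next track
    whose artist differs from the one just played."""
    pool = list(tracks)
    rng.shuffle(pool)  # truly random order, streaks and all
    result = []
    while pool:
        # Take the first pooled track by a different artist than the last
        # one played; fall back to pool[0] if only one artist remains.
        pick = next((t for t in pool if not result or t[0] != result[-1][0]),
                    pool[0])
        pool.remove(pick)
        result.append(pick)
    return result

# Hypothetical playlist: three tracks each by artists "A" and "B".
playlist = [("A", f"a{i}") for i in range(3)] + [("B", f"b{i}") for i in range(3)]
mixed = spaced_shuffle(playlist, random.Random(1))
# Same songs, but same-artist streaks are broken up, so it "feels" random
# even though it is statistically less random than a plain shuffle.
```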
Isn’t that a sign of intelligence?
No.
Forgot your “/s” here…