

reVanced is alive, well, good, and at this point necessary
Most skilled engineers, and even mildly skilled engineers, don’t use slop generators to write code. Some of them use one occasionally for menial tasks, although I’m not convinced it actually saves them time. It sure doesn’t every time we measure it.
There is, however, a plague of low-skilled people who have convinced themselves that they’ve found a shortcut to being an engineer. Those people are producing bad things at a fast pace, and the only reason we’re not in an unsolvable crisis yet is that their slop isn’t hitting prod very often, on account of being bad.
This doesn’t make me uneasy. It makes me resentful, a little angry, and a lot tired. Thanks for bringing it to my attention; I will make sure that nothing from that project or from that author ever crosses my ecosystem again.


Don’t listen to what the other guy is saying; it’s all bullshit. His vocabulary betrays a wannabe haxxor with bad ideas about everything and weird choices, and his suggestions are the same.


“Sane” people are an exceedingly small minority. Everyone is a couple of good conversations away from falling into some sort of rabbit hole from which there is no return. Some people have very easily triggered schizophrenia, which is more obvious, but nobody is OK and nobody is immune.


You don’t even have to “break” an LLM into anything. It continues your prompts, making sentences as close to something people will mistake for language as possible. If you give it a paranoid request, it will continue in the same language.
The only thing training gave it is the ability to create sequences of words that resemble sentences.


Will they patch useradd or adduser to support that?


There is a bunch of stuff that could become an alternative, if the users come.


By now it’s becoming clear that this is fundamentally the best version of the thing we’re going to get. This is prime time.
For some time, there was a legitimate question of “if we give it enough data, will there be a qualitative jump”, and as far as we can see right now, we’re well past that jump. A predictive algorithm can form grammatically correct sentences that are related to the context. That’s it; that’s the jump.
Now a bunch of salespeople are trying to convince us that if there was one jump, there will necessarily be others, while there is no real indication of that.


You’re falling into the same trap. When the letters on the screen tell you something, it’s not necessarily the truth. When “I’m reasoning” is written in a chatbot window, it doesn’t mean that there is something there that’s reasoning.


Your deep insecurity is too on the nose


The gold standard for me, about anything really, is a body of published research from relevant experts who are not affiliated with the entities invested in the outcome of the study, forming some kind of scientific consensus. The question of sentience is murky water, so I, as a random programmer, can’t tell you what the exact composition of those experts and their research should be; I suspect that itself is a subject for a study or twelve.
Right now, based on my understanding of the topic, there is a binary sentience/non-sentience switch, and then a gradient after it. I’m not sure we know enough about the topic to understand the gradient before that point. I’m sure it should exist, but since we never actually made one, or even confirmed that it’s possible to make one, we don’t know much about it.


That’s the fun thing: the burden of proof isn’t on me. You seem to think that if we throw enough numbers at the wall, the resulting mess will become sentient any time now. There is no indication of that. The hypothesis you operate on seems to be that complexity inevitably leads not just to any emergent phenomenon, but to the particular phenomenon you predicted would emerge. That hypothesis rests exclusively on the idea that emergent phenomena exist. We have spent a significant amount of time running a worldwide experiment on it, and the conclusion so far, if we peel the marketing bullshit away, is that if we spend all the computation power in the world on crunching all the data in the world, the autocomplete gets marginally better in some specific cases. And also that humans are idiots and will anthropomorphize anything, but that’s a given.
It doesn’t mean this emergent leap is impossible, but mainly because you can’t really prove a negative. Still, we’re no closer to understanding the phenomenon of agency than we were a hundred years ago.


You’re attributing a lot of agency to the fancy autocomplete, and that’s a big part of the overall problem.
Watching YouTube in a mobile browser works, but it is a completely separate form of torture.