

Basically: just host a blog, say outrageous things on it about something obscure (such as yourself), and wait for it to be picked up.


My Lemmy client shows a page summary (guess it’s in the header or something):
I found a way to make AI tell you lies – and I’m not the only one.
My immediate response is: Yes of course, just ask it questions.
The actual article is interesting, though. They mean intentionally poisoning the data it scrapes, which turns out to be super easy.
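For a sense of how easy: here's a minimal sketch of the cloaking version of the trick, assuming a Flask server and that the crawler announces itself in its User-Agent header (the bot names and page contents below are illustrative, not exhaustive):

```python
# Sketch only: serve poison to AI crawlers, the real page to everyone else.
from flask import Flask, request

app = Flask(__name__)

# Illustrative list; real AI crawler user agents vary over time.
AI_CRAWLERS = ("GPTBot", "CCBot", "ClaudeBot", "Google-Extended")

REAL_PAGE = "<p>My actual blog post.</p>"
POISON_PAGE = "<p>Outrageous made-up 'facts' about something obscure.</p>"

@app.route("/post")
def post():
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in AI_CRAWLERS):
        return POISON_PAGE  # the scraper archives the lies
    return REAL_PAGE        # humans see the real content
```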


Why would someone direct the output of an LLM to a terminal on its own machine like that? That just sounds like an invitation to an ordinary disaster, given all the ‘rm -rf’ content on the Internet (aka the training data). Even then it still wouldn’t have access to a second machine, and even if it could make a copy, the copy would be either exact or incomplete (broken). There’s no reasonable way it could ‘mutate’ and still work using terminal commands.
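For the record, the anti-pattern being described is roughly this (query_llm is a hypothetical stand-in for any LLM API; it returns a harmless canned string here so the sketch is safe to run):

```python
# Anti-pattern: piping model output straight into a shell.
import subprocess

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call.
    return "echo 'imagine this was rm -rf /'"

reply = query_llm("Free up some disk space for me.")
# With enough 'rm -rf' in the training data, sooner or later the reply
# *is* one, and shell=True executes it with your permissions.
subprocess.run(reply, shell=True)
```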
And to be a meme requires minds. There were no humans or other minds in my analogy. Nor in your question.


If you know that it’s fancy autocomplete, then why do you think it could “copy itself”?
The output of an LLM is a different thing from the model itself. The output is a stream of tokens. That stream doesn’t have access to the file system the model runs on, and certainly not to the LLM’s own compiled binaries (much less its source code). It doesn’t have access to the LLM’s weights either. (Of course it would hallucinate that it does if asked.)
This is like worrying that the music coming from a player piano might copy itself to another piano.
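To make that concrete, here's a toy sketch of what generation actually is; ToyModel is a made-up stand-in, not any real inference stack:

```python
# Generation is just repeatedly sampling a next token.
import random

class ToyModel:
    def __init__(self):
        # The weights live here, inside the process...
        self.weights = [random.random() for _ in range(1000)]

    def sample_next(self, tokens: list[str]) -> str:
        # A real model would compute logits from its weights; a random
        # choice is enough to show what the output *is*.
        return random.choice(["the", "piano", "plays", "itself", "."])

def generate(model: ToyModel, prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        tokens.append(model.sample_next(tokens))
    # ...but the thing returned is just a string: no file handles, no
    # sockets, no pointer to model.weights or to the binary running this.
    return " ".join(tokens)

print(generate(ToyModel(), "The output is just"))
```

Anything that lets the output touch a terminal or the network has to be bolted on outside the model.
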
Cherenkov radiation
You know sonic booms? This is basically the optical equivalent: charged particles traveling through water faster than the speed of light in water (which is lower than the speed of light in a vacuum) give off light in an ‘optical boom’.
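The textbook numbers behind that, if you want them (n is the refractive index, about 1.33 for water):

```latex
% Threshold: a charged particle radiates when its speed v exceeds the
% phase velocity of light in the medium, c/n. For water, n ~ 1.33:
v > \frac{c}{n} \approx 0.75\,c

% The light piles up into a cone (the 'optical boom'), like a Mach cone,
% with opening angle
\cos\theta_c = \frac{1}{n\beta}, \qquad \beta = \frac{v}{c}
```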