

God, that was a bad read. Not only is this person woefully misinformed, they’re complaining about the state of discourse while directly contributing to the problem.
If you’re going to write about tech, at least take some time to have a passable understanding of it, not just “I use the product for shits and giggles occasionally.”

I’ll preface this by saying I’m not an expert, and I don’t like to speak authoritatively on things that I’m not an expert in, so it’s possible I’m mistaken. Also I’ve had a drink or two, so that’s not helping, but here we go anyways.
In the article, the author quips about a tweet, in a way that suggests they fundamentally misunderstand how LLMs work:
The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. It’s not what we would generally consider a true index-based search.
Training LLMs is a costly and time-consuming process, so it’s fundamentally impossible to retrain an LLM on anywhere near the timescale it takes to update a simple index.
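To see why the timescales are so lopsided, here’s a toy inverted index — the kind of structure a traditional search engine is built around. Everything here (the function names, the example documents) is made up for illustration, and real engines are vastly more sophisticated, but the key property holds: adding a new document only touches that document’s own terms, so the index can absorb new content immediately, with no retraining of anything.

```python
from collections import defaultdict

# Toy inverted index: maps each term to the set of document IDs containing it.
index = defaultdict(set)

def add_document(doc_id, text):
    # Indexing a document is just recording which terms appear in it.
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    # Lookup is a single dictionary access, regardless of corpus size.
    return index[term.lower()]

add_document(1, "LLMs are trained on a snapshot of the web")
add_document(2, "search engines update their index continuously")
```

Compare that to an LLM, where incorporating new information means another pass of gradient-descent training over enormous amounts of data. That asymmetry is the whole point of the tweet.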
The author fails to address any of these issues, which suggests to me that they don’t know what they’re talking about.
I suppose I could concede that an LLM can fulfill a role similar to the one a search engine traditionally has, but it’d kinda be like saying that a toaster is an oven. They’re both enclosed boxes that heat food, but good luck trying to bake 2 pies at once in a toaster.