• 0 Posts
  • 14 Comments
Joined 3 years ago
Cake day: July 2nd, 2023

  • Which is exactly my point. A biological brain, human or otherwise, is incredibly efficient for what it does. It’s also effectively infinitely parallel which is impossible to do with the current tech.

    In order to even attempt or approach a system that could be remotely considered “conscious” we would need something that is way more efficient just because of logistics. What they are trying to do with the current hardware has basically reached the practical maximum of scalability.

    Hardware footprint and power are massive constraints. The current data centers can’t even run at full capacity because the power grid cannot supply enough power, and what they are using is driving energy costs up for everyone. On top of that, a bio brain is way more dense. We would need absurd orders of magnitude more hardware to come close with the current tech.

    And then there is the software. Neural nets are a dumbed-down, very simplified model of how brains work. Part of that simplification is static weights: the models do not update themselves during execution, because doing so would very quickly muck up the weights from training and basically produce nonsense. They don’t have feedback mechanisms. We train them on one thing, and that’s it.

    In the case of LLMs, they are trained on the structure of language. We can’t train meaning because that requires unimaginable orders of magnitude more complexity to even attempt.

    If AGI or artificial sentience is possible, it will never be done with the current tech. I would argue the bubble has likely set AI research back decades, because the short-sighted, ham-fisted way companies are pushing it has soured public perception.
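The "static weights" point above can be sketched in a few lines. This is a toy stand-in, not any real model: the parameters are fixed once "training" is done, and inference is just arithmetic on those fixed numbers.

```python
# Toy fixed-weight "network": parameters are set at training time and
# never change while the model is running ("static weights").
W = [0.5, -1.0, 2.0]   # frozen after training
b = 0.1                # frozen after training

def forward(x):
    # Inference is just arithmetic on fixed parameters: no feedback
    # loop, no weight updates, no learning at run time.
    return sum(wi * xi for wi, xi in zip(W, x)) + b

x = [1.0, 2.0, 3.0]
# Same input, same output: nothing inside the network changed between calls.
assert forward(x) == forward(x)
```

A real network is vastly bigger, but the run-time story is the same: the weights are read-only during execution.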


  • but I do wonder about the confidence with which you can totally dismiss the notion

    For the current tech, 100%.

    These are static systems. They don’t update themselves while running. If nothing else, a system of consciousness has to be dynamic. Also, the way these models are trained is unlikely to produce consciousness even if it theoretically could.

    Assuming that they are seems like a leap, but since we don’t really know exactly what consciousness is,

    We don’t technically have a definition for what it is, but we have some criteria. Consciousness is an emergent property, so theoretically a system could become conscious unintentionally if it is complex enough. But again, it requires a system to be dynamic, to be able to change and grow on its own.

    Neural nets are just trained on data. LLMs specifically are trained on the structure of language, which is the only reason they work as well as they do. We can’t train meaning or understanding, but being able to churn out something resembling information is a byproduct of training on language, because language is used to communicate information.

    The issue that a lot of people have is they assume that something is intelligent/sentient if it can produce language, which is what we have seen in nature. But while it takes intelligence, and maybe sentience, to create/develop a language, nothing says that intelligence or sentience is required to “use” language.

    LLMs do one thing: produce the next word for a given context. It does not matter how big we make them or what the underlying complexity is; the model just produces a word. The software running the model adds that word to the context and executes a new loop with the updated context. It runs until it hits a terminating token indicating the current output is “finished”.

    Even the models that are marketed as “thinking”/“reasoning” models just have additional context tokens for the “thinking” section that basically force the model to generate more context, which, thanks to the way language is constructed, can constrain the output. But it’s only ever outputting the next word.
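The loop described above can be sketched directly. The “model” here is just a lookup table standing in for real next-token prediction; the driver logic around it is the point: one token out per step, append, repeat until a terminating token.

```python
# Toy autoregressive loop. TOY_MODEL is a stand-in for an LLM's
# next-token prediction -- here it only looks at the last token.
TOY_MODEL = {
    "<start>": "the", "the": "cat", "cat": "sat", "sat": "<eos>",
}

def generate(context, max_tokens=10):
    for _ in range(max_tokens):
        next_token = TOY_MODEL[context[-1]]  # model emits exactly ONE token
        if next_token == "<eos>":            # terminating token: stop
            break
        context.append(next_token)           # append, then loop with new context
    return context

print(generate(["<start>"]))  # ['<start>', 'the', 'cat', 'sat']
```

A real model conditions on the whole context rather than the last token, but the outer software is the same loop: predict one token, extend the context, run again.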


  • Additionally, he maintains that his LLM is female

    I know nothing about this guy, but given some unfortunate tendencies among the tech communities I physically recoiled when I read this. If the thing was actually sentient I’d want to get it away from him.

    Obviously the guy is another case of AI psychosis.

    LLMs, and neural nets in general, literally cannot be sentient. Neural nets are a very, very dumbed-down model of how brains work, and these are static systems that just output probabilities based on the current context.

    Even if we could someday create consciousness, or at least something that could actually think, it would require completely different hardware than what we currently have. Even if we could run it on current hardware, it would require way more resources and power than is physically feasible.


  • Which is one of the few things these things can actually do, because their entire thing is language processing.

    Basically, put in a vague or comprehensive description of what you are trying to do or trying to find. It can generate a few queries based on your input, do a handful of searches, then give you the results and highlight which ones might be the most relevant to your input.

    But that still requires traditional, and specifically deterministic, search.

    The way people blindly trust its output without any actual search or additional context is the worst way to use it. Might as well ask a magic 8-ball.
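The workflow described above can be sketched as a pipeline. Both `llm` and `search` are stubs I made up for illustration, not a real API: the point is the division of labor, where the model only rewrites the request into queries and the actual lookup stays deterministic.

```python
# Hypothetical search-assist pipeline: the LLM only generates queries;
# a deterministic search backend does the real lookup.
def llm(prompt):
    # Stub: a real model would turn the request into search queries.
    return ["python csv parse stdlib", "python read csv example"]

def search(query):
    # Stub for a deterministic search backend.
    return [f"result for: {query}"]

def assisted_search(user_request):
    queries = llm(f"Write search queries for: {user_request}")
    # The user still ends up with real search results,
    # not generated text presented as fact.
    return [hit for q in queries for hit in search(q)]

print(assisted_search("how do I parse CSV in Python?"))
```

Skipping the `search` step and trusting the model's generated text directly is exactly the magic-8-ball mode of use.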


  • I like playing around with them occasionally, but I only use local models. I cannot stand all the cloud stuff in general, and with the way neural nets work you can get as good or better results out of a smaller/narrower model. The same applies to LLMs.

    The massive models the big companies are putting out there are generally just bad. Even if one can occasionally give you accurate output for whatever you are asking it to do, it uses way more power and resources than is reasonable, and you could have found what you were looking for with a simple web search.


  • Lithium Iron Phosphate (LiFePO4, “LFP”) batteries are actually really stable. They’re way less likely to catch fire from thermal runaway and don’t lose capacity as easily.

    They just aren’t very energy dense, so you need more weight per Wh. They also operate at a lower voltage per cell, which means they charge more slowly.

    They are already used in short- to mid-range EVs, but the lower energy density makes it impractical to pack in enough capacity for longer-range EVs.
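Rough numbers make the trade-off concrete. These are ballpark nominal figures I'm assuming for illustration (not any specific vehicle): ~3.2 V/cell and ~160 Wh/kg for LFP versus ~3.7 V/cell and ~250 Wh/kg for a typical NMC chemistry.

```python
import math

# Lower cell voltage => more cells in series for the same pack voltage.
PACK_VOLTAGE = 400  # a common EV pack target, volts (assumed)
lfp_series = math.ceil(PACK_VOLTAGE / 3.2)  # LFP: 125 cells in series
nmc_series = math.ceil(PACK_VOLTAGE / 3.7)  # NMC: 109 cells in series

# Lower energy density => more cell weight for the same capacity.
pack_kwh = 70
lfp_kg = pack_kwh * 1000 / 160  # ~437 kg of LFP cells
nmc_kg = pack_kwh * 1000 / 250  # ~280 kg of NMC cells
print(lfp_series, nmc_series, lfp_kg, nmc_kg)
```

That ~150 kg of extra cell mass for the same 70 kWh is why LFP shows up in shorter-range cars, where the smaller pack keeps the weight penalty manageable.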


    As an aside, I would argue that for the majority of people a large-capacity EV battery is a bit of a waste. Mine is ~70 kWh, give or take. In optimal conditions my car estimates 240-250 mi at 100%. Over the winter it’s showing anywhere from 140-180 mi at 80%.

    I moved cross country right after getting it and drove it 1000 miles. It took a bit longer than it would in a gas car, but it was doable. Just have to plan segments to get to the next charger, and try to charge to 100% with Level 2 charging (240 V AC) if you can when you stop for the night.
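The efficiency implied by those figures is easy to back out. Using the midpoints of the ranges quoted above (my choice of midpoint is an assumption; the capacity and range numbers are from the comment):

```python
pack_kwh = 70            # usable capacity, per the comment
summer_mi = 245          # midpoint of the 240-250 mi estimate at 100%
winter_mi_at_80 = 160    # midpoint of the 140-180 mi estimate at 80%

summer_eff = summer_mi / pack_kwh                 # ~3.5 mi/kWh
winter_eff = winter_mi_at_80 / (0.8 * pack_kwh)   # ~2.9 mi/kWh
print(round(summer_eff, 1), round(winter_eff, 2))
```

So winter conditions cost roughly 15-20% in efficiency here, on top of the habit of charging only to 80%, which together explain the much shorter displayed range.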



  • And largely unenforceable. It can only really block the sale of prebuilt, proprietary crap like Bamboo, but most of these things are built out of common parts that are used for a variety of applications, and there are countless completely open-source printers you can just build from sourced parts that this literally cannot apply to.

    Even for most of the prebuilt printers or kits, you can put open-source firmware on them. They can lock the bootloader on the board that comes with it, technically, but the board is easy enough to replace on most printers, and it’s a standard microcontroller and/or Raspberry Pi nowadays.

    Half the time people who get those kits end up replacing various components to customize for their use case. I have a Sovol SV08 that I put stock Klipper on and want to do the multi-print-head mod someday. I’ve even considered replacing the main board with a more powerful one so I can run higher microsteps without overloading the processor.