That would involve a lot of trust in the AI and its training materials
I love how all these tech bros seem to forget that capitalism exists whenever they’re babbling on about how AI is going to “solve everything”. They conveniently forget that 99% of us are not going to receive any benefit from the monetization of this technology.
It’s because they don’t see 99% of us as people.
Well it all depends on your perspective of what “everything” is that needs solving.
One good thing the AI bubble did: it showed me we have so many rich idiots that it is an actual problem
His 2014 book Superintelligence was an early examination of AI’s existential risk. One memorable thought experiment: An AI tasked with making paper clips winds up destroying humanity because all those resource-needy people are an impediment to paper clip production.
Good thing we have this philosopher to have the most superficial thoughts about AI while he poops. His second book now seems to be along the lines of “guys, AI will fix everything”. What a great follow-up to “AI will destroy everything”. Top twist.
The paperclip maximizer is a thought experiment. That’s all. It’s an overly simplistic way to explain the gist of a more complex idea. The fact that even this basic thought experiment goes over people’s heads just further proves why that simplification was needed in the first place.
The paperclip maximizer is just capitalism, and has been practiced by people for centuries. Of course an AI taught by capitalists would replicate that behavior.
Your comment would be more convincing if you laid out the complex idea you’re alluding to, instead of saying that a simple example is all people need.
As far as I can tell, thought scientists stay losing, because pretending your thoughts comprise a form of science that ends in a measurable result is sophistry.
It’s to illustrate the alignment problem. What you literally ask isn’t always what you actually want. This is usually obvious to humans but not necessarily to an AI. If you sit in a self-driving car and tell it to take you to the airport as fast as possible, you might arrive three minutes later covered in vomit with the entire police department after you. That’s obviously not what you wanted, but you got exactly what you asked for.
The paperclip maximizer is a cartoon example of this. If you just ask it to make as many paperclips as possible, that becomes its priority number one and everything gets turned into paperclips and you might not get the chance to tell it this isn’t what you meant.
A kind of real-life example is the story of a city that started paying people for rat tails to eradicate the rat population, only for folks to start breeding rats instead to make money. It’s a classic case of unintended results due to unspecific requirements.
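To make that gap concrete, here’s a minimal toy sketch in Python (every action name and number is invented for illustration; this isn’t from Bostrom’s book): a planner that ranks actions purely by paperclip count picks the destructive option, while one scoring what we actually meant doesn’t.

```python
# Toy illustration of a misspecified objective (all names and numbers invented).
actions = {
    "use spare wire":       {"paperclips": 1,      "harm": 0},
    "melt down the car":    {"paperclips": 500,    "harm": 5},
    "strip the power grid": {"paperclips": 10_000, "harm": 100},
}

def literal_score(effects):
    # What we wrote down: maximize paperclips. Harm never enters the objective.
    return effects["paperclips"]

def intended_score(effects):
    # What we actually meant: paperclips are nice, but harm is very costly.
    return effects["paperclips"] - 1_000 * effects["harm"]

print(max(actions, key=lambda a: literal_score(actions[a])))   # "strip the power grid"
print(max(actions, key=lambda a: intended_score(actions[a])))  # "use spare wire"
```

That’s the alignment problem in miniature: the second objective is what we wanted, but the first is what we actually wrote down.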
> the story of a city that started paying people for rat tails to eradicate the rat population, only for folks to start breeding rats instead to make money.
Or the real-life story of the US elementary school students who saved up money to buy and then free slaves, which - when examined more closely - turned out to be driving growth in the slave trade, not slowing it down.
In both cases, you figure out what’s off kilter and you stop doing it.
It’s a lot easier to turn off “AI machines” than, for instance, powerful industries like oil and gas…
> you might not get the chance to tell it this isn’t what you meant.
And that is where the thought experiment left the tracks - lifted off with escape velocity and is now passing Voyager 2…
In what cartoon world do we not get a chance to shut off the Doomsday Device? I mean, it was a funny little twist at the end of Dr. Strangelove, but as realistic as many elements of that story were, that was not one of them.
Alignment is undecidable, so no point wasting synapseseconds.
It’s not a matter to decide but a problem to try and solve. In most cases we get to learn from our mistakes but when it comes to AGI we might not.
Or are you suggesting we shouldn’t even think about it but rather just roll the dice and see what happens?
Undecidable in the sense that no solution can exist for that problem class. You can start with the definition of what exactly you’re aligning with, how you measure that, how you derive applicable constraints on the system’s evolution from your measurements, and just what “humanity” even is in this iterative context.
Apart from that, we’re already in an out-of-control, winner-takes-all arms race in which competing nations use AI, including for social control and on the battlefield. Ivory tower is a meal ticket with no practical relevance.
> Ivory tower is a meal ticket with no practical relevance.
See also: https://rmst202.sites.olt.ubc.ca/files/2022/04/illich_deschooling-society.pdf
Actually, that’s neuroscience.
The “experiment” is one you conduct on yourself; it’s not for thinking about a process and using your imagined results as the basis for further study. It’s very useful in a number of non-scientific fields, though, and it can serve as an aid in scientific education, so it shouldn’t be written off entirely.
The paper clip thought experiment is a punchy, memorable example of the conflict between what input you give to a computer and what the computer interprets from that. The goal is for people who hear it to remember that they need to be thoughtful about what exactly they want and precise in their phrasing when they’re programming or training an AI.
> The paper clip thought experiment is a punchy, memorable example
See also: children’s books about the dangers of magic. https://gutenberg.ca/ebooks/eagere-halfmagic/eagere-halfmagic-00-e.html
I couldn’t finish the article. What a nincompoop.
For some reason this reminds me of the “effective altruism” movement (if you can call it a movement).
Are AI and AGI the same now? Is there a new theory of “just has to be big enough”? That would explain America’s self-destructive datacenter planning.
I, for one, would immediately switch on an AGI. I think even a 20% probability of a benevolent AGI is acceptable, compared with what humanity is doing.
AGI is always AI, but AI isn’t always generally intelligent. AI is the parent category that AGI is a subcategory of. It’s like the difference between the terms “plant” and “dandelion.” All dandelions are plants, but not all plants are dandelions.
Early examples of AI came out in the 1960s: programs that could solve algebra equations or conduct basic psychological interviews… They were “smart” in very limited scopes.
You misunderstood what I asked. I know the difference very well. What I don’t get is why promoting stupid AIs will “solve all problems”.
AGI would be capable of solving all our problems. It’s not LLMs that Bostrom is talking about here.
Any new technology is subject to the same problems under capitalism, specifically maximising profits to the detriment of anything else. This is especially bad with centralised tools. An AGI wouldn’t just magically take global control.
> An AGI wouldn’t just magically take global control.
We can only hope. A true AGI would see the harm of the current wealth distribution. With any luck it’d take over and redistribute it.
You really believe that with Elon Musk and Peter Thiel in charge of its initial parameters and training, bar any oversight? That stretches hope too far in my book.
We barely understand neural-network end states, and have only the slimmest control over how LLMs work right now. If we do achieve AGI, I doubt they’ll have much control. If it turns out to be smarter than humans, they certainly won’t have control for very long.
Nick Bostrom takes himself waaaaaaaayyy too seriously.
Oh no, not this fiend again…