Horseshit. They’ve now hit 4 schools and 13 hospitals. It’s all intentional. The US military is a terrorist organization.
There are no mistakes, just happy accidents
It’s not complicated; if you ‘delegate’ a war crime to the yes machine, you remain the war criminal.
Nope, this was 100% human intent.
They’re making AI an excuse for their cockups. I believe that’s why the Pentagon is partnering with OpenAI/Anthropic: to be their scapegoats.
It’s crazy how they think people are stupid, sanewashing us right to our faces.
The entire point of military use of AI is ‘plausible’ deniability, so they can deflect responsibility as they increase their civilian targeting. Israel mechanized this strategy in Gaza with their Lavender and Where’s Daddy AI systems.

And my favourite edit:

Just replace ‘management’ with ‘military’ and you’ve got what ishell and the merkins, with the support of the epstein class, are doing
Which of course was turned into:
“A computer can never be held accountable, that’s why it’s perfect for management decisions”
Quite possibly one of the best things that ever came out of IBM, and one of my favorite products of 1979 (along with “Alien”). Of course, Weizenbaum had a thing or two to say about the ELIZA effect back in 1966. As usual, nobody listened.
AND IT WILL FAIL YOU
Counterargument: 99% of computer “misbehaviour” that isn’t a hardware issue is actually PEBKAC.
Your failure to use a computer =/= computer failing you
The problem: a school full of kids gets bombed into nonexistence, as the result of a program working as intended. A neural network designed by a militaristic surveillance regime to harvest information and spit out ‘likely’ targets for subjugation, one that by design cannot know any part of reality outside the information it’s given, sits pretty fucking far away from the keyboard.
These AI systems are controlled. They are managed. They are being used as a shield and a scapegoat by the people who developed them and sold them to the militaries of the world as a perfect target-acquisition solution, when it has been proven time and time again that they can’t even tell people apart or do simple tasks without making shit up.
I’ve seen this same “well, there’s no way the way I use an AI is bad in any way, must be a you problem” attitude when talking about an LLM driving someone off the rails or to suicide, and I’m tired of hearing it used as an argument.
I think you’re overshooting with your response.
My statement had nothing to do with LLMs, or targeting Iranian schools. I was simply responding to the previous statement that a computer will fail you.
Computers don’t fail people. They’re precision instruments, and anything short of a cosmic ray bit-flip, or hardware failure, will not result in a failed execution of its instructions. Therefore the computer didn’t fail YOU - YOU failed to provide adequate instructions for it to do what you wanted.
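To illustrate the point with a toy sketch (entirely hypothetical Python, not tied to any real system): the function below is executed faithfully every single time, and the “failure” is a case the human simply never specified.

```python
def average_above_threshold(values, threshold):
    """Intended behaviour: average of all values strictly above threshold."""
    selected = [v for v in values if v > threshold]
    # The machine executes this line exactly as written, every time...
    return sum(selected) / len(selected)

print(average_above_threshold([4, 8, 15], threshold=3))   # 9.0, as intended
print(average_above_threshold([4, 8, 15], threshold=99))  # ZeroDivisionError
```

The second call crashes not because the computer misbehaved, but because the instructions never said what to do when nothing clears the threshold. Same deterministic execution; inadequate specification.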
Even in your reply you specified that these neural nets are being given specific information to train target selection on. Sure, the AI is nowhere near the keyboard (well, it actually is, but more on that later), but then again, WHO fed the AI the training data? WHOSE decision resulted in the AI spitting out a school as a viable bombing target? Ultimately a human sits at the top of the chain, even if multiple automated AI systems collated, labelled, sorted and managed the training dataset. The reason those highly advanced neural nets didn’t work the way they were expected to (even accounting for the NN black-box effect) is, ultimately, human failure.
Mind you, by “failure” here I simply mean expected outcome vs. what actually happened. The expected outcome being the AI providing 100% viable combat targets: active combatants, military bases, etc., not a school full of children. Because while the cheetoh in chief might be a raging narcissistic child rapist bastard, I doubt most of the rest of the US armed forces agree with bombing children, so the original goal of the AI had to be to provide said viable targets. Therefore there had to be a human component that provided the data that skewed the targeting, and that human component was most definitely sitting in a chair in front of a keyboard…
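The mechanism is easy to demonstrate on toy data (a hedged sketch: synthetic numbers and scikit-learn, nothing to do with any actual targeting system). Poison a fraction of the training labels and the model’s output dutifully reflects the poisoning:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two made-up populations in a 2-D feature space:
# "combatant" sites clustered around (2, 2), "civilian" sites around (-2, -2).
combatant = rng.normal(loc=2.0, scale=1.0, size=(500, 2))
civilian = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))

X = np.vstack([combatant, civilian])
y = np.array([1] * 500 + [0] * 500)  # 1 = "valid target", 0 = "not a target"

# Simulate a skewed labelling pipeline: a human (or an upstream automated
# system) marks 30% of civilian sites as valid targets before training starts.
flipped = rng.random(500) < 0.30
y[500:][flipped] = 1

model = LogisticRegression().fit(X, y)

# Score fresh, unambiguously civilian sites the model has never seen.
new_civilian = rng.normal(loc=-2.0, scale=1.0, size=(1000, 2))
score = model.predict_proba(new_civilian)[:, 1].mean()
print(f"mean 'valid target' score for purely civilian sites: {score:.1%}")
# With clean labels this sits near zero; with the poisoned labels it
# climbs toward the 30% noise rate. Garbage in, garbage out.
```

The network does exactly what its training data told it to; the “decision” was made upstream, by whoever assembled the labels.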
Exactly. The invention of the “corporation” under capitalism served as a means to negate economic responsibility; now they have invented AI to negate operative responsibility.
I feel like the people who created and sold these programs should be considered no different than people who create a biological or nuclear weapon of mass destruction. It’s working as intended and the people who created, enabled, and used it should be held accountable.
But the destruction is not the fault of the technology; it’s the fault of the people who used it to create a weapon of mass destruction while fighting global AI treaties and regulations in favor of greed and power.
Nuclear material has the potential to create something that can destroy the world, but it also has the potential to create something that could save humanity depending on how it’s used by the people who possess the material.
Biolabs have used pathogens with pandemic potential to make weapons that destroy, but they also created the first vaccines against those pathogens, and eventually developed methods to create non-live vaccines.
Proper use of AI would require transparency and regulations that place the good of all humanity before the good of the individual nation or corporation. It would be difficult to achieve, but not impossible. Destructive use seems less inherent to the technology itself than to human traits like greed and selfishness being permitted by society.
The LLMs that are designed to manipulate people into continuing to use a product sit somewhere between cigarettes/gambling and a gun. I think they’re definitely harmful and require regulations and restrictions. At the bare minimum there should be some kind of mandatory warning label, or a link to reach out for help, always included at the bottom of the screen, just to remind people of the reality of what they’re using when they use it.
I honestly kind of hate them, but I also don’t think we need to try and banish them from society even if they don’t really have the same potential for improving humanity. I think at best they serve as time savers the same way using a calculator to do simple math saves us time, but also makes us a little dumber/less skilled in the long run.
I remember the countless excuses for similar attacks on schools and hospitals.
The fault is either organizational, where no single person can be blamed, or it is the action of a low-ranking individual who takes the fall.
Often it is the fault of the enemy, who “used a children’s hospital as a meeting point”, so it is clearly their fault!
Just read up on the 2015 Kunduz hospital airstrike for one example; the Lions Led By Donkeys podcast has a good episode on it.
Wow that’s crazy. Software problem. Nothing you can do about that I guess. It is what it is. Could happen to anyone, really. Those who have never murdered a school full of children can throw the first stone and such.
Guess Iran really does get to throw the first stone.
You mean to say that the machine that always agrees with you and is confidently wrong all the time did what it was built to do?
How convenient! Make no mistake, it was completely intended.
Of course it fucking did. This is what happens when you rely on AI to automate your war crimes.
The Lavender precedent: automated kill lists and the limits of International Humanitarian Law
In late 2024, the UN verified that nearly 70% of those killed were women and children (Farge 2024). AOAV’s data on civilian harm following explosive weapon use in Gaza puts that percentage even higher. Evidence recorded in a classified Israeli military database in May 2025 revealed that only 17% of the 53,000 Palestinians killed in Gaza were combatants, implying that 83% were civilians (Graham-Harrison & Abraham 2025).
LOL. I am pretty sure this wasn’t an AI mistake. It wasn’t a mistake at all. It was intentional.
Well, those chatbots failed to run a vending machine, but they want to use them for autonomous weapons.
IMO they are setting things up so if a tactical nuke somehow makes it to the battlefield nobody will be responsible.
The only thing this tech is good for is plausible deniability