Over the past few weeks, several US banks have pulled out of lending to Oracle for expanding its AI data centres, as per a report.
This looks desperate. They already sold $300B worth of data center capacity to OpenAI and this move will save them up to $10B.
They’re supposedly depreciating their GPUs over 7 years. Apparently these data center GPUs are only used for about 3: the generational improvements in efficiency almost dictate you replace them, and the first stragglers start failing around that mark anyway.
They could actually be proper fucked financially if they’re cooking their books like that to seem more profitable than they are.
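To see how stretching the depreciation schedule flatters earnings, here's a toy straight-line calculation. The $10B fleet cost is a made-up number for illustration, not a figure from the article:

```python
# Hypothetical illustration: straight-line depreciation of a GPU fleet.
cost = 10_000_000_000  # assumed fleet cost in dollars, purely illustrative

annual_expense_7yr = cost / 7  # yearly expense booked on a 7-year schedule
annual_expense_3yr = cost / 3  # yearly expense a 3-year useful life implies

# Booking the longer schedule understates annual expense, so reported
# profit is overstated by the difference every year the fleet is young:
overstated = annual_expense_3yr - annual_expense_7yr
print(f"7-year schedule: ${annual_expense_7yr / 1e9:.2f}B/yr")
print(f"3-year schedule: ${annual_expense_3yr / 1e9:.2f}B/yr")
print(f"Profit overstated by: ${overstated / 1e9:.2f}B/yr")
```

The catch comes at year 3: if the hardware really is retired then, the remaining book value has to be written off all at once.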
The bubble popping seems inevitable at this point. Before, the giants were funding this with their core business plus loans backed by their core business. Now they’ve stretched their credit so thin that no one’s giving them loans anymore, and instead of cutting back on the building spree they’re making cuts to their core business.
They’re betting that their customers are so locked in that they won’t leave despite degradation in service. How deep Oracle’s, AWS’s, and Google’s hooks are in people remains to be seen; people seem to tolerate a lot of enshittification, but there’s gotta be a tipping point. Once they reach that and the core business crashes, all the rest of the dominoes will fall.
That is why they are trying so heavily to peddle this to governments in the EU and USA: they know governments will take AI at face value instead of testing its efficacy.
Once these companies have to start charging what it really costs to maintain and run these huge models, the number of use cases will shrivel.
Models are becoming more optimized. I’ve recently tried the small version of LFM2.5, and it’s ridiculously close in usefulness to Qwen3.5, for example. Or RNJ-1.
To maintain, meaning keeping datasets current - well, sort of expensive, but they were assembling those as a side effect of their main businesses anyway.
So this is not what’ll kill them. Their size will. These are very big companies with lots of internal corruption and inefficiency pulling them down. As for the newer AI companies centered around specific products: some will die, but I think a few are going to survive - I’d expect LiquidAI or Anthropic or such to still be around some time after the crash.
The crash might coincide with a bubble burst, but notice how this family of technologies really is delivering results. Instead of a bunch of specialized applications, people are asking LLMs and often getting good-enough answers. LLM agents can retrieve data from web services, perform operations, and assist in using tools.
You shouldn’t look at the big ones in the cloud, but rather at what value local LLMs give you for the energy spent. Right now it’s not that good, but honestly approaching good. I don’t feel like they’ve stopped getting better. Human time is still more expensive. The tools are there and being improved, and humans are slowly gaining experience in using them, which makes them more efficient at various tasks.
They’re to all kinds of reference and knowledge tools what Google was to search.
And there’s one just amazing thing about these models - they are self-contained, even if some can use tools to access external sources. Our corporate overlords have been building a dependent, networked world for 20 years, only to break it by popularizing a technology that almost neuters it. They were probably thinking they were reaping the crops of the web for themselves; instead they taught everyone that you don’t have to eat at the diner, you can take the food home.
Only people who know very little about a field feel like AI “is good enough” for that field. Experts in a field will universally say that AI is shit in their field.
LLMs are the extreme example of “the dumb man’s idea of a smart man.” It sounds like it knows what it’s talking about so people ignorant on the subject don’t know it’s full of shit.
I agree with you, and I consider it similar to the ‘Hollywood effect’: ask any expert to review typical depictions of their expertise in film and TV and they will mostly groan at the inaccuracies that most people won’t catch.
Problem is, if you compare the works that do it ‘right’ to the ones that do it ‘wrong’, there’s no correlation between doing it right and being more popular; the horribly wrong depictions get plenty of ratings regardless.
Now one might reasonably argue ‘sure, but that’s purely fiction anyway; if it had real consequences, that would actually matter’, except it constantly happens in real-world situations.
My work colleague picked up his car from some mechanic chain after having it ‘fixed’ and took us to lunch. There was this awful squeal as he started the car, and I asked why it was making that noise right after getting fixed. He said “Oh, the staff told me that cars just sound like that after a repair until the parts break in”, and that bullshit worked to get him to pay and walk out the door. I asked if I could take a quick look under his hood, and there was a flashlight wedged against a belt. He just laughed it off and said “hey, free flashlight, thanks for figuring that out”, and a few months later he mentioned going back to the exact same place for something else.
A few days ago I went to a hardware store; their site said they had the item, but under location it said “see associate”. The first associate checked his device and didn’t understand what the deal was, so he said “Oh, go over there and ask John, he knows all this stuff”.

Ok, so I walk over to John, who takes one glance and confidently says “oh yeah, that stuff is in a cage in the back row, locked up; just go up to the cage and press the button to get someone to get it”. I think “ok, good, a guy who really knows his stuff, and the other staff recognize him for it”. I roll up to the cage, look in, and realize “uh oh, this is not the type of stuff I’m looking for, he made a pretty amateur mistake”, but I push the button anyway.

I show my phone to the guy who comes up and say that “John” told me it would be here but I couldn’t see it. At the mention of “John” the guy clearly rolled his eyes, and it was abundantly clear that John’s “expertise” was a repeated annoyance for him. The actual answer is that they kept that stuff in the back, and the employees are all supposed to see the notation in their devices telling them this, but none of them seem to figure it out, and John just keeps sending people to his department instead.
This has also come out in use of AI. I offered that my group could crank out a quick tool to handle something that could be a problem, and one of the people said “in this new era, we don’t need you for this quick tool, I just asked Claude and it made me this application”. So I tested it and reported that (a) it didn’t actually work: it produced stuff that looked right, but the actual tool wouldn’t accept it because it didn’t use the right syntax, and (b) even if it had worked, it faked authentication and had a huge vulnerability. He just laughed it off and said ‘guess LLMs sometimes aren’t perfect yet’. No consequences for what could have been a disastrous tool, no real change in stance on using LLMs, and I’m pretty sure the audience found the report that it didn’t work to be an annoying buzzkill and were rooting for the LLM to do all the work instead. People who need your expertise are desperate to not need it anymore, are willing to believe anything that enables that, and will accept a lot of badness just to not be dependent on you.
AI produces what is seen as a plausible narrative, and a plausible narrative can win even when the facts are against it. To be very charitable, a quick “usually correct” answer is indeed frequently good enough for a lot of purposes, and LLMs’ speed at generating output can’t be beat.
“A bad craftsman blames his tools” is what I’d answer to this.
Perfect time for some foreign company to eat Oracle’s lunch.
What does Oracle even do?
Create a db that sucks so bad you have to hire them to maintain it.
Sues their customers
Charges people who accidentally used their Java SDK.
They sell software that sits so deep in people’s stack that replacing it takes tons of effort. Companies calculate that it’s cheaper to keep paying Oracle than to rewrite crucial services.
I’ve been a software engineer for over 20 years now and tbh I couldn’t tell you even if my life depended on it. I know it’s a shit-tier hosting service that people use because they offer a $5 virtual server for free with a valid credit card, but that’s about it.
It’s one of those ancient paper-shuffling IT companies that is 95% sales/middle-manager leeches and 5% wizard engineers carrying everything on their shoulders.
Sounds a lot like IBM.
At least IBM used to be cool and gave us things like SQL, DRAM and ThinkPads. Other than Java kits I couldn’t name a single useful initiative from Oracle. They just take existing inventions and shuffle enterprise papers.
Used to be cool… you might want to look a little bit further into IBM’s past, specifically what they were doing during WW2…
All is justified if it gave us Thinkpad 🙂↕️
Is this hyperbole? I really doubt someone can be a SWE for even 2 years and not know what Oracle does…
Data centers, apparently, especially for AI.
This is technically job loss caused by AI…
More accurately, it’s caused by AI mania, not AI proper directly, but yeah.
Note that the article is from the beginning of February.
Why did it take so long? You’d think these articles are staggered at the request of the owner, so the small army he fired doesn’t revolt.
Fuckin explains a lot of the last month’s issues
Pop that bubble, baby!
This is the fascinating thing about this bubble. Usually people suspect or perceive a bubble and are afraid of when it pops, but no one really wants it to pop; they just don’t like the fragility of knowing it could pop any minute.
So many people actively want the AI bubble to pop. I can’t recall a bubble so odious that everyone was rooting for it to hurry up and fail before.
One rich asshole called Larry Ellison.
which is being encouraged by Jensen
YEEEEESSSSSS~
Real jobs or AI jobs.
It’s Oracle: It’s not like they deliver value either way.
Real Temporary Jobs building and managing the AI Datacenters.
That’s a good point to remember when future job numbers are shared.
Pull out, don’t pull out, we’re fucked either way.
old news!