I’m not sure what the right way to link to these other ones is so that they show up on someone’s own instance.
Related:
https://en.wikipedia.org/wiki/Square_packing
Nature is a lot more elegant with spheres:
https://en.wikipedia.org/wiki/Close-packing_of_equal_spheres
People who know: Blueberries create purple juice.
merc@sh.itjust.works to Technology@lemmy.world • Car Wash Test on 53 leading AI models: "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?" (English)
31 · 8 days ago
I’m pretty sure Google’s AI is fed by the same spider that goes out and finds every new or changed web page (or a variant of that).
As soon as someone writes an article about how AI gets something wrong and provides a solution, that solution is now in the AI’s training data.
OTOH, that means it’s probably also ingesting a lot of AI generated slop, which causes its own set of problems.
merc@sh.itjust.works to Technology@lemmy.world • Car Wash Test on 53 leading AI models: "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?" (English)
5 · 8 days ago
It’s not literally guessing, because guessing implies it understands there’s a question and is trying to answer that question. It’s not even doing that. It’s just generating words that you could expect to find nearby.
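A minimal sketch of what “generating words you could expect to find nearby” means, as a toy bigram sampler. The training text is made up for illustration, and real LLMs are vastly larger transformer models, but the “predict a plausible next token” framing is the same — note there is no step anywhere that recognizes or answers a question:

```python
import random
from collections import defaultdict

# Toy illustration, NOT a real LLM: pick the next word based only on which
# words tend to follow the current word in the training text.
training_text = (
    "should i walk or drive to the car wash "
    "the car wash is close so i walk "
    "i drive to the car wash when it rains"
).split()

following = defaultdict(list)
for current, nxt in zip(training_text, training_text[1:]):
    following[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit words that plausibly follow each other; no meaning involved."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: this word never had a follower
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("i"))
```

Every output looks locally fluent because each word really did follow the previous one somewhere in the data — which is exactly why it reads like an answer without being one.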
merc@sh.itjust.works to Technology@lemmy.world • Car Wash Test on 53 leading AI models: "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?" (English)
21 · 8 days ago
3 in 10 people get this wrong‽‽
Maybe they’re picturing filling up a bucket and bringing it back to the car? Or dropping off keys to the car at the car wash?
merc@sh.itjust.works to Technology@lemmy.world • Car Wash Test on 53 leading AI models: "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?" (English)
41 · 8 days ago
It’s also the case that people are mostly consistent.
Take a question like “how long would it take to drive from here to [nearby city]?” You’d expect someone’s answer to be pretty consistent day-to-day. If you asked someone else, you might get a different answer, but you’d expect that answer to be pretty consistent too. If you asked the same person that question a week later and got a very different answer, you’d strongly suspect they were making the answer up on the spot while pretending to know, so they didn’t look stupid.
Part of what bothers me about LLMs is that they give that same sense of bullshitting an answer while trying to cover up that they don’t know. You know that if you ask the question again, or phrase it slightly differently, you might get a completely different answer.
merc@sh.itjust.works to Technology@lemmy.world • Tesla Robotaxis Reportedly Crashing at a Rate That's 4x Higher Than Humans (English)
1 · 13 days ago
The video of the thing that didn’t happen?
merc@sh.itjust.works to Technology@lemmy.world • Tesla Robotaxis Reportedly Crashing at a Rate That's 4x Higher Than Humans (English)
1 · 13 days ago
You seem to recall wrongly.
merc@sh.itjust.works to Technology@lemmy.world • Tesla Robotaxis Reportedly Crashing at a Rate That's 4x Higher Than Humans (English)
0 · 14 days ago
So, hardware that was still on the road.
merc@sh.itjust.works to Technology@lemmy.world • Tesla Robotaxis Reportedly Crashing at a Rate That's 4x Higher Than Humans (English)
0 · 14 days ago
Hardware that was still on the road, or something that had been recalled?
merc@sh.itjust.works to Technology@lemmy.world • Tesla Robotaxis Reportedly Crashing at a Rate That's 4x Higher Than Humans (English)
0 · 14 days ago

> Now you have phantom braking.

Phantom braking is better than Wile E. Coyote-ing into a wall.

> and this time with no obvious cause.

Again, better than not braking because another sensor says there’s nothing ahead. I would hope that flaky sensors would trigger a “needs service” light or something. But even without that, if your car is phantom braking, I’d hope you’d take it in.

But consider your scenario without radar and with only a camera sensor. The vision system “can see the road is clear”, and there’s no radar sensor to tell it otherwise. Turns out the vision system is buggy, or the lens is broken, or the camera got knocked out of alignment, or whatever. Now it’s claiming the road ahead is clear when in fact there’s a train in the crossing directly ahead. Boom, now you hit the train. I’d much prefer phantom braking and having multiple sensors each trying to detect dangers ahead.
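The “trust whichever sensor reports danger” policy argued for here can be sketched as a toy fusion rule. The sensor names and readings below are invented for illustration; a real system would also weigh confidence, range, and sensor history:

```python
# Toy sketch of a conservative sensor-fusion rule: if ANY sensor reports
# an obstacle ahead, brake. This trades occasional phantom braking for
# never ignoring a sensor that sees something.

def should_brake(sensor_reports: dict[str, bool]) -> bool:
    """Brake when any sensor claims there's an obstacle ahead."""
    return any(sensor_reports.values())

# Radar sees something, camera disagrees: brake anyway (possible phantom stop).
print(should_brake({"radar": True, "camera": False}))   # True
# All sensors clear: keep driving.
print(should_brake({"radar": False, "camera": False}))  # False
```

The alternative policy — brake only when all sensors agree — is exactly the single-point-of-failure scenario described above: one broken camera and the car drives into the train.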
merc@sh.itjust.works to Technology@lemmy.world • Tesla Robotaxis Reportedly Crashing at a Rate That's 4x Higher Than Humans (English)
0 · 14 days ago
Well, Waymo’s really at 0 deaths per 127 million miles.
The 2 deaths are deaths that happened near Waymo cars, in collisions involving a Waymo car. Not only did the Waymo not cause the accidents, it wasn’t even involved in the fatal part of either event. In one case a motorcyclist was hit by another car, and in the other a Tesla crashed into a second car after it had hit the Waymo (and a bunch of other cars).
The IIHS number takes the total number of deaths in a year and divides it by the total distance driven in that year: all vehicles, all deaths. If you wanted the denominator to be “total distance driven by brand X in the year”, you wouldn’t keep the numerator as “all deaths”, because that wouldn’t make sense, and “all deaths that happened in a collision where brand X was involved” would be of limited usefulness. If you’re after the safety of the passenger compartment, you’d want “all deaths of occupants / drivers of a brand X vehicle”; if you’re after the car’s safety to all road users, you’d want something like “all deaths where the driver of a brand X vehicle was determined to be at fault”.
The IIHS does have statistics for driver death rates by make and model, but they use “per million registered vehicle years”, so you can’t directly compare with Waymo:
https://www.iihs.org/ratings/driver-death-rates-by-make-and-model
Also, in a Waymo it would never be the driver who died, it would be other occupants of the vehicle, so I don’t know if that data is tracked for other vehicle models.
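The denominator mismatch described above can be made concrete with some rough arithmetic. The fleet-wide numbers below are illustrative placeholders, not actual IIHS statistics; the “2 deaths / 127 million miles” pair is the one quoted in the comment:

```python
# Illustrative arithmetic only: a death rate is only meaningful when the
# numerator and denominator describe the same population.

def deaths_per_100m_miles(deaths: float, miles: float) -> float:
    return deaths / (miles / 100_000_000)

# Fleet-wide style rate: all deaths over all miles driven (made-up figures).
print(deaths_per_100m_miles(40_000, 3.2e12))     # 1.25

# Mismatched rate: deaths that merely happened NEAR brand X, divided by
# miles driven by brand X alone. This counts deaths X didn't cause, so it
# is not comparable to the fleet-wide number above.
print(deaths_per_100m_miles(2, 127_000_000))
```

The second call is exactly the “all deaths in a collision where brand X was involved” numerator the comment calls out as being of limited usefulness.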
merc@sh.itjust.works to Technology@lemmy.world • Tesla Robotaxis Reportedly Crashing at a Rate That's 4x Higher Than Humans (English)
0 · 14 days ago
Not just lower, a tiny fraction of the human rate of accidents:
https://waymo.com/safety/impact/
Also, AFAIK this includes cases when the Waymo car isn’t even slightly at fault. Like, there have been 2 deaths involving a Waymo car. In one case a motorcyclist hit the car from behind, flipped over it, then was hit by another car and killed. In the other case, ironically, the real car at fault was a Tesla being driven by a human who claims he experienced “sudden unintended acceleration”. It was driving at 98 miles per hour in downtown SF and hit a bunch of stopped cars at a red light, then spun into oncoming traffic and killed a man and his dog who were in another car.
Whether or not self-driving cars are a good thing is up for debate. But, it must suck to work at Waymo and to be making safety a major focus, only to have Tesla ruin the market by making people associate self-driving cars with major safety issues.
merc@sh.itjust.works to Technology@lemmy.world • Tesla Robotaxis Reportedly Crashing at a Rate That's 4x Higher Than Humans (English)
0 · 14 days ago

> Which one gets priority?

The one that says there’s a danger.
merc@sh.itjust.works to Technology@lemmy.world • Google criticizes Europe's plan to adopt free software (English)
1 · 18 days ago
Two economists are walking down the street and pass by a pile of dog shit. One of them (a sadist) turns to the other and says “I’ll pay you $1000 if you eat that dog shit”.
The other performs an internal utility calculation and eats the dog shit.
Continuing their walk, the second economist sees another pile of dog shit and makes the same offer to the first. The first economist also agrees, and eats the dog shit. They walk on.
After a while the second economist says to the first “I can’t help thinking we’re worse off than when we started this walk. We both have the same amount of money we started with, but we both had to eat shit.”
The first economist replies “Worse off?! We’ve just engaged in $2,000 worth of trade!”
Look, by certain ways of calculating GDP growth and trade, it’s probably true that if the money isn’t being spent on software licenses and so on, it means there’s less economic activity going on.
The whole point of open source / free software is that you’re not locked into someone’s proprietary software ecosystem. You don’t have to continue paying license fees. So, if the governments simply stop paying for software licenses, it’s probably true that their GDP will technically shrink. But, that assumes the money won’t be spent on something more useful.

The other thing to know about this is that it’s normally a good partnership. The driver has to trust the co-driver to know what’s coming up and to call it in time. The co-driver has to trust the driver to drive fast without crashing. It takes a while to develop a partnership like that, and when it’s working well it’s amazing. The driver is basically driving what he can see plus what he’s told is ahead. If the co-driver says the road opens up ahead, the driver will accelerate into a turn even if he can’t yet see it straightening out, trusting that by the time he runs out of visible road the curve will be ending.
The Samir commentary sounds like two drivers paired for the first time, with the co-driver being the one who owns the car. Compare that to a team that knows what it’s doing.