Just assuming this is all true (i.e. that AI can produce both good and bad code), why would Linux development be able to succeed at something that Microsoft (which has an inside track with AI, far more money, and far more maturity) failed at?
The same reason any personal project (not to diminish what Linux projects are, but the people working on them do it because they want the project to progress, not because of any financial incentive) can do better than a commercial one: that’s where the passion is.
Someone just looking to get paid is more likely to say “ok this is good enough” and move on to the next thing. They are more likely to have managers breathing down their necks to get something done by some arbitrary deadline, too.
It’s why indie games have been able to compete with AAA games. The latter follow a formula to get paid, and are more willing to make compromises in the name of saving costs or increasing revenue. The former just want to make their fun idea a reality.
Also, MS has invested a ton of money into AI and seems to be getting desperate for a return on it. Which means there’s a certain amount of denial about the quality. It’s not just a tool to them, but a tool they desperately need to work and to prove it was worth throwing a ton of money at.
But for anyone who treats it as simply a tool, it can be useful. LLMs are great rubber duckies. My last interaction with one was a case where it did horribly and was completely wrong about what “we were discussing”, but I still got to the right conclusion despite it, because going through the conversation helped me think it through.
And though it makes a lot of mistakes, its feedback isn’t always wrong. The fact that it can rehash things from its history means it’s good at spotting new instances of problems that have already been solved. So accepting bug reports should be fine, with the understanding that each one needs to be looked at and some will need to be rejected because they are wrong.
Weird, everybody talking about using AI for videogames seems to praise its ability to speed up the process of developing things. Big studio after studio getting caught with placeholders and whatnot. Does that really make your point? Because it seems to do the opposite.
My point isn’t that AI is good or bad, but that the difference lies in how heavily it gets leaned on.
In this case, it’s how AI is (assumed to be) used at MS vs how it was used in the OP.
MS appears to be leaning heavily on generative AI to produce code. In my own experience, AI is pretty good these days at responding to a prompt with a series of actions that achieves the intent of that prompt, but bad at creating overall cohesion across prompts. It’s like it’s pretty good at making Lego blocks, but if you try putting them all together, it looks like you built something from 50 different sets, and the connections between the blocks are flawed enough that the whole thing is liable to collapse the more you add.
In the OP, AI is being used to submit bug reports. This one can be thought of as using an AI to write a book report instead of using an AI to write the book in the first place. If the AI writes a shitty report, it has zero effect on the book itself. But the AI might just include a list of all the typos in its report, which is useful for correcting the errors in the book.
Also, game studios forgetting to replace placeholders is yet another issue with the process itself, though it can also show a lack of attention to detail and maybe indicate that an AI was handling more of the process. A decent system would flag every asset as either placeholder or final and then include a review of all flags before publishing to catch something like this.
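To make the idea concrete, here’s a minimal sketch of that flag-and-review step in Python. Everything here (the `Asset` type, the manifest, the file names) is hypothetical, not from any real studio’s pipeline; the point is just that a publish step can mechanically refuse to ship while any asset is still flagged as a placeholder.

```python
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    placeholder: bool  # set True when a temporary asset is checked in


def prepublish_check(assets: list[Asset]) -> list[str]:
    """Return the names of assets still flagged as placeholders."""
    return [a.name for a in assets if a.placeholder]


# Hypothetical manifest: one final asset, one placeholder someone forgot.
manifest = [
    Asset("title_screen.png", placeholder=False),
    Asset("boss_theme.ogg", placeholder=True),
]

flagged = prepublish_check(manifest)
if flagged:
    print("Cannot publish, placeholders remain:", flagged)
```

The review is then just “is this list empty?”, which catches the forgotten-placeholder failure regardless of whether a human or an AI produced the asset.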
So this isn’t a general defense of using AI, I’m just saying that it’s possible to use it without everything it touches turning to slop, but that it often isn’t used like that, resulting in slop.
And it’ll be easy to fall into the slop trap, what with the way it keeps making leaps-and-bounds improvements that help with individual instances of it fucking up but don’t resolve the fundamental issues. Those issues probably mean LLMs will always produce some sort of slop, because everything boils down to a kind of word association, just with a massive set of conditional probabilities encoded into it that gives the illusion of understanding.
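The “word association with conditional probabilities” framing can be illustrated with a deliberately tiny toy: a bigram model that predicts the next word purely from how often each word followed the current one in its training text. (The corpus here is made up for the example, and real LLMs are vastly more sophisticated; this just shows the frequency-based core of the idea, with no understanding anywhere.)

```python
from collections import Counter, defaultdict

# Toy training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words followed it and how often.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1


def predict(word: str) -> str:
    """Return the word that most frequently followed `word` in the corpus."""
    return follows[word].most_common(1)[0][0]


print(predict("the"))  # "cat" followed "the" twice, more than any other word
```

Scale the conditioning context from one word up to thousands of tokens and the associations start to look like understanding, but the mechanism is still picking likely continuations.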
If you take a step back, why would Linux development be able to succeed at all when Microsoft has far more money, more maturity, and more employees?
@ExperiencedWinter@lemmy.world, my question was simple: do you have a reason to assume Linux developers will succeed?
Instead, you’ve jumped to whataboutisms and misdirections (Linux exists, therefore…?), even trying to shift the burden of proof back onto the skeptic.
If you can’t back up your opinion with evidence, say so from the beginning.
(Edit: Invalid comparison and misdirection.)
Sure, if you only focus on the desktop market I guess you could make that argument, but IDK why you would ignore servers and phones. There are plenty of examples of Linux kicking Microsoft’s ass. You think Microsoft is happy they don’t sell server licenses for every server on earth? What about Android?
What about Android…?
Sure, what about Google?
Do you actually have a reason Linux will be able to pull off using AI when Microsoft cannot, or is your sole argument that Linux has done other things? Because that’s not how proof works.
My argument is they are different groups of people with completely different incentive structures, so of course they will be different. You’re acting like Microsoft is failing because they use AI, not because they have management forcing the use of AI.
I’m definitely not an “AI is going to write all the code” kind of person, but LLMs are definitely a useful tool for prototyping and other development processes. A project with a “No AI” rule is not inherently better than a project that uses AI as a tool.
If your claim is baseless, don’t fight to make it.
Ok, let’s flip this around. You made the claim that “Linux development can’t succeed where Microsoft fails”, which seems pretty baseless and historically incorrect to me. But if you just want to keep trying to “win” this interaction and don’t want to have a conversation, I guess there’s nothing left to say.
No, I asked why anyone would assume Linux developers would get anywhere with AI, looking for anybody with a legitimate reason and not baseless speculation.
(Attempted burden of proof shift.)
Motivation is a powerful influence on development. The Linux kernel is largely driven by UX and a desire for technical excellence (there are ulterior motives from some major factions, but overall this is true, and actions are judged publicly as such).
Microsoft is, like most companies, driven by stockholder value creation.
One produces an environment in which cautious adoption of new tech is constant: a slow trickle of use where it seems most applicable.
The other demands that the perception of exclusive capital be created through vertical integration with proprietary IP, and that the promise of cost reductions be seen as underway. A.k.a. Microslop trying to add a buzzword to every IP (perceived capital creation) and promising massive layoffs.
Microsoft has had a ton of resources for decades and has sucked at the most basic stuff the whole time. Not taking a stance on AI usage here, just saying that a company having more money is rarely connected to the quality of the product it creates and, in fact, chasing profits often leads to products being worse.
Could be a lot of reasons. A big one I see, working at a large company myself, is that AI needs to draw from a lot of data to do its work, including a huge amount of contextual data. A company like MSFT inevitably needs to provide AI with a walled-off, curated set of data and prevent any of it from leaking. Its AIs will not have the same amount of data to draw from as an AI outside MSFT.
Leaking? Microsoft basically owns OpenAI. They pull the data in and don’t need it to go out. The whole industry is fighting to close off competition, which means they know they’re on top.
So do you have any reason to assume the open-source community’s use of these (closed-source) other models is somehow bucking all real-world evidence to the contrary, or are we just hoping and praying?