• 0 Posts
  • 6 Comments
Joined 2 months ago
Cake day: January 15th, 2026


  • It’s a really dumb way to frame what the OpenAI people actually said - they are saying that applicants want to know how many tokens they can use as a tool to accomplish the job they are applying for. There’s a fundamental difference between that and compensation: tokens as compensation would mean tokens the applicants could use for their own purposes, whatever those may be.

    To illustrate - I would probably be reluctant to work for a company that wasn’t willing to spend the money to get me a more or less top-of-the-line computer with which to perform my job. Not because I consider my company-provided development machine part of my compensation - it is merely a tool I use for my job.

    The people applying for these jobs are the kinds of people who believe that burning an exorbitant number of tokens will make them significantly more productive, so the metaphor of having the best tools available for the task at hand extends here, in accordance with their belief system.

    There’s then the quote from the VC ghouls, but I don’t think anyone could accuse them of being competent to any significant degree, so their quotes are most appropriately used as toilet paper.


  • Sure, but that can be said about almost anything.

    Still, I’d be surprised if they went the route of embedding ads into the stream itself, in part because of measurability, skippability, etc. It’s definitely not out of the question, but I think we’re still a ways away from that.

    And even then, tools like yt-dlp would probably be able to apply some heuristics to figure out which segments are foreign to the stream and slice them out that way. Blocking yt-dlp would require DRM, which in turn requires changing the transcoding pipeline in a pretty non-trivial way. I also doubt they would willingly go this route.
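    A minimal sketch of one such heuristic, assuming server-stitched ads vary between downloads while the actual content segments stay identical - diff the segment hashes of two downloads of the “same” stream and whatever doesn’t match is likely foreign. All names here are hypothetical illustrations, not yt-dlp’s actual API:

    ```python
    from difflib import SequenceMatcher

    def foreign_segments(download_a, download_b):
        """Return index ranges in download_a whose segments don't appear in
        download_b - candidates for server-stitched ads to slice out."""
        matcher = SequenceMatcher(a=download_a, b=download_b, autojunk=False)
        foreign = []
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if tag != "equal":  # segments unique to this download: likely ads
                foreign.append((i1, i2))
        return foreign

    # Mock per-segment hashes: ad segments differ between the two downloads.
    a = ["s0", "s1", "ad1", "ad2", "s2", "s3"]
    b = ["s0", "s1", "ad9", "s2", "s3"]
    print(foreign_segments(a, b))  # → [(2, 4)]
    ```

    A real implementation would of course need fuzzier matching (re-encoded segments won’t hash identically), but the core idea - content is stable across downloads, ads are not - stays the same.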