Reflections on OpenAI

(calv.info)

697 points | by calvinfo 1 day ago

81 comments

  • hinterlands 1 day ago
    It is fairly rare to see an ex-employee put a positive spin on their work experience.

    I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

    Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.

    This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

    • harmonic18374 1 day ago
      I would never post any criticism of an employer in public. It can only harm my own career (just as being positive can only help it).

      Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!

      Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try and put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable for employers, so I guess it is working.

      • m00x 1 day ago
        Calvin cofounded Segment, which was acquired for $3.2B. He's not your typical employee.
        • Xenoamorphous 13 hours ago
          So this guy is filthy rich and yet decided to grind for 14 months with a newborn at home?

          I guess that's why he's filthy rich.

          • kridsdale1 1 hour ago
            I had a chance to join OpenAI 13 months ago too.

            But I had a son 14 months ago.

            There was absolutely no way I was going to miss any part of a critical period in my baby's life in order to be in an office at 2am managing a bad deployment.

            Maybe I gave up my chance at PPU or RSU riches. But I know I chose a different kind of wealth that can never be replaced.

            • sillysaurusx 55 minutes ago
              Wow, ditto! I thought I was the only one who took an extended leave to watch their baby grow up. Totally worth it, and it was a wonderful experience being able to focus 100% on her.
          • teiferer 13 hours ago
            Way to go, leaving the boring chores of the first months to the partner and joining the fun when the little one starts to be more fun after a year. With all that cash, I'm sure they could buy a bunch of help for the partner too.
            • Xenoamorphous 13 hours ago
              I don't know, when I became a parent I was in for the full ride, not to have someone else raising her. Yes, raising includes changing diapers and all that.
              • pegasus 2 hours ago
                You make it sound like your choice is somehow the righteous one. I'm not convinced. What's wrong with hiring help, as long as it's well selected? And anyway, usually the help would take care of various errands to free up mom so she can focus on her baby. But maybe they have happily involved grandparents. Maybe he was working part-time. Or maybe there's some other factor we're completely missing right now.
                • Xenoamorphous 1 hour ago
                  So you sincerely think it’s ok that everybody takes care of the kid but the father because he’s rich and can afford multiple nannies? There’s not much context to miss when TFA has this:

                  > The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.

                  • pegasus 53 minutes ago
                    Does a household necessarily need multiple nannies to raise a baby? Grandparents might be willing to help and if there's some house help as well, no nannies might be needed at all, as long as the wife is happy with the arrangement, which I don't find impossible to entertain. Yeah, wealth allows for more freedom of choice, that's always been the case, but this type of arrangement is not unheard of across social classes.
            • motbus3 13 hours ago
              There are certain experiences in life that one needs to go through to stay grounded in what really matters.
              • ToucanLoucan 1 hour ago
                The people who will disagree with this statement would say, full-throated, that what really mattered was shipping on time.

                Couldn't be me. I do my work, then clock the fuck off, and I don't even have kids. I wasn't put upon this earth to write code or solve bugs, I just do that for the cash.

            • foobarian 4 hours ago
              Yeah, or, let the partner have the easy period before they are mobile, and when they sleep half the day, and then join the fun when they can walk off into the craft supplies/pantry where sugar/flour/etc. are stored/the workshop with the power tools etc., and when they drop the naptime and instead start waking at 5am and asking you to play Roblox with them.

              Either option is priceless :-)

              • kridsdale1 1 hour ago
                I just went through this period.

                I would not describe it as easy.

            • jajko 12 hours ago
              There is some parenting, then there is good parenting. Most people don't have this option due to finances, but those that do and still avoid it, picking up just the easy and nice parts - I don't have much sympathy or respect for them.

              Then later they even have the balls to complain how kids these days are unruly, never acknowledging massive gaps in their own care.

              Plus it certainly helps the kid with bonding, emotional stability and keeps the parent more in touch emotionally with their own kid(s).

              • ethbr1 9 hours ago
                > Then later they even have the balls to complain how kids these days are unruly, never acknowledging massive gaps in their own care.

                My favorite is ‘I can’t understand why my kid didn’t turn into a responsible adult!’

                Cue a look back at what opportunities the parent gave them to learn and practice those skills over the last 20 years.

            • 47282847 10 hours ago
              You do know that early bonding experiences of newborns are crucial for their lifelong development? It reads like satire, or, if serious, plain child maltreatment.
              • xandrius 9 hours ago
                Pushing it a bit there, aren't we?
          • pegasus 2 hours ago
            It's unlikely he sees or even perceives what he's doing as a grind, but rather as something akin to an exciting and engrossing chase or puzzle. If my mental model of this kind of Silicon Valley type is correct, neither is he likely to be in it for the money, at least not at the narrative-self level. He most likely was "feelin' the AGI", in Ilya Sutskever's immortal words. I.e. feeling like this might be a once-in-a-million-years opportunity to birth a new species, if not a deity even.
          • mistrial9 5 hours ago
            Lots of wealthy families have dysfunctional internal emotional patterns. A quick stat: there is more alcoholism among the wealthiest 1% than among the general population across the USA.
            • meta_ai_x 4 hours ago
              Wow! Wanting to work hard at building cool things == dysfunctional internal emotional pattern

              Sums up western workforce attitude and why immigrants continue to crush them

        • ls-a 19 hours ago
          Which is a YC startup. If you know anything about YC, it's that the network of founders supports each other no matter what.
          • blitzar 15 hours ago
            > no matter what

            except if you publicly speak of their leaders in less than glowing terms

            • pyman 11 hours ago
              Some books do a good job of documenting the power struggles that happen behind closed doors, big egos backed by millions clashing over ideas and control.

              Not gonna lie, the entire article reads more like a puff piece than an honest reflection. Feels like something went down on Slack, some doors got slammed, and this article is just trying to keep them unlocked. Because no matter how rich you are in the Valley, if you're not on good terms with Sam, a lot of doors will close. He's the prodigy son of the Valley, adopted by Bill Gates and Peter Thiel, and secretly admired by Elon Musk. With Paul Graham's help, he spent 10 years building an army of followers by mentoring them and giving them money. Most of them are now millionaires with influence. And now, even the most powerful people in tech and politics need him. Jensen Huang needs his models to sell servers. Trump needs his expertise to upgrade defence systems. I saw him shaking hands with an Arab sheikh the other day. The kind of handshake that says: with your money and my ambition, we can rule the world.

              • N_Lens 7 hours ago
                Why that's exactly what we desperately need - more "rule the world" egos!
        • eddythompson80 20 hours ago
          That's even more of a reason not to bad-mouth other billionaires/billion-dollar companies. Billionaires and billion-dollar companies work together all the time. It's not a massive pool. There is a reason beef between companies and top-level execs and billionaires is all rumors and tea-talk until a lawsuit drops out of nowhere.

          You think every billionaire is gonna be unhinged like Musk calling the president a pedo on twitter?

          • jajko 11 hours ago
            Hebephile or ephebophile rather than pedo, to be precise. And we all saw how great a friend he was with Epstein for decades: a frequent visitor to his parties, dancing together and so on. Not really a shocking statement, whether true or not.
        • ujkiolp 5 hours ago
          sounds exactly like a “typical employee”
        • harmonic18374 1 day ago
          He is still manipulable and driven by incentives like anyone else.
          • m00x 23 hours ago
            What incentives? It's not a very intellectual opinion to give wild hypotheticals with nothing to go on other than "it's possible".
            • harmonic18374 22 hours ago
              I am not trying to advance wild hypotheticals, but something about his behavior does not quite feel right to me. Someone who has enough money for multiple lifetimes, working like he's possessed to launch a product minimally different from those at dozens of other companies, leaving his wife with all the childcare, then leaving after 14 months and insisting he was not burnt out, but without a clear next step, not even "I want to enjoy raising my child".

              His experience at OpenAI feels overly positive and saccharine, with a few shockingly naive comments that others have noted. I think there is obvious incentive. One possibility is that he is burnt out but does not want to admit it. Another is that he is looking to the future: keeping options open for funding and connections if (when) he chooses to found again. He might be lonely and just want others in his life. Or want to feel like he's working on something that "matters" in some way that his other company didn't.

              I don't know at all what he's actually thinking. But the idea that he is resistant to incentives just because he has had a successful exit seems untrue. I know people who are as rich as he is, and they are not much different than me.

              • m00x 21 hours ago
                Calvin just worked like this when I was at Segment. He picked what he worked on and worked really intensely at it. People most often burn out because of the lack of agency, not hours worked.

                Also, keep in mind that people aren't the same. What seems hard to you might be easy to others, vice versa.

                • dr_dshiv 21 hours ago
                  > People most often burn out because of the lack of agency, not hours worked.
              • ashdksnndck 16 hours ago
                Why did Michael Jordan retire 3 times? Sure, you could probably write a book about it, but you would want to get to know the guy first.
                • xdavidliu 7 hours ago
                  First time in '93 because of burnout from the three-peat, and allegedly a gambling problem. Second because of the lockout and Krause pushing Phil out. Third because he was too old.
              • pyman 21 hours ago
                Not sure if it's genuine insight or just a well-written bit of thoughtful PR.

                I don't know if this happens to anyone else, but the more I read about OpenAI, the more I like Meta. And I deleted Facebook years ago.

              • gneray 20 hours ago
                i know calvin, and he's one of the most authentic people i've worked with in tech. this could not be more off the mark
                • lucianbr 13 hours ago
                  This reflection seems very unlikely to be authentic because it is full of superlatives and not a single bad (or even just not-great) thing is mentioned. Real organizations made of real humans simply are not like this.

                  The fact that several commenters know the author personally goes some way to explain why the entire comment section seems to have missed the utterly unbalanced nature of the article.

                  • m00x 2 hours ago
                    Some teams are bad, some teams are good.

                    I've always heard horror stories about Amazon, but when I speak to most people at, or from Amazon, they have great things to say. Some people are just optimists, too.

                  • harmonic18374 9 hours ago
                    People come out to defend their bosses a lot on this site, convincing themselves they know the powerful people best, that they're "friends". How can someone be so confident that a founder is authentic, when a large part of the founder's job is to make you believe so (regardless of whether they are), and the employee's own self-image pushes them to believe it too?
      • 44520297 6 hours ago
        >This guy even says they scour social media!

        Every, and I mean every, technology company scours social media. Amazon has a team that monitors social media posts to make sure employees, their spouses, their friends don’t leak info, for example.

    • rrrrrrrrrrrryan 1 day ago
      > There's no Bond villain at the helm. It's good people rationalizing things.

      I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.

      • stickfigure 18 hours ago
        Interesting. A year ago I joined one of the larger online sportsbook/casinos. In terms of talent, employees are all over the map (both good and bad). But I have yet to meet a villain. Everyone here is doing the best they can.
        • shermantanktop 18 hours ago
          Every villain wants to be the best villain they can be!

          More seriously, everyone is the hero of their own story, no matter how obvious their failings are from the outside.

          I’ve been burned by empathetically adopting someone’s worldview and only realizing later how messed up and self-serving it was.

        • yoyohello13 17 hours ago
          I’m sure people working for cigarette companies are doing the best they can too. People can be good individuals and also work toward evil ends.
          • stickfigure 16 hours ago
            I am of the opinion that the greatest evils come from the most self-righteous.
            • TeMPOraL 13 hours ago
              That may very well be the case. But I think this is a distinct category of evil; the second one, in which you'll find most of the cigarette and gambling businesses, is that of evil caused by indifference.

              "Yes, I agree there are some downsides to our product and there are some people suffering because of that - but no one is forcing them to buy from us, they're people with agency and free will, they can act as adults and choose not to buy. Now what is this talk about feedback loops and systemic effects? It's confusing, go away."

              This category is where you'll also find most of the advertising business.

              The self-righteous may be the source of the greatest evil by magnitude, but day-to-day, the indifferents make it up in volume.

              • rrrrrrrrrrrryan 3 hours ago
                It's not indifference, it's much more comically evil. Like, they're using software to identify gambling addicts on fixed incomes, to figure out how big retirees' social security checks are, and to ensure they lose the entire thing at the casino each week. They bonus out their marketing team for doing this successfully. They're using software to make sure that when a casino host's patron runs out of money and kills themselves, the casino host is not penalized but rewarded for a job well done.

                At 8am every morning, the executives walk across the casino floor on their way to the board room, past the depressed people who have been there gambling by themselves the entire night, seeing their faces. Then they go into a boardroom to strategize ways to get those people to gamble even harder. They brag about it. It's absolute pure villainy.

                • stickfigure 1 hour ago
                  I wouldn't know if this is a fair characterization of other companies, but it certainly isn't anything like what I observe here. If you can't name names, I'm going to guess you just made this up.
              • stickfigure 1 hour ago
                Some people like to smoke. I find it disgusting myself, but as long as people want the experience I see no reason why someone else shouldn't be allowed to sell it to them. See also alcohol, drugs, porn, motorcycles, experimental aircraft, whatever.

                We can have all sorts of interesting discussions about how to balance human independence with shared social costs, but it's not inherently "evil" to give consenting adults products and experiences they desire.

                IMO, much more evil is caused by busybodies trying to tell other people what's good for them. See: The Drug War.

            • ben_w 10 hours ago
              I disagree. The death toll from smoking is approximately the same as the total death toll of the Holocaust, but smoking racks it up every nine months. And 1.3 million/year of those deaths are non-smokers who die because they are exposed to second-hand smoke: https://ourworldindata.org/smoking

              Even when the self-righteous are at their most dangerous, they have to be self-righteous and in power, e.g.:

                Caedite eos. Novit enim Dominus qui sunt eius.
                ("Kill them. For the Lord knows those that are His.")
              - https://en.wikipedia.org/wiki/Caedite_eos._Novit_enim_Dominu....

              or:

                រក្សាន្នកគ្មានប្រយោជន៍ខាត។ បំផ្លាញអ្នកគ្មានការខាតបង់
                ("To keep you is no benefit. To destroy you is no loss.")
              - https://km.wikipedia.org/wiki/ប្រជាជនថ្មី
            • cootsnuck 14 hours ago
              I think y'all are agreeing.
              • taneq 2 hours ago
                Nah this is lawful evil (I Am Following The Rules Therefore I'm Doing The Right Thing) vs. neutral evil (I Just Work Here).
        • sanitycheck 10 hours ago
          There are jobs in which one may find oneself where doing them poorly is better for the world than doing them well.

          I think you and your colleagues should sit back and take it easy, maybe have a few beers every lunchtime, install some video games on the company PCs, anything you can get away with. Don't get fired (because then you'll be replaced by keen new hires), just do the minimum acceptable and feel good about that karma you're accumulating as a brake on evil.

      • lucianbr 13 hours ago
        > We are all very good and kind and not at all evil, trust us if we do say so ourselves

        Do these people have even minimal self-awareness?

      • darkmarmot 18 hours ago
        VGT?
    • Bratmon 1 day ago
      > It is fairly rare to see an ex-employee put a positive spin on their work experience

      Much more common for OpenAI, because you lose all your vested equity if you talk negatively about OpenAI after leaving.

      • rvz 1 day ago
        Absolutely correct.

        There is a reason why there was cult-like behaviour on X amongst the employees in support of bringing back Sam as CEO when he was kicked out by the OpenAI board of directors at the time.

        "OpenAI is nothing without it's people"

        All of "AGI" (which actually was the lamborghinis, penthouses, villas and mansions for the employees) was all on the line and on hold if that equity went to 0 or would be denied selling their equity if they openly criticized OpenAI after they left.

        • tptacek 1 day ago
          Yes, and the reason for that is that employees at OpenAI believed (reasonably) that they were cruising for Google-scale windfall payouts from their equity over a relatively short time horizon, and that Altman and Brockman leaving OpenAI and landing at a well-funded competitor, coupled with OpenAI corporate management that publicly opposed commercialization of their technology, would torpedo those payouts.

          I'd have sounded cult-like too under those conditions (but I also don't believe AGI is a thing, so would not have a countervailing cult belief system to weigh against that behavior).

          • kaashif 23 hours ago
            > I also don't believe AGI is a thing

            Why not? I don't think we're anywhere close, but there are no physical limitations I can see that prevent AGI.

            It's not impossible in the same way our current understanding indicates FTL travel or time travel is.

            • Timwi 14 hours ago
              I also believe that AGI is not a thing, but for different reasons. I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not. People also don't seem interested in justifying why humans would be GI but other animals with 99% of the same DNA aren't.

              My main reason for thinking general intelligence is not a thing is similar to how Turing completeness is not a thing. You can conceptualize a Turing machine, but you can't actually build one for real. I think actual general intelligence would require an infinite brain.

              • Mordisquitos 11 hours ago
                > I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not.

                That's actually a great point which I'd never heard before. I agree that it's very likely that us humans do not really have GI, but rather only the intelligence that evolved stochastically to better favour our existence and reproduction, with all its positive and negative spandrels[0]. We can call that human intelligence (HI).

                However, even if our "general" intelligence is a mirage, surely what most people imagine when they talk about 'AGI' is actually AHI, as in an artificial intelligence that has the same characteristics as human intelligence that in their own hubris they believe is general. Or are you making a harder argument, that human intelligence may not actually have the ability to create AHI?

                [0] https://en.wikipedia.org/wiki/Spandrel_(biology)

            • smikhanov 21 hours ago
              If we were to believe the embodiment theory of intelligence (it's far from the only one out there, but very influential and convincing), building an AGI is a problem equivalent to building an artificial human. Not a puppet, not a mock, not "sorta human", but a real, fully embodied human, down to the gut bacterial biome, because according to the embodiment theory, this affects intelligence too.

              In this formulation, it’s pretty much as impossible as time travel, really.

              • TheDong 17 hours ago
                Sure, if we redefine "AGI" to mean "literally cloning a human biologically", then AGI suddenly is a very different problem (mainly one of ethics, since creating human clones, educating, brainwashing, and forcing them to respond to chat messages ala chatGPT has a couple ethical issues along the way).

                I don't see how claiming that intelligence is multi-faceted makes AGI (the A is 'artificial' remember) impossible.

                Even if _human_ intelligence requires eating yogurt for your gut biome, that doesn't preclude an artificial copy that's good enough.

                Like, a dog is very intelligent, a dog can fetch and shake hands because of years of breeding, training, and maybe from having a certain gut biome. Boston Dynamics did not have to understand a single cell of the dog's stomach lining in order to make dog-robots perfectly capable of fetching and shaking hands.

                I get that you're saying "yes, we've fully mapped the neurons of a fruit fly and can accurately simulate and predict how a fruit fly's brain's neurons will activate, and can create statistical analysis of fruit-fly behavior that lets us accurately predict their action for much cheaper even without the brain scan, but human brains are unique in a way where it is impossible to make any sort of simulation or prediction or facsimile that is 'good enough' because you also need to first take some bacteria from one of peter thiel's blood boys and shove it in the computer, and if we don't then we can't even begin to make a facsimile of intelligence". I just don't buy it.

                • lqstuart 15 hours ago
                  “AGI” isn’t a thing and never will be. It fails even really basic scrutiny. The objective function of a human being is to keep its biological body alive and reproduce. There is no such similar objective on which a ML algorithm can be trained. It’s frankly a stupid idea propagated by people with no meaningful connection to the field and no idea what the fuck they’re talking about.
                  • rvz 8 hours ago
                    We will look back on this, and in a decade's time the early OpenAI employees (who sold) will speak out in documentaries and movies and admit that "AGI" was a period of easy dumb money.
      • tedsanders 1 day ago
        OpenAI never enforced this, removed it, and admitted it was a big mistake. I work at OpenAI and I'm disappointed it happened but am glad they fixed it. It's no longer hanging over anyone's head, so it's probably inaccurate to suggest that Calvin's post is positive because he's trying to protect his equity from being taken. (though of course you could argue that everyone is biased to be positive about companies they own equity in, generally)
        • gwern 23 hours ago
          > It's no longer hanging over anyone's head,

          The tender offer limitations still are, last I heard.

          Sure, maybe OA can no longer cancel your vested equity for $0... but how valuable is (non-dividend-paying) equity you can't sell? (How do you even borrow against it, say?)

          • tedsanders 22 hours ago
            Nope, happy to report that was also fixed.

            (It would be a pretty fake solution if equity cancellation was halted, but equity could still be frozen. Cancelled and frozen are de facto identical until the first dividend payment, which could take decades.)

            • gwern 21 hours ago
              So OA PPUs can now be sold and transferred without restriction to arbitrary buyers, outside the tender offer windows?
              • tedsanders 19 hours ago
                No, that's still the same.
          • neonbjb 14 hours ago
            Also work at OpenAI. Every tender offer has made full payouts to previous employees. Sorry to ruin your witch hunt.
      • fragmede 1 day ago
        The "Silenced No More Act" (SB 331), effective January 1, 2022, in California, where OpenAI is based, limits non-disparagement clauses and retribution by employers, likely making that illegal in California, but I am not a lawyer.
        • swat535 1 day ago
          Even if it's illegal, you'll have to fight them in court.

          OpenAI will certainly punish you for this and most likely make an example out of you, regardless of the outcome.

          The goal is corporate punishment, not the rule of law.

    • torginus 1 day ago
      Here's what I think - while Altman was busy trying to convince the public that AGI was coming in the next two weeks, with vague tales that were equally ominous and utopian, he (and his fellow leaders) were extremely busy trying hard to turn OpenAI into a product company with some killer offerings, and from the article, it seems they were rather good and successful at that.

      Considering the high stakes, money, and undoubtedly the egos involved, the writer might have acquired a few bruises along the way, or might have lost some political infights (remember how they mentioned they built multiple Codex prototypes; it must've sucked to see someone else's version chosen instead of your own).

      Another possible explanation is that the writer just had enough - enough money to last a lifetime, just started a family, made his mark on the world, and was no longer compelled (or able) to keep up with methed-up fresh college grads.

      • matco11 1 day ago
        > remember how they mentioned they built multiple Codex prototypes, it must've sucked to see some other people's version chosen instead of your own

        Well it depends on people’s mindset. It’s like doing a hackathon and not winning. Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.

        …but of course not everybody likes to go to hackathons

        • pyman 22 hours ago
          > OpenAI is perhaps the most frighteningly ambitious org I've ever seen.

          That kind of ambition feels like the result of Bill Gates pushing Altman to the limit and Altman rising to the challenge. The famous "Gates demo" during the GPT‑4 days comes to mind.

          Having said that, the entire article reads more like a puff piece than an honest reflection.

    • sensanaty 12 hours ago
      > There's no Bond villain at the helm

      We're talking about Sam Altman here, right, the dude behind Worldcoin? A literal Bond-villainesque biological data harvesting scheme?

      • ben_w 11 hours ago
        It might be one of the cover stories for a Bond villain, but they have lots of mundane cover stories. Which isn't to say you're wrong; I've learned not to trust my gut about the category (rich business leaders) to which he belongs.

        I'd be more worried about the guy who tweeted “If this works, I’m treating myself to a volcano lair. It’s time.” and more recently wore a custom T-shirt that implies he's like Vito Corleone.

        • hnbad 9 hours ago
          > I'd be more worried about the guy

          Or you could realize what those guys all have in common and be worried about the systems that enable them because the problem isn't a guy but a system enabling those guys to become everyone's problem.

          I don't mind "Vito Corleone" joking about a volcano lair. I mind him interfering in my country's elections and politics. I shouldn't have to worry about the antics of a guy building rockets that explode and cars that can chop off your fingers because I live in a country that can protect me from those things becoming my problem, but because we have the same underlying systems I do have to worry about him because his political power is easily transferrable to any other country including mine.

          This would still be true if it were a different guy. Heck, Thiel is landing contracts with his surveillance tech in my country despite the foreign politics of the US making it an obvious national and economic security risk and don't get me started on Bezos - there's plenty of "guys" already.

          • ben_w 47 minutes ago
            Sure, but "the systems" were built by such people and are mere evolutions of the "I have a bigger stick" power politics from before the industrial revolution.

            Not that you're wrong about the systems, just that if it was as easy as changing these systems because we can tell they're bad and allow corruption, the Enlightenment wouldn't have managed to mess up with both Smith and Marx.

    • teiferer 13 hours ago
      There is lots of rationalizing going on in his article.

      > I returned early from my paternity leave to help participate in the Codex launch.

      10 years from now, the significance of having participated in that launch will be ridiculously small (unless you tell yourself that it was a pivotal moment of your life, even if it objectively wasn't), whereas those first weeks with your newborn will never come back. Kudos to your partner though.

      • baggachipz 8 hours ago
        The very fact that he did this exemplifies everything that is wrong about the tech industry and our current society. He's praising himself for this instead of showing remorse for his failure as a parent.
      • usaar333 7 hours ago
        Odd take. OpenAI gives 5 months of paternity leave and the author is independently wealthy. What difference does it make between spending more time with a 4-month-old vs a 4-year-old? Or is your prescription that people should just retire once they have children?
    • Aurornis 20 hours ago
      > It is fairly rare to see an ex-employee put a positive spin on their work experience.

      The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.

      I was at a company that turned into the most toxic place I had ever worked due to a CEO who decided to randomly get involved with projects, yell at people, and even fire some people on the spot.

      Yet a lot of people wrote glowing stories about their time at the company on blogs or LinkedIn because it was beneficial for their future job search.

      > It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

      For the posts that make HN I rarely see it that way. The recent trend is for passionate employees who really wanted to make a company work to lament how sad it was that the company or department was failing.

      • eddythompson80 20 hours ago
        > The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.

        Yeah I had to re-read the sentence.

        The positive "Farewell" post is indeed the norm. Especially so from well known, top level people in a company.

    • Wilder7977 15 hours ago
      Allow me to propose a different rationalization: "yes, I know X might damage some people/society, but it was not me who decided, I get lots of money to do it, and someone else would do it if not me."

      I don't think people who work on products that spy on people, create addiction or worse are as naïve as you portrayed them.

    • Timwi 14 hours ago
      > everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions!

      The operative word is “trying”. You can “try” to do the right thing but find yourself restricted by various constraints. If an employee actually did the right thing (e.g. publish the weights of all their models, or shed light on how they were trained and on what), they get fired. If the CEO or similarly high-ranking exec actually did the right thing, the company would lose out on profits. So, rationalization is all they can do. “I'm trying to do the right thing, but.” “People don't see the big picture because they're not CEOs and don't understand the constraints.”

    • Spooky23 22 hours ago
      I’m not saying this about OpenAI, because I just don’t know. But Bond villains exist.

      Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.

    • ben_w 1 day ago
      > It is fairly rare to see an ex-employee put a positive spin on their work experience.

      FWIW, I have positive experiences about many of my former employers. Not all of them, but many of them.

      • yen223 21 hours ago
        Same here. If I wrote an honest piece about my last employer, it would sound very similar in tone to what was written in this article
    • saghm 15 hours ago
      We already have bad guys doing X right now (literally, not the placeholder variable)
    • bigiain 22 hours ago
      > It is fairly rare to see an ex-employee put a positive spin on their work experience.

      Sure, but this bit really makes me wonder if I'd like to see what the writer is prepared to do to other people to get to his payday:

      "Nabeel Quereshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI"

    • TeMPOraL 13 hours ago
      I agree with your points here, but I feel the need to address the final bit. This is not aimed personally at you, but at the pattern you described - specifically, at how it's all too often abused:

      > Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

      Those are the easy cases, and correspondingly, you don't see much of those - or at least few are paying attention to companies talking like that. This is distinct from saying "X is going to directly benefit the society, and we're merely charging for it as fair compensation of our efforts, much like a baker charges you for the bread" or variants of it.

      This is much closer to what most tech companies try to argue, and the distinction seems to escape a lot of otherwise seemingly sharp people. In threads like this, I surprisingly often end up defending tech companies against such strawmen - because come on, if we want positive change, then making up a simpler but baseless problem, calling it out, and declaring victory, isn't helping to improve anything (but it sure does drive engagement on-line, making advertisers happy; a big part of why press does this too on a routine basis).

      And yes, this applies to this specific case of OpenAI as well. They're not claiming "LLMs are going to indirectly benefit the society because we're going to get rich off them, and then use that money to fund lots of nice things". They're just saying, "here, look at ChatGPT, we believe you'll find it useful, and we want to keep doing R&D in this direction, because we think it'll directly benefit society". They may be wrong about it, or they may even knowingly lie about those benefits - but this is not trickle-down economics v2.0, SaaS edition.

    • curious_cat_163 23 hours ago
      > It is fairly rare to see an ex-employee put a positive spin on their work experience.

      I liked my jobs and bosses!

    • iLoveOncall 15 hours ago
      Well, as a reminder, OpenAI has a non-disparagement clause in their contracts, so the only thing you'll ever see from former employees is positive feedback.
    • tptacek 1 day ago
      Most posts of the form "Reflections on [Former Employer]" on HN are positive.
    • vlovich123 20 hours ago
      > That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things

      I mean, that's a leap. There could be a Bond villain who sets up incentives such that the people who rationalize the way the villain wants are the ones who get promoted / have their voices amplified. Just because individual workers generally seem like they're trying to do the best thing doesn't mean the organization isn't set up specifically and intentionally to make certain kinds of "shady" decisions.

    • newswasboring 9 hours ago
      > It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

      This is a great insight. But if we think a bit deeper about why that happens, I land on this: there is nobody forcing anyone to do the right thing. Our governments and laws are geared more towards preventing people from doing the wrong thing, which of course can only be identified once someone has done the wrong thing and we can see the consequences and prove that it was indeed the wrong thing. Sometimes we fail to even do that.

    • energy123 19 hours ago
      > It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

      It's also performance art to acquire attention.
  • humbleferret 1 day ago
    What a great post.

    Some points that stood out to me:

    - Progress is iterative and driven by a seemingly bottom-up, meritocratic approach, not a top-down master plan. Essentially, good ideas can come from anywhere, and leaders are promoted based on execution and quality of ideas, not political skill.

    - People seem empowered to build things without asking permission, which leads to multiple parallel projects, with the promising ones gaining resources.

    - People there have good intentions. Despite public criticism, they are genuinely trying to do the right thing and navigate the immense responsibility they hold.

    - Product is deeply influenced by public sentiment, or more bluntly, the company "runs on twitter vibes."

    - The sheer cost of GPUs changes everything. It is the single factor shaping financial and engineering priorities. The expense for computing power is so immense that it makes almost every other infrastructure cost a "rounding error."

    - I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA), with each organisation's unique culture shaping its approach to AGI.

    • mikae1 23 hours ago
      > I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA)

      Wouldn't want to forget Meta which also has consumer product DNA. They literally championed the act of making the consumer the product.

      • ceroxylon 7 hours ago
        Jokes aside, it was interesting to me that the 'three horse race' excluded a company that is announcing 5GW data centers the size of Manhattan[0].

        [0] https://techcrunch.com/2025/07/14/mark-zuckerberg-says-meta-...

      • spoaceman7777 20 hours ago
        And don't forget xAI, which has MechaHitler in its product DNA
      • smath 23 hours ago
        lol, I almost missed the sarcasm there :)
      • pyman 21 hours ago
        "Hey, Twitter vibes are a metric, so make sure to mention the company on Twitter if you want to be heard."

        Twitter is a one-way communication tool. I doubt they're using it to create a feedback loop with users, maybe just to analyse their sentiment after a release?

        The entire article reads more like a puff piece than an honest reflection. Those of us who live outside the US are more sceptical, especially after everything revealed about OpenAI in the book Empire of AI.

  • lz400 21 hours ago
    Engineers thinking they're building god is such a good marketing strategy. I can't overstate it. It's even difficult to be rational about it. I don't actually believe it's true; I think it's pure hype and LLMs won't even approximate AGI. But this idea is sort of half-immune to criticism or skepticism: you can always respond with "but what if it's true?". The stakes are so high that the potentially infinite payoff snowballs over any probabilities. 0.00001% multiplied by infinity is an infinite EV, so you have to treat it like that. Best marketing, it writes itself.
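
    A minimal sketch of that wager arithmetic (the probability is made up purely for illustration):

        # Pascal's-wager-style EV: any nonzero probability times an
        # "infinite" payoff swamps every finite consideration.
        p_agi = 1e-7             # invented probability that the bet pays off
        payoff = float("inf")    # the claimed infinite upside
        print(p_agi * payoff)    # inf - the estimate of p barely matters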
    • maest 18 hours ago
      Similar to Pascal's wager, which pretty much amounts to "yeah, God is probably not real, _but what if it is_? The utility of getting into heaven is infinite (and hell is infinitely negative), so any non-zero probability that God is real should make you be religious, just in case."

      https://en.wikipedia.org/wiki/Pascal%27s_wager#Analysis_with...

      • ievans 7 hours ago
        This is explicitly not the conclusion Pascal drew with the wager, as described in the next section of the Wikipedia article: "Pascal's intent was not to provide an argument to convince atheists to believe, but (a) to show the fallacy of attempting to use logical reasoning to prove or disprove God..."
      • billmcneale 15 hours ago
        I am convinced!

        Which god should I believe in, though? There are so many.

        And what if I pick the wrong god?

      • adamgordonbell 8 hours ago
        See also Pascal's mugging, from Eliezer Yudkowsky. Some would say AI Safety research is a form of Pascal's mugging.

        https://en.wikipedia.org/wiki/Pascal%27s_mugging

    • tim333 3 hours ago
      I know you're not being serious, but building AGI, as in something that thinks like a human (proven possible by the millions of humans wandering all over the place), is very different from "building god".
      • tartoran 42 minutes ago
        Except that humans cannot read millions of books (if not all books ever published) and keep track of massive amounts of information. AGI presupposes some kind of superhuman capability that no one human has. Whether that's ever accomplished remains to be seen; I personally am a bit skeptical that it will happen in our lifetime but think it's possible in the future.
      • lz400 19 minutes ago
        Not sure about that one. I do agree with the AI bros that, _if_ we build AGI, ASI looks inevitable shortly after, at least a "soft ASI". Because something with the agency of a human but all the knowledge of the world at its fingertips, the ability to replicate itself, to think orders of magnitude faster and in parallel on many things at the same time, and to modify itself... really looks like it won't stay comparable to a baseline human for long.
    • uh_uh 13 hours ago
      > I don't actually believe it's true, I think it's pure hype and LLMs won't even approximate AGI.

      Not sure how you can say this so confidently. Many would argue they're already pretty close, at least on a short time horizon.

      • J_Shelby_J 1 hour ago
        Many would argue that you should give them a billion dollars in funding, and that's what they're doing when they say AGI is close.

        There is a decade + worth of implementation details and new techniques to invent before we have something functionally equivalent to Jarvis.

      • lz400 15 minutes ago
        I mean, they're wrong? LLMs don't have agency, don't learn, don't do anything except react to prompts really.
    • pyman 21 hours ago
      100%
    • ivape 21 hours ago
      "but what if it's true?"

      There was nothing hypothesized about next-token prediction and emergent properties (they didn't know for sure that scale would allow it to generalize). "What if it's true" is part of the LLM story; there is a mystical element here.

      • echoangle 21 hours ago
        > There was nothing hypothesized that next-token prediction and scale could show emergent properties.

        Nobody ever hypothesized it before it happened? Hard to believe.

        • ivape 20 hours ago
          Someone else can confirm, but from my understanding, no, they did not know sentiment analysis, reasoning, few-shot learning, chain of thought, etc. would emerge at scale. Sentiment analysis was one of the first things they noticed a scaled-up model could generalize. Remember, all they were trying to do was get better at next-token prediction; there was no concrete plan to achieve "instruction following", for example. We can never truly say going up another order of magnitude on the number of params won't achieve something (it could, for reasons unknown, just like before).

          It is somewhat parallel to the story of Columbus looking for India but ending up in America.

          • ZYbCRq22HbJ2y7 18 hours ago
            > sentiment analysis, reasoning, few shot learning, chain of thought, etc would emerge at scale

            Some would say it still hasn't (to an agreeable level).

          • didibus 17 hours ago
            Didn't it just get better at next-token prediction? I don't think anything emerged in the model itself; what was surprising is how good next-token prediction itself is at predicting all kinds of other things, no?
          • lossolo 18 hours ago
            The Schaeffer et al. "Mirage" paper showed that many claimed emergent abilities disappear when you use different metrics: what looked like sudden capability jumps were often artifacts of using harsh/discontinuous measurements rather than smooth ones.

            But I'd go further: even abilities that do appear "emergent" often aren't that mysterious when you consider the training data. Take instruction following - it seems magical that models can suddenly follow instructions they weren't explicitly trained for, but modern LLMs are trained on massive instruction-following datasets (RLHF, constitutional AI, etc.). The model is literally predicting what it was trained on. Same with chain-of-thought reasoning - these models have seen millions of examples of step-by-step reasoning in their training data.

            The real question isn't whether these abilities are "emergent" but whether we're measuring the right things and being honest about what our training data contains. A lot of seemingly surprising capabilities become much less surprising when you audit what was actually in the training corpus.
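
            A toy illustration of that metric effect (the numbers are invented for the example): if per-token accuracy improves smoothly with scale, an all-or-nothing exact-match metric over a 20-token answer still looks like a sudden jump.

                # Toy model: smooth per-token gains look "emergent" under a
                # harsh all-or-nothing metric (the Mirage paper's core point).
                for p in (0.80, 0.90, 0.95, 0.99):   # smoothly improving per-token accuracy
                    exact = p ** 20                  # all 20 tokens must be right at once
                    print(f"per-token {p:.2f} -> exact-match {exact:.3f}")
                # 0.80 -> 0.012, 0.90 -> 0.122, 0.95 -> 0.358, 0.99 -> 0.818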

  • bhl 1 day ago
    > The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.

    There's so much compression / time-dilation in the industry: large projects are pushed out and released in weeks; careers are made in months.

    Worried about how sustainable this is for its people, given the risk of burnout.

    • alwa 1 day ago
      If anyone tried to demand that I work that way, I’d say absolutely not.

      But when I sink my teeth into something interesting and important (to me) for a few weeks’ or months’ nonstop sprint, I’d say no to anyone trying to rein me in, too!

      Speaking only for myself, I can recognize those kinds of projects as they first start to make my mind twitch. I know ahead of time that I'll have no gas left in the tank by the end, and I plan accordingly.

      Luckily I’ve found a community who relate to the world and each other that way too. Often those projects aren’t materially rewarding, but the few that are (combined with very modest material needs) sustain the others.

      • ishita159 1 day ago
        I think senior folks at OpenAI realized this is not sustainable and hence took the "wellness week".
      • bradyriddle 1 day ago
        I'd be curious to know about this community. Is this a formal group or just the people that you've collected throughout your life?
        • alwa 1 day ago
          The latter. I mean, I feel like a disproportionate number of folks who hang around here have that kind of disposition.

          That just turns out to be the kind of person who likes to be around me, and I around them. It’s something I wish I had been more deliberate about cultivating earlier in my life, but not the sort of thing I regret.

          In my case that’s a lot of artists/writers/hackers, a fair number of clergy, and people working in service to others. People quietly doing cool stuff in boring or difficult places… people whose all-out sprints result in ambiguity or failure at least as often as they do success. Very few rich people, very few who seek recognition.

          The flip side is that neither I nor my social circles are all that good at consistency—but we all kind of expect and tolerate that about each other. And there’s lots of “normal” stuff I’m not part of, which I probably could have been if I had tried. I don’t know what that means to the business-minded people around here, but I imagine it includes things like corporate and nonprofit boards, attending sports events in stadia, whatever golf people do, retail politics, Society Clubs For Respectable People, “Summering,” owning rich people stuff like a house or a car—which is fine with me!

          More than enough is too much :)

      • ZYbCRq22HbJ2y7 18 hours ago
        I think any reasonable manager would appreciate that sort of interest in a project and would support it, not demand it.
        • alwa 17 hours ago
          And, at least if I'm your manager, zealously defend the sanctity of your recovery time afterward.
          • fsckboy 5 hours ago
            I don't need recovery time afterward (apart from sleep), but when I'm surrounded by people who do, I want some equivalent compensation, not because I feel I need it, but because I feel they are slackers (not saying they are objectively slackers, just saying that's how it feels to me). Many compromises need to be made when sprinting all out, and in the aftermath what is restorative to me is cleaning up the technical debt while it's fresh in my mind, and I can't understand that other people don't want to do the same thing.
    • ml-anon 21 hours ago
      This guy who is already independently wealthy chose working 16-17h 7 days a week instead of raising his newborn child and thanks his partner for “childcare duties”. Pretty much tells you everything you need to know.
      • yawnr 7 hours ago
        Yeah as someone who has a young child this entire post made me feel like I was taking crazy pills. Working this much with a newborn is toxic behavior and if a company demands it then it is toxic culture. And writing about it as anything but that feels like some combination of Stockholm syndrome, being a workaholic, and marketing spin.

        Being passionate about something and giving yourself to a project can be amazing, but you need to have the bandwidth to do it without the people you care about suffering because of that choice.

        • ml-anon 6 hours ago
          >>Working this much with a newborn is toxic behavior and if a company demands it then it is toxic culture. And writing about it as anything but that feels like some combination of Stockholm syndrome, being a workaholic, and marketing spin.

          i.e. Silicon Valley "grind culture".

    • tptacek 1 day ago
      It's not sustainable, at all, but if it's happening just a couple times throughout your career, it's doable; I know people who went through that process, at that company, and came out of it energized.
    • 6gvONxR4sf7o 1 day ago
      I couldn't imagine asking my partner to pick up that kind of childcare slack. Props to OP's wife for doing so, and I'm glad she got the callout at the end, but god damn.
    • maxnevermind 19 hours ago
      I think Altman said on the Lex Fridman podcast that he works 8 hours a day, the first 4 being the most productive, and that he doesn't believe CEOs claiming they work 16 hours a day. Weird contrast to what's described in the article. This confirms my theory that there are two types of people in startups: founders and everybody else; the former are there to potentially make a lot of money, and the latter are there to learn and leave.
    • datadrivenangel 1 day ago
      The author left after 14 months at OpenAI, so that seems like a burnout duration.
      • pyman 21 hours ago
        It's worse than that. Lots of power struggles and god-like egos. Altman called one of the employees "Einstein" on Twitter; some think they were chosen to transcend humanity, others believe they're at war with China, some want to save the world, others want to see it burn, and some just want their names up there with Gates and Jobs.

        This is what ex-employees said in Empire of AI, and it's the reason Amodei and Kaplan left OpenAI to start Anthropic.

      • fhub 16 hours ago
        He references childcare and paternity leave in the post, and he was a co-founder in a $3B acquisition. To me it seems like a time-of-life/priorities decision, not a straight-up burnout decision.
    • kaashif 23 hours ago
      Working a job like that would literally ruin my life. There's no way I could have time to be a good husband and father under those conditions, some things should not be sacrificed.
    • Rebelgecko 1 day ago
      How did they have any time left to be a parent?
      • ambicapter 1 day ago
        > I returned early from my paternity leave to help participate in the Codex launch.

        Obvious priorities there.

        • harmonic18374 1 day ago
          That part made me do a double take. I hope his child never learns they were being put second.
          • adastra22 15 hours ago
            It's just a google search away.
          • kadushka 20 hours ago
            Many people are bad parents. Many are bad at their jobs. Many are bad at both. At least this guy is good at his job, and can provide very well for his family.
            • adastra22 15 hours ago
              If you think being good at your job is providing for your family, you've been raised with some bad parenting examples.
            • achierius 16 hours ago
              It'll be of little comfort to the kid.
              • ZYbCRq22HbJ2y7 14 hours ago
                It is all relative. A workaholic seems pretty nice compared to growing up with actual, objectively bad parents: workaholics plus addicts, perpetual drunks, gamblers, parents in jail, no-shows for everything you put time into, parents who compete with you as you pick up basic skills, who abuse you for being a kid, etc.

                There are plenty worse than that. The storied dramatic-fiction parent who misses out on a kid's life is much better than what a lot of children have.

                Yet, all kids grow up, and the greatest factor determining their overall well-being through life is socioeconomic status, not how many hours a father was present.

                • dm270 12 hours ago
                  I'm very interested in that topic and haven't made up my mind about what really counts in parenting. Do you have sources for the claim that well-being (asking explicitly about mental well-being, not just material well-being) is more influenced by socioeconomic status than by parental absence?

                  About the guy: I think if it's just a one-time thing it's OK, but the way he presents himself gives reason for doubt.

                  • kadushka 2 hours ago
                    A parent should provide their kids with opportunities to try new things. Sometimes this might require gently making a kid do something at least a few times until it's clear it's not something they are good at or interested in. Also deciding when to try something is important - kids might need to try it at different ages. And of course convincing and reassuring a kid might be necessary to try something they are afraid to do. Until the age of 12 or so, it's important to make it fun, at least initially.

                    It's debatable whether a parent always needs to "lead by example": for example, I've never played hockey, but I introduced my son to it, and he played for a while (until injuries made us reconsider and he stopped). For mental well-being, make sure to not display your worst emotions in front of your kids - they will definitely notice, and will probably carry it for the rest of their lives.

            • davidcbc 3 hours ago
              This is why the children of rich people are famously well adjusted... /s
      • ZYbCRq22HbJ2y7 18 hours ago
        They were showered with assets for being a lucky individual in a capital-driven society; time is interchangeable with wealth, as evidenced throughout history.

        This guy is young. He can experience all that again, if it really was that much of a failure and he really wants to.

        Sure, there are ethical issues here, but really, they can be offset by restitution, let's be honest.

        • adastra22 15 hours ago
          He cannot experience time with his kid again. In any case he's on a fast track to divorce rn.
          • kridsdale1 1 hour ago
            What a fabulous payout she is in for.
      • baggachipz 8 hours ago
        How did they have any time to create the child in the first place?
    • sashank_1509 1 day ago
      My hot take is that burnout doesn't have much to do with raw hours spent working. It has a lot more to do with a sense of momentum and autonomy. You can work extremely hard, 100-hour weeks six months in a row, on the right team and still feel highly energized at the end of it. But if it feels like wading through a swamp, you will burn out very quickly, even if it's just 50 hours a week. I also find ownership has a lot to do with the sense of burnout.
      • matwood 1 day ago
        And if the work you're doing feels meaningful and you're properly compensated. Ask people to work really hard to fill out their 360 reviews and they should rightly laugh at you.
      • ip26 14 hours ago
        At some level of raw hours, your health and personal relationships outside work both begin to wither, because there are only 24 hours in a day. That doesn't always cause burnout, but it does put what you are sacrificing into high contrast.
        • antupis 12 hours ago
          Yup, the yearly average should be around 35-45 hours per week, but sprinting is fine when the opportunity is there.
      • catoc 15 hours ago
        Exactly this. It's not really about hours spent (at least hours aren't a good metric: working less will benefit a burned-out person, but the hours were not the root cause). The problem is lack of autonomy, lack of control over things you care about deeply. If those go out the window, the fire burns out quickly. IMHO when this happens it's usually because a company has become too big, and the people in control lack subject-matter expertise, have lost contact with the people who drive the company, and are instead guided by KPIs and the rules they enforce, grasping for that feeling of being in control.
      • parpfish 1 day ago
        i hope that's not a hot take because it's 100% correct.

        people conflate the terms "burnout" and "overwork" because they seem semantically similar, but they are very different.

        you can fix overwork with a vacation. burnout is a deeper existential wound.

        my worst bout of burnout actually came in a cushy job where i was consistently underworked but felt no autonomy or sense of purpose for why we were doing the things we were doing.

      • petesergeant 13 hours ago
        In 2024 my wife and I did a startup together. We worked almost every hour we were awake, 16-18 hours a day, 7 days a week. We ate, we went for an hour's walk a day, and the rest of the time I was programming. For 9 months. I'd never worked so hard in my life. And not a lick of burnout during that time, not a moment of it, whereas I've been burned out by 6-hour workdays at other organizations. If you're energized by something, I think that protects you from burnout.
      • apwell23 1 day ago
        > You can work extremely hard 100 hour weeks six months in a row, in the right team and still feel highly energized at the end of it.

        Something about youth being wasted on the young.

    • suncemoje 1 day ago
      I’m sure they’ll look back at it and smile, no?
    • ojr 1 day ago
      For the amount of money they're paying, that is relatively easy; normal people are paid way less for harder jobs, for example working in an Amazon warehouse or doing door-to-door sales.
    • laidoffamazon 22 hours ago
      I don't really have an opinion on working that much, but working that much and having to go into the office to spend those long hours sounds like torture.
    • bongripper 1 day ago
      [dead]
    • beebmam 1 day ago
      Those who love the work they do don't burn out, because every moment working on their projects tends to be joyful. I personally hate working with people who hate the work they do, and I look forward to them being burned out.
      • procinct 1 day ago
        Sure, but this schedule is like, maybe 5 hours of sleep per night. Other than an extreme minority of people, there’s no way you can be operating on that for long and doing your best work. A good 8 hours per night will make most people a better engineer and a better person to be around.
      • chrisfosterelli 1 day ago
        "You don't really love what you do unless you're willing to do it 17 hours a day every day" is an interesting take.

        You can love what you do but if you do more of it than is sustainable because of external pressures then you will burn out. Enjoying your work is not a vaccine against burnout. I'd actually argue that people who love what they do are more likely to have trouble finding that balance. The person who hates what they do usually can't be motivated to do more than the minimum required of them.

        • threetonesun 1 day ago
          Weird how we went from, like, the 4-hour workweek and all those charts about how people historically famous in their field spent only a few hours a day on what they were most famous for, to "work 12+ hours a day or you're useless".

          Also this is one of a few examples I've read lately of "oh look at all this hard work I did", ignoring that they had a newborn and someone else actually did all of the hard work.

        • alwa 1 day ago
          I read gp’s formulation differently: “if you’re working 17 hours a day, you’d better stop soon unless you’re doing it for the love of doing it.” In that sense it seems like you and gp might agree that it’s bad for you and for your coworkers if you’re working like that because of external pressures.

          I don’t delight in anybody’s suffering or burnout. But I do feel relief when somebody is suffering from the pace or intensity, and alleviates their suffering by striking a more sustainable balance for them.

          I feel like even people energized by efforts like that pay the piper: after such a period I for one “lay fallow”—tending to extended family and community, doing phone-it-in “day job” stuff, being in nature—for almost as long as the creative binge itself lasted.

          • chrisfosterelli 1 day ago
            I would indeed agree with things as you've stated. I interpreted "the work they do" to mean "their craft" but if it was intended as "their specific working conditions" I can see how it'd read differently.

            I think there are a lot of people that love their craft but are in specific working conditions that lead to burnout, and all I was saying is that I don't think it means they love their craft any less.

    • rvz 1 day ago
      > Worried about how sustainable this is for its people, given the risk of burnout.

      Well, given the amount of money OpenAI pays its engineers, this is what comes with it. It tells you that this is not a daycare, or for coasters, or for the faint of heart, especially at a startup at the epicenter of the AI competition.

      There is now a massive queue of desperate 'software engineers' ready to kill for a job at OpenAI, who will not tolerate the word "burnout" and might even work 24 hours a day to keep the job away from others.

      For those who love what they do, the word "burnout" doesn't exist.

      • cylemons 12 hours ago
        For these prestigious companies it makes sense: work hard for a few years, then retire early.
    • babelfish 1 day ago
      This is what being a wartime company looks like
    • lvl155 1 day ago
      I am not saying that's easy work, but most motivated people do this. And if you're conscious of this, that probably means you viewed it more as a job than your calling.
  • a_bonobo 15 hours ago
    >Thanks to this bottoms-up culture, OpenAI is also very meritocratic. Historically, leaders in the company are promoted primarily based upon their ability to have good ideas and then execute upon them. Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering. That matters less at OpenAI than it might at other companies. The best ideas do tend to win.

    This sets off red flags for me: companies that say they are meritocratic, flat, etc. often have invisible structures that favor the majority. Valve Corp is a famous example of this, where it led to many problems; see https://www.pcgamer.com/valves-unusual-corporate-structure-c...

    >It sounds like a wonderful place to work, free from hierarchy and bureaucracy. However, according to a new video by People Make Games (a channel dedicated to investigative game journalism created by Chris Bratt and Anni Sayers), Valve employees, both former and current, say it's resulted in a workplace two of them compared to The Lord of The Flies.

    • darkoob12 13 hours ago
      I think in this structure people only think locally; they are not concerned with the overall mission of the company and do not actively think about the morality of the mission or whether they are following it.
      • DrewADesign 3 hours ago
        In my experience, front-line and middle managers will penalize workers that stray from their explicit goals because they think something else more readily contributes to the company’s mission.
      • samultio 9 hours ago
        Kind of sounds like a traditional public company is a constitutional monarchy, not always the best but at least there's a balance of interests, while a private company can be an autocracy or oligarchy where sucking up and playing tribal politics is the only way to survive.
        • mhog_hn 9 hours ago
          Anyone tried setting up a modestly sized tech company where employees are randomly placed into various seniority roles at the start of each year? Of course considering capabilities and some business continuity concerns…

          Could work with a bunch of similarly skilled people in a narrow niche

    • mcosta 13 hours ago
      Are you implying that a top-down corporate structure is better?
      • NicuCalcea 10 hours ago
        If not better, certainly more honest.
  • dowager_dan99 5 hours ago
    Wild that OpenAI is changing so much that you can post about how things have radically changed in a year, and consider yourself a long-timer after less than 16 months. I'm highly skeptical that an org this big is based on merit and that there wasn't a lot of political maneuvering. You can have public politics or private politics, but "no politics" doesn't exist, at least once you hit <some> number of people, where "some" is definitely < the size of OpenAI. All I hear about OpenAI these days is politics.
  • tptacek 1 day ago
    This was good, but the one thing I most wanted to know about what it's like building new products inside of OpenAI is how and how much LLMs are involved in their building process.
    • edanm 10 hours ago
      Yes, same, that's a fascinating question that people are pretty tight-lipped about.

      Note he was specifically on the team that was launching OpenAI's version of a coding agent, so I imagine the numbers before that product existed could be very different to the numbers after.

    • girvo 14 hours ago
      Same! I was really hoping it was discussed. I’m assuming “lots, but it depends on what you’re working on”?
    • wilkomm 1 day ago
      That's a good question!
    • vFunct 1 day ago
      He describes 78,000 public pull requests per engineer over 53 days. LMAO. So it's likely 99.99% LLM written.

      Lots of good info in the post, surprised he was able to share so much publicly. I would have kept most of the business process info secret.

      Edit: NVM. That 78k pull requests figure is for all users of Codex, not the engineers building Codex.

  • fogbeak 4 hours ago
    Absolutely hilarious to assert that "everyone at OpenAI is trying to do the right thing" and then compare it to Los Alamos, the creators of the nuclear bomb.
    • thinkingtoilet 2 hours ago
      Well, no one accused wealthy tech bros of being in touch.
  • chvid 15 hours ago
    This stuff:

    - The company was a little over 1,000 people. One year later, it is over 3,000.

    - Changes direction on a dime.

    - Very secretive place.

    With the added "everything is a rounding error compared to GPU cost" and "this creates a lot of strange-looking code because there are so many ways you can write Python".

    Is not something that is going to last.

  • retornam 17 hours ago
    Doesn't it bother anybody that, according to this post, their product heavily relies on FastAPI, yet they haven't donated to the project and aren't listed as sponsors?

    https://github.com/sponsors/tiangolo#sponsors

    https://github.com/fastapi/fastapi?tab=readme-ov-file#sponso...

    • senko 14 hours ago
      Presumably it also relies on Python, Linux, nginx, coreutils and a bunch of other stuff they haven't donated to.
    • PunchTornado 12 hours ago
      no, because I wouldn't expect anything good from openai.
  • vonneumannstan 1 day ago
    >Safety is actually more of a thing than you might guess

    Considering that all the people who led the different safety teams have left or been fired, that Superalignment has been a total bust, and the various accounts from other employees about the lack of support for safety work, I find this statement incredibly out of touch and borderline intentionally misleading.

  • csomar 19 hours ago
    > Good ideas can come from anywhere, and it's often not really clear which ideas will prove most fruitful ahead of time.

    Is that why they have dozens of different models?

    > Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering.

    I don't think the Sam/Board drama confirms this.

    > The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.

    Did you thank your OpenAI overlords for letting you access their sacred latest models?

    +-+-

    This reads like an ad for OpenAI, or an attempt by the author to court them again. I am not sure how anyone can take his words seriously.

  • theletterf 1 day ago
    For a company that has grown so much in such a short time, I continue to be surprised by its lack of technical writers. Saying the docs could be better is a euphemism, and I still can't find fellow tech writers working there. Compare this with Anthropic and its documentation.

    I don't know what the rationale is for not hiring tech writers, other than nobody having suggested it yet, which is sad. Great dev tools require great docs, and great docs require teams that own them and grow them as a product.

    • mlinhares 1 day ago
      The higher-ups don't think there's value in that. Back at DigitalOcean they had an amazing tech writing team, with people with years of experience, doing some of the best tech docs in the industry. When the layoffs started, the writing team was the first to be cut.

      People look at it as a cost and nothing else.

      • nikcub 17 hours ago
        I didn't realise that team at DO was let go, what a horrible decision - the SERP footprint of DO was immense and the quality of the content was fantastic.
      • rs186 5 hours ago
        Really makes me worry about the future of MDN docs.
    • csomar 19 hours ago
      I think he explained it in his post? You get rewarded for "actions" which is making cool things and stuff. You don't get rewarded for writing docs.
  • simonw 1 day ago
    Whoa, there is a ton of interesting stuff in this one, and plenty of information I've never seen shared before. Worth spending some time with it.
  • jjani 1 day ago
    > The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in. There's an API you can sign up and use–and most of the models (even if SOTA or proprietary) tend to quickly make it into the API for startups to use.

    The comparison here should clearly be with the other frontier model providers: Anthropic, Google, and potentially Deepseek and xAI.

    Comparing them gives the exact opposite conclusion - OpenAI is the only model provider that gates API access to their frontier models behind draconian identity verification (also, Worldcoin anyone?). Anthropic and Google do not do this.

    OpenAI hides their model's CoT (inference-time compute, thinking). Anthropic to this day shows their CoT on all of their models.

    Making it pretty obvious this is just someone patting themselves on the back and doing some marketing.

    • harmonic18374 1 day ago
      Yes, also OpenAI being this great nimble startup that can turn on a dime, while in reality Google reacted to them and has now surpassed them technically in every area except image prompt adherence.
    • tbcj 18 hours ago
      Anthropic banned my accounts for violating the ToS before I had even used them. Appeals did nothing. Only when I started using a Google login did they stop banning them. This isn't an OpenAI-only problem.
    • pyman 21 hours ago
      There are only two hard things in Computer Science: cache invalidation and naming things:

      CloseAI.

    • ivape 20 hours ago
      > OpenAI hides their model's CoT (inference-time compute, thinking)

      Probably because Deepseek trained a student model off their frontier model.

      • jjani 18 hours ago
        And the same thing could very easily happen to Anthropic, yet they choose not to hide it.
    • levzzz 1 day ago
      [dead]
  • daxfohl 3 hours ago
    What do people not like about Azure IAM? That's the one I'm most familiar with, and I've always thought it was decent, pretty vanilla.

    When I go to AWS, it looks similar, except role assignments can't be scoped, so it needs more duplication and maintenance. In that way Azure seems nicer. In everything else, it seems pretty equivalent.

    But I see it catching flak occasionally on HN, so curious what others dislike.
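
    To make the scoping difference concrete, here's a minimal structural sketch in Python (all IDs and names are hypothetical, and these are data shapes, not working API calls):

        # Structural sketch of the two models (all IDs hypothetical).

        # Azure: the role assignment itself is scoped, so one built-in role
        # can be granted at subscription, resource-group, or single-resource level.
        azure_role_assignment = {
            "principalId": "user-or-service-principal-id",
            "roleDefinitionId": "built-in/Reader",
            "scope": "/subscriptions/SUB_ID/resourceGroups/my-rg",
        }

        # AWS: a policy attaches to a role with no scope of its own; any narrowing
        # has to be written into the policy document's Resource element, so finer
        # scoping means authoring and maintaining more policy documents.
        aws_policy_document = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::my-bucket/*",
            }],
        }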

  • fidotron 1 day ago
    > There's a corollary here–most research gets done by nerd-sniping a researcher into a particular problem. If something is considered boring or 'solved', it probably won't get worked on.

    This is a very interesting nugget, and if accurate this could become their Achilles heel.

    • ACCount36 23 hours ago
      It's not "their" Achilles heel. It's the Achilles heel of the way humans work.

      Most top-of-their-field researchers are on top of their field because they really love it, and are willing to sink insane amounts of hours into doing things they love.

  • motbus3 13 hours ago
    I didn't find any surprises reading this post.

    If anything about OpenAI should bother people, it's how they pretend to be blind to the consequences because of "the race". Leaving the decision of IF and WHAT should be done to the top heads alone has never worked well.

  • JonathanRaines 1 day ago
    Fascinating that you chose to compare OpenAI's culture to Los Alamos. I can't tell if you're hinting AI is as world ending as nuclear weapons or not.
  • nilirl 16 hours ago
    Are any of these causal to OpenAI's success? Or are they incidental? You can throw all of this "culture" into an org but I doubt it'd do anything without the literal world-changing technology the company owns.
  • frankfrank13 1 day ago
    > As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing

    I doubt many people would say something contrary to this about their (former) colleagues, which means we should always take this with a (large) grain of salt.

    Do I think (most) AT&T employees wanted to let the NSA spy on us? Probably not. Google engineers and ICE? Palantir and... well, idk, I think everyone there knows what Palantir does.

  • segalord 14 hours ago
    > The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement.

    That is literally how OpenAI gets data for fine-tuning its models: by testing them on real users and letting them supply data and use cases. (Tool calling, computer use, thinking - all of these were championed by people outside, and they had the data.)

  • paxys 1 day ago
    > An unusual part of OpenAI is that everything, and I mean everything, runs on Slack.

    Not that unusual nowadays. I'd wager every tech company founded in the last ~10 years works this way. And many of the older ones have moved off email as well.

    • david_shi 22 hours ago
      I wonder how much of this data Salesforce can use, a literal goldmine of information
      • denimnerd42 20 hours ago
        Isn't there a clause in the Slack contract saying you can't use the Slack API to pull data to train an AI?
        • david_shi 15 hours ago
          rules for thee, not for me
  • dcreater 23 hours ago
    This is Silicon Valley culture on steroids: I really have to question whether it is positive for any involved party. Codex has almost no mindshare, and rightly so. It's a textbook also-ran, except it came from the most dominant player and was outpaced by Claude Code in a matter of weeks.

    Why go through all that? A much better scenario would have been OpenAI carefully assessing different approaches to agentic coding and releasing a more fully baked product with solid differentiation. Even Amazon just did that with Kiro.

    • energy123 16 hours ago
      What I read in this blogpost is a description of how every good research organization works, from academia to private labs. The command and control, centrally planned approach doesn't work.
    • danenania 17 hours ago
      Codex is quite different from Claude Code. It’s more similar to Devin.

      Maybe you’re thinking of the confusingly named Codex CLI?

  • jordanmorgan10 23 hours ago
    I'm at a point in my life and career where I'd never entertain working those hours. Missed basketball games, seeing kids come home from school, etc. I do think when I first started out, and had no kiddos, maybe some crazy sprints like that would've been exhilarating. No chance now though.
    • chribcirio 22 hours ago
      > I'm at a point in my life and career where I'd never entertain working those hours.

      That’s ok.

      Just don’t complain about the cost of daycare, private school tuition, or your parents senior home/medical bills.

      • metaltyphoon 17 hours ago
        > Just don’t complain about the cost of daycare, private school tuition, or your parents senior home/medical bills.

        How does any of this relate to the number of hours one works?

  • noname120 4 hours ago
    Does Sa… uh OpenAI still do stock clawbacks from employees who say negative things about the company after leaving?
  • gsf_emergency_2 21 hours ago
    If you'd like some "objective" insights into how bottoms-up innovation at OpenAI works..

    a research manager there coauthored this under-hyped book: https://engineeringideas.substack.com/p/review-of-why-greatn...

  • nembal 1 day ago
    Wham. Thanks for sharing anecdotal episodes from OAI's inner mechanism from an eng perspective. I wonder: if OAI weren't married to Azure, would the infra be more resilient and require less eng effort inventing things just to run (at scale)?

    What I haven't seen much of is the split between eng and research, and how people within the company think about AGI, the future, the workforce, etc. Is it the usual SF wonderland, or is there an OAI-specific value alignment once someone is working there?

  • ThouYS 1 day ago
    These one or two year tenures.. I don't know man
  • LZ_Khan 1 day ago
    This is just the exact same culture as Deepmind minus the "everything on Slack" bulletpoint.
    • xmasotto 14 hours ago
      Surely one wouldn't complain about infra at DeepMind?
  • david_shi 22 hours ago
    Python monorepo is the biggest surprise in this whole article
    • nharada 2 hours ago
      Yeah legit interested in their toolchain. I tried Pants and had a bad experience. Bazel is too heavyweight imo and doesn't deal with a variety of dependencies well.
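
      For a sense of where the "heavyweight" feeling comes from, here's a minimal rules_python BUILD sketch (paths hypothetical); each package directory in a Bazel monorepo needs declarations like this:

          # BUILD.bazel -- minimal per-package target sketch (paths hypothetical)
          load("@rules_python//python:defs.bzl", "py_binary", "py_library")

          py_library(
              name = "core",
              srcs = glob(["core/**/*.py"]),
          )

          py_binary(
              name = "server",
              srcs = ["server.py"],
              deps = [":core"],  # dependencies declared per target, not per repo
          )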
    • menaerus 12 hours ago
      Chunking a codebase that you entirely own into packages is like intentionally making your life miserable by imposing the same kind of volatility you would otherwise find in the development process of building a Linux distribution. It's a misnomer.
  • sebslomski 13 hours ago
    Good writing, enjoyed the article. Also, it looks like more time was spent writing this article than actually working at OpenAI? A 1-year tenure including paternity leave?
  • troupo 1 day ago
    > As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.

    To quote Jonathan Nightingale from his famous thread on how Google sabotaged Mozilla [1]:

    --- start quote ---

    The question is not whether individual sidewalk labs people have pure motives. I know some of them, just like I know plenty on the Chrome team. They’re great people. But focus on the behaviour of the organism as a whole. At the macro level, google/alphabet is very intentional.

    --- end quote ---

    Replace that with OpenAI

    [1] https://archive.is/2019.04.15-165942/https://twitter.com/joh...

  • upghost 1 day ago
    Granted the "OpenAI is not a monolith" comment, interesting that use of AI assisted coding was a curious omission from the article -- no mention if encouraged or discouraged.
  • breadwinner 19 hours ago
    > What's funny about this is there are exactly three services that I would consider trustworthy: Azure Kubernetes Service, CosmosDB (Azure's document storage), and BlobStore.

    CosmosDB is trustworthy? Everyone I know that used CosmosDB ended up rewriting their code because of throttling.
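
    For anyone who hasn't hit it: the throttling surfaces as HTTP 429 errors. A minimal retry sketch with the azure-cosmos Python SDK (endpoint, key, and database/container names hypothetical; note the SDK also retries 429s internally a few times by default) might look like:

        import time
        from azure.cosmos import CosmosClient, exceptions

        # Hypothetical endpoint/key and database/container names.
        client = CosmosClient("https://example.documents.azure.com:443/", credential="<key>")
        container = client.get_database_client("appdb").get_container_client("items")

        def read_with_backoff(item_id, partition_key, max_retries=5):
            for attempt in range(max_retries):
                try:
                    return container.read_item(item=item_id, partition_key=partition_key)
                except exceptions.CosmosHttpResponseError as err:
                    if err.status_code != 429:  # only retry "request rate too large"
                        raise
                    time.sleep(min(2 ** attempt, 8))  # simple exponential backoff
            raise RuntimeError("still throttled after %d retries" % max_retries)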

    • daxfohl 2 hours ago
      It's pretty heavily used within Azure itself, and my old team didn't have any issues, even though we had a high-volume service with stored procs, triggers, list indexes, etc. And it was cheaper and faster than the SQL Server instance it replaced (granted, the original SQL-based app had tons of big joins and wasn't designed to scale).

      When I was working elsewhere, I knew a few other teams that had to migrate off Spanner due to throttling and random hiccups, though if Google uses it for Zanzibar, they must not have that problem internally. Maybe all these companies raise throttling limits for first-party use cases.

      My current team uses Dynamo, which has also given us throttling issues, but generally only when we try to do things it's not designed for (bulk-updating a bunch of records in the same partition). Other than that, it seems reliable (incredibly consistent low latencies) and a bit cheaper than my experience with Cosmos, though with fewer features.

      They all seem to have their own pros and cons in my experience.

  • viccis 23 hours ago
    >It's hard to imagine building anything as impactful as AGI

    >...

    >OpenAI is also a more serious place than you might expect, in part because the stakes feel really high. On the one hand, there's the goal of building AGI–which means there is a lot to get right.

    I'm kind of surprised people are still drinking this AGI Koolaid

    • windowshopping 16 hours ago
      for real. same. the level of delusion. i think what'll happen is they'll get some really advanced agents that can effectively handle most general tasks and they'll call it AGI and say they've done it. it won't really be AGI, but a lot of people will have bought into the lie thanks to the incredibly convincing facsimile they'll have created.
      • cootsnuck 14 hours ago
        > i think what'll happen is they'll get some really advanced agents that can effectively handle most general tasks and they'll call it AGI

        They likely won't wait even for that. Because that itself is still really far off.

  • paradite 16 hours ago
    This is an incredibly fascinating read into how OpenAI works.

    Some of the details seem rather sensitive to me.

    I'm not sure if the essay is going to stay up for long, given how "secretive" OpenAI is claimed to be.

  • mehulashah 15 hours ago
    While their growth is faster and technology different, the atmosphere feels very much like AWS back in 2014. I stayed for 8 years because I enjoyed it so much.
  • paul7986 2 hours ago
    I have now developed a hate (with a small sprinkle of love) relationship with AI.

    This past week I canceled my $20 subscription to GPT, urged my friends to do the same (I got them hooked), and will just be using Gemini from now on. It can instantly create maps for everything from road-trip routes to planning creek-tubing trips. GPT does not do maps, and I was paying $20 for it while Gemini is free? Bye!

    Further, and more important: this guy says in his blog he is happy to help with the destruction of our (white-collar) society, which will cause many, MANY people financial and emotional pain while he lives high off the hog. An impending 2030 depression, 100 years after the last one, is unfortunately my bet!

    Now AI could indeed help us cure disease, but if the majority are destitute while a few hundred or so live high off the hog, the benefits of AI are canceled out.

    AI can definitely do the job ten people used to do, yet NOW it's just one person typing into a text prompt to complete the tasks of ten.

    Why are we here, sprinting towards this goal of destruction? Let China destroy themselves!!!

  • reducesuffering 1 day ago
    "Safety is actually more of a thing than you might guess if you read a lot from Zvi or Lesswrong. There's a large number of people working to develop safety systems. Given the nature of OpenAI, I saw more focus on practical risks (hate speech, abuse, manipulating political biases, crafting bio-weapons, self-harm, prompt injection) than theoretical ones (intelligence explosion, power-seeking). That's not to say that nobody is working on the latter, there's definitely people focusing on the theoretical risks. But from my viewpoint, it's not the focus."

    This paragraph doesn't make any sense. If you read a lot of Zvi or LessWrong, the misaligned intelligence explosion is the safety risk you're thinking of! So readers' guesses are actually right that OpenAI isn't really following Sam Altman's:

    "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could."[0]

    [0] https://blog.samaltman.com/machine-intelligence-part-1

  • tehnub 14 hours ago
    >As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google.

    Sleeping on Keen Technologies, I see

  • sarthaksoni 16 hours ago
    Great read! As a software engineer sitting here in India, it feels like a privilege to peek inside how OpenAI works. Thanks for sharing!
  • maxnevermind 19 hours ago
    What I really wanted to know is whether OpenAI (and other labs, for that matter) actually use their own products, and not just casually but making LLMs a core of how they operate. For example: using LLMs for coding in prod, training/fine-tuning internal models to stay aligned on the latest updates, finding answers, etc. Do they put their money where their mouth is; do LLMs help with productivity? There is no mention of it in the article, so I guess they don't?
    • neonbjb 13 hours ago
      Yes, we do. If you worked at Google, you know Moma. Our Moma is an internal version of chat. It is very good.
    • danenania 18 hours ago
      I don’t know, but I’d guess they are using them heavily, though in a piecemeal fashion.

      As impressive as LLMs can be at one-shotting certain kinds of tasks, working in a sprawling production codebase like the one described with tight performance constraints, subtle interdependencies, cross-cutting architectural concerns, etc. still requires a human driving most of the time. LLMs help a lot for this kind of work, but the human is either carefully assimilating their output or carefully choosing spots where (with detailed prompts) they can generate usable code directly.

      Again, just a guess, but this my impression of how experienced engineers (including myself) are using LLMs in big/nontrivial codebases, and I’ve seen no indication that engineering processes at the labs are much different from the wider industry.

  • ishita159 1 day ago
    this post was such a brilliant read. to read about how they still have a YC-style startup culture, are meritocratic, and people get to work on things they find interesting.

    as an early stage founder, i worry about the following a lot.

    - changing directions fast when i lose conviction
    - things breaking in production
    - speed, or the lack of it

    I learned to actually not worry about the first two.

    But if OpenAI shipped Codex in 7 weeks, small startups have lost the speed advantage they had. Big reminder to figure out better ways to solve for speed.

  • randometc 1 day ago
    What’s the GTM role referenced a couple of times in the post?
    • tptacek 1 day ago
      Go-to-market. Outbound marketing and sales, pipeline definition, analytics.
      • randometc 1 day ago
        That’s how I imagined it, kind of a hybrid of what I’ve seen called Product Marketing Manager and Product Analyst, but other replies and OpenAI job postings indicate maybe it’s a different role, more hands on building, getting from research to consumer product maybe?
    • koolba 1 day ago
      GTM = go to market

      An actual offering made to the public that can be paid for.

    • skywhopper 1 day ago
      “Go To Market”, ie the group that turns the tech into products people can use and pay for.
  • nsoonhui 21 hours ago
    He joined last year in May and left recently. About one year of stay.

    I wonder if one year is enough time for programmers to understand a codebase, let alone meaningfully contribute patches. But then, job hopping is increasingly common, and it results in a drop in product quality. I wonder what value the job hoppers add to a company.

    • tibbar 20 hours ago
      Well, since he worked on a brand-new product, it probably didn't matter too much.
  • throwawayohio 1 day ago
    > As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.

    I appreciate where the author is coming from, but I would have just left this part out. If there is anything I've learned during my time in tech (ESPECIALLY in the Bay Area) it's that the people you didn't meet are absolutely angling to do the wrong thing(TM).

    • myaccountonhn 1 day ago
      I've been in circles with very rich and somewhat influential tech people, and there's a lot of talk about helping others; but beneath that veneer you notice that many of them are just ripping people off, doing coke, and engaging in self-centered spiritual practices (especially the crypto people).

      I also don't trust that people within the system can assess whether what they're doing is good or not. I've talked with higher-ups at fashion companies who genuinely believe their company is doing great work for the environment when they basically invented fast fashion. I felt first-hand how my mind slowly warped itself into believing that ad tech isn't so bad for the world when I worked for an ad-tech company, and only after leaving did I realize how wrong I was.

      • beezlebroxxxxxx 2 hours ago
        I agree. I've met some very wealthy people before and when you're outside of the bubble you start to see how $$$ helps a lot to justify anything you do as helping people. A lot of wealthy people will say "look I contribute to this cause!" as an indulgence in the religious sense to "counteract" much of what they do in their day to day or the raison d'etre of their work.

        It's weird to say, but some people simply are not tuned in to the long-term ramifications of their work. The "fuck you, I got mine" mentality is at play even in many outwardly progressive-seeming communities, where short-term gain is treated as a moral imperative above doing good.

    • paxys 1 day ago
      And it's not just about some people doing good and others doing bad. Individual employees all doing the "right thing" can still be collectively steered in the wrong direction by higher ups. I'd say this describes the entirety of big tech.
    • jjulius 1 day ago
      When your work provides lunch in a variety of different cafeterias all neatly designed to look like standalone restaurants, directly across from which is an on-campus bank that will assist you with all of your financial needs before you take your company-operated Uber-equivalent to the next building over and have your meeting either in that building's ballpit, or on the tree-covered rooftop that - for some reason - has foxes on top, it's easy to focus only on the tiny "good" thing you're working on and not the steaming hot pile of garbage that the executives at your company are focused on but would rather you not see.

      Edit: And that's to say nothing of the very generous pay...

    • archagon 1 day ago
      Yes. We already know that Altman parties with extremists like Yarvin and Thiel and donates millions to far-right political causes. I’m afraid the org is rotten at its core. If only the coup had succeeded.
  • teiferer 14 hours ago
    >It's hard to imagine building anything as impactful as AGI,

    Where is this AGI that you've built then? The reason for the very existence of that term is an acknowledgement that what's hyped today as AI isn't actually what AI used to mean, but the hype cycle VC money depends on using the term AI, so a new term was invented to denote the thing the old term used to denote. Do we need yet another term because AGI is about to get burned the same way?

    > and LLMs are easily the technological innovation of the decade.

    Sorry, what? I'm sure it feels that way from some corners of that particular tech bubble, but my 73-year-old mom's life is not impacted by LLMs at all - well, except when she opens her Facebook feed once a month and gets blasted with tons of fake BS. Really something to be proud of for us as an industry? A tech breakthrough of the last decade that might have literally saved her life, though, was mRNA vaccines, and I could likely come up with more examples if I thought about it for more than 3 seconds.

  • imiric 1 day ago
    Thanks for sharing.

    One thing I was interested to read but didn't find in your post is: does everyone believe in the vision that the leadership has shared publicly, e.g. [1]? Is there some skepticism that the current path leads to AGI, or has everyone drunk the Kool-Aid? If there is some dissent, how is it handled internally?

    [1]: https://blog.samaltman.com/the-gentle-singularity

    • tedsanders 1 day ago
      Not the author, but I work at OpenAI. There is a wide variety of viewpoints, and it's fine for employees to disagree on timelines and impact. I myself published a 100-page paper on why I think transformative AGI by 2043 is quite unlikely (https://arxiv.org/abs/2306.02519). From informal discussion, I think the vast majority of employees don't think that we're mere years from a post-scarcity utopia where we can drink mai tais on the beach all day. But there is a lot of optimism about the rapid progress in AI, and I do think that it's harder to forecast the path of a technology that has the potential to improve itself. So much depends on your definition of AGI. In a sense, GPT-4 is already AGI in the literal sense that it's an artificial intelligence with some generality. But in the sense of automating the economy, it's of course not close.
      • imiric 23 hours ago
        Thank you!

        The hype around this tech strongly promotes the narrative that we're close to exponential growth, and that AGI is right around the corner. That pretty soon AI will be curing diseases, eradicating poverty, and powering humanoid robots. These scenarios are featured in the AI 2027 predictions.

        I'm very skeptical of this based on my own experience with these tools, and rudimentary understanding of how they work. I'm frankly even opposed to labeling them as intelligent in the same sense that we think about human intelligence. There are certainly many potentially useful applications of this technology that are worth exploring, but the current ones are awfully underwhelming, and the hype to make them seem more than they are is exhausting. Not to mention that their biggest potential to further degrade public discourse and overwhelm all our communication channels with even more spam and disinformation is largely being ignored. AI companies love to talk about alignment and safety, yet these more immediate threats are never addressed.

        Anyway, it's good to know that there are disagreements about the impact and timelines even inside OpenAI. It will be interesting to see how this plays out, if nothing else.

        • mirekrusin 16 hours ago
          Instead of looking at absolute capabilities, look at their first and second derivatives.
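
          A toy sketch of what that means in practice, with made-up benchmark numbers:

              # Toy illustration (made-up scores per model generation).
              scores = [30, 45, 62, 80]

              velocity = [b - a for a, b in zip(scores, scores[1:])]          # first derivative
              acceleration = [b - a for a, b in zip(velocity, velocity[1:])]  # second derivative

              print(velocity)      # [15, 17, 18] -- capability is still climbing
              print(acceleration)  # [2, 1] -- and the climb itself isn't slowing much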
      • criddell 23 hours ago
        > depends on your definition of AGI

        What definition of AGI is used at OpenAI?

        My definition: AGI will be here when you can put it in a robot body in the real word and interact with it like you would a person. Ask it to drive your car or fold your laundry or make a mai tai and if it doesn’t know how to do that, you show it, and then it can.

        • tedsanders 23 hours ago
          In the OpenAI charter, it's "highly autonomous systems that outperform humans at most economically valuable work."

          https://openai.com/charter/

          • criddell 19 hours ago
            Huh. That feels like kind of a weak definition.

            That makes me wonder what kinds of work aren’t economically valuable? Would that be services generally provided by government?

            • tedsanders 18 hours ago
              Maybe I'm biased, but I actually think it's a pretty good definition, as definitions go. All of our narrow measures of human intelligence that we might be tempted to use - win at games, solve math problems, ace academic tests, dominate at programming competitions - are revealed as woefully insufficient as soon as an AI beats them but fails to generalize far beyond. But if you have an AI that can generate lots of revenue doing a wide variety of real work, then you've probably built something smart. Diverse revenue is a great metric.
              • criddell 8 hours ago
                I also find it interesting that the definition always includes the "outperforms humans" qualifier. Maybe our first AGIs will underperform humans.

                Imagine I built a robot dog that behaved just like a biological dog. It bonds with people, can be trained, shows emotion, communicates, likes to play, likes to work, solves problems, understands social cues, and is loyal. IMHO, that would qualify as an AGI even though it isn't writing essays or producing business plans.

                • imiric 7 hours ago
                  > IMHO, that would qualify as an AGI even though it isn't writing essays or producing business plans.

                  I'm not sure it would, though. The "G" in AGI stands for "General", which a dog obviously can't showcase. The comparison must be done against humans, since the goal is to ultimately have the system perform human tasks.

                  The definition mentioned by tedsanders seems adequate to me. Most of the terms are fuzzy ("most", "outperform"), but limiting the criteria to economic value narrows it down to a measurable metric. Of course, this could be exploited by building a system that optimizes for financial gain over everything else, but this wouldn't be acceptable.

                  The actual definition is not that important, IMO. AGI, if it happens, won't appear suddenly from a singular event, but as a gradual process until it becomes widely accepted that we have reached it. The impact on society and our lives would be impossible to ignore at that point. The problem with this is that along the way there will be charlatans and grifters shouting from the rooftops that they've already cracked it, but this is nothing new.

                  • criddell 7 hours ago
                    > The "G" in AGI stands for "General", which a dog obviously can't showcase.

                    That isn't obvious to me at all. If you don't like the dog analogy, lets try another: Does a human toddler qualify as having general intelligence?

                    • imiric 2 hours ago
                      Hhmm good point.

                      I would say... yes. But with the strong caveat that when used within the context of AGI, the individual/system should be able to showcase that intelligence, and the results should be comparable to those of a neurotypical adult human. Both a dog and a toddler can show signs of intelligence when compared to individuals of their similar nature, but not to an adult human, which is the criteria for AGI.

                      This is why I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI. It could certainly be considered a step in that direction, but not strictly AGI.

                      But again, I don't think that a strict definition of AGI is helpful or necessary. The impact of a system with such capabilities would be impossible to deny, so a clear definition doesn't really matter.

            • hobofan 14 hours ago
              > Would that be services generally provided by government?

              Most services provided by governments are economically valuable, as they provide infrastructure that allows individual actors to perform better, increasing collective economic output. (Though for e.g. high-expenditure infrastructure, it could quite easily be argued that it is not economically profitable.)

    • fragmede 1 day ago
      Externally there's no rigorous definition as to what constitutes AGI, so I'd guess internally it's not one monolithic thing they're targeting either. You'd need everyone to take a class about the nature of intelligence first, and all the different kinds of it just to begin with. There's undoubtedly dissent internally as to the best way to achieve chosen milestones on the way there, as well as disagreement that those are the right milestones to begin with. Think tactical disagreement, not strategic. If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?
      • imiric 1 day ago
        Well, Sam Altman has a clear definition of ASI, and AGI is something they've been thinking about for a long time, so presumably they must have some accepted definition of it.

        My question was whether everyone believes this vision that ASI is "close", and more broadly whether this path leads to AGI.

        > If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?

        People can have all sorts of reasons for working with a company. They might want to work on cutting-edge tech with smart people and infinite resources, for investment or prestige, but not necessarily buy into the overarching vision. I'm just wondering whether such a profile exists within OpenAI, and if so, how it is handled.

  • Vektorceraptor 14 hours ago
    My biggest problem with these new companies is their core philosophy. First, these companies generate their own demand — natural demand for their products rarely exists. Therefore, they act more like sellers than developers. Second, they always follow the same maxim: "What's the next logical step?" This naturally follows from the first premise, because this allows you to ignore everything "real". You are simply bound to logic. They have no "problems" to solve, yet they offer you solutions - simply as a logical consequence of their own logic. Has anyone ever actually asked if coders would use agents if it meant losing their jobs? Thirdly, this naturally brings to light the B2B philosophy. The customer is merely a catalyst that will eventually become superfluous. Fourth, the same excuse and ignorance of the form "(we don't know what we are doing, but) time will tell". What if time tells you "this is bad and you should and could have known better?"
  • Havoc 21 hours ago
    Interesting read!

    Discounting Chinese labs entirely for AGI seems like a misstep though. I find it hard to believe there won't be at least a couple of contenders.

    • ed_mercer 21 hours ago
      Well they don’t have (good) GPUs, so how are they going to seriously compete?
  • breadwinner 19 hours ago
    > As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google.

    Umm... I don't think Zuckerberg would agree with this statement.

    • mirekrusin 16 hours ago
      ...or Musk, or chinese labs.
  • hoseja 10 hours ago
    >As a result, OpenAI is a very secretive place.

    The choice of name continues providing incredible amusement.

  • ujkiolp 5 hours ago
    this sounds awfully doctored
  • suncemoje 1 day ago
    „the right people can make magic happen“

    :-)

    • NoOn3 11 hours ago
      It is very strange to hear this in connection with OpenAI. After all, their goal is to save people from things like this...
  • rasmul 11 hours ago
    some thoughts

    - there is no such thing as "OpenAI" as a decision unit; there are people at the top who decide, plus shareholder pressure

    - a narcissist optimizes for himself and has no affective empathy: cruel, extremely selfish, image-oriented, power-hungry, a liar, etc.

    - having no structure means having a hidden structure, and hence real power for the few at the top with no accountability (narcissists love this too)

    - framing this as meritocracy is positive framing; it makes it very easy to hide incompetence

    - people want to do good; great, but naive, and as a leader you can exploit this motivation to let them burn out working for YOUR goals... a narcissist is great at doing this to people, as is the richest man in the world by share price

    all in all, this kind of mindset is good for startups or for landing a lucky punch, but AGI will be brought by Anthropic, Google, and Ilya :) you will not get a series of lucky punches; you have to have a direction

    I think Sam Altman, a terrible narcissist, uses OpenAI to feel great, and he has no strategy beyond using others for his own benefit, because narcissists don't care; they just care about their image and power... and that is why OpenAI is going down... bundling with Microsoft was a big red flag in the first place...

    when i think of openAI, it is a bit like Netscape Navigator + Internet Explorer in one :)

    Anthropic is like Safari + Brave

    Google is like ... yeah :)

    Ilya is like Opera/Vivaldi or so

  • carimura 17 hours ago
    No way the newborn slept until 5:30 every morning.
  • VirusNewbie 14 hours ago
    Interesting that so many folks from Meta joined OpenAI - but Meta wasn't really able to roll its own competitive foundational model, so is that a bad sign?

    Kind of interesting that folks aren't impressed by Azure's offering. I wonder if OpenAI is handicapped by that as well, compared to being on AWS or GCP.

  • d--b 16 hours ago
    > As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing

    Of course they are. People in orgs like that are passionate; they want to work on the tech because LLMs are a once-in-a-lifetime technological breakthrough. But they don't realize enough that they're working for bad people. Ultimately all of that tech is in the hands of Altman, and that guy hasn't proven to be the saint he hopes to become.

  • seydor 1 day ago
    seems like the whole thing was meant to be a jab at Meta
    • pchristensen 1 day ago
      I definitely didn't get that feeling. There was a whole section about how their infra resembles Meta and they've had excellent engineers hired from Meta.
    • ishita159 1 day ago
      was it?

      It was, however, interesting to learn that it isn't just Meta poaching OpenAI; the reverse has also happened.

      • latency-guy2 22 hours ago
        Very apt. OpenAI's start was always poach-central; we know this from the executive email leaks via Elon and Sam, respectively.

        Any complaining on any company's behalf about "poaching" is nonsense regardless, IMO.

  • yahoozoo 23 hours ago
    It would be interesting to read the memoirs of former OpenAI employees that dive into whether they thought the company was on the right track towards AGI. Of course, that’s an NDA violation at best.
  • cess11 1 day ago
    20 years from now, the only people who will remember how much you worked are your family, especially your kids.

    Seems like an awful place to be.

  • krashidov 1 day ago
    > giant python monolith

    this does not sound fun lol

  • dagorenouf 1 day ago
    Maybe I'm paranoid, but this sounds too good to be true. Almost like something planted to help with recruiting after Meta poached their best guys.
    • Reubend 1 day ago
      The fact that they gave little shout outs at the end makes me think they wanted to avoid burning bridges by criticizing the company.
      • bink 1 day ago
        They almost certainly still own shares/options in the company.
      • istjohn 1 day ago
        They didn't mind burning MS
    • torginus 23 hours ago
      It reads to me as a contrast to the grandiose claims OpenAI makes about its own products: it views AI as 'regular technology' and pragmatically tries to build viable products with it.
    • lucianbr 1 day ago
      > It's hard to imagine building anything as impactful as AGI, and LLMs are easily the technological innovation of the decade.

      I really can't see a person with at least minimal self-awareness talking their own work up this much. Give me a break dude. Plus, you haven't built AGI yet.

      Can't believe there's so little critique of this post here. It's incredibly self-serving.

      • sensanaty 11 hours ago
        Reading through the thread, it seems like half of the commenters work for OpenAI, so it'd make sense people aren't critiquing it much :p
  • brcmthrowaway 23 hours ago
    Lucky to be able to write this... likely just vested with FU money!
  • bagxrvxpepzn 1 day ago
    He joins a proven unicorn at its inflection point and then leaves mere days after hitting his vesting cliff. All of this "learning" and "experience" talk is sopping wet with cynicism.
    • guywithabike 1 day ago
      He co-founded and sold Segment. You think he was just at OpenAI to collect a check? He lays out exactly why he joined OpenAI and why he's leaving. If you think everyone does things only for cynical reasons, it might be more a reflection of your personal impulses than of others'.
      • cainxinth 1 day ago
        Just because someone claims they are speaking in good faith doesn't mean we have to take their word for it. Most people in tech dealing with big money are doing it for cynical reasons. The talk of changing the world or "doing something hard" is typically just marketing.
        • m00x 1 day ago
          Calvin works incredibly hard and has very little ego. I was surprised he joined OpenAI since he's loaded from the Segment acquisition, but if anyone would do this, it makes sense that it's him. He's always looking for the hardest problem to work on.

          That's what he did at Segment, even in the later stages.

          • ml-anon 20 hours ago
            Someone putting their work project over their newborn in this circumstance (returning early from paid leave, no less) is 100% ego driven.
            • mirekrusin 16 hours ago
              Newborns constantly need mom, not dad. Moms need husbands, or their own moms, to help. The way it works is that you agree as a family what to do (to do it or not to do it), and everybody is happy with their lives. You can be a great dad and husband and still do all of it when it makes sense and your wife supports it. Not having kids in the first place could be considered ego driven; not this.
              • ml-anon 13 hours ago
                Incredible that you've managed to post this from the 1950s.
                • m00x 2 hours ago
                  No, he's right. I just went through the newborn phase, and the only person who was needed was mom. The kid wanted nothing to do with me; he just wanted food and sleep.
                  • ml-anon 55 minutes ago
                    Your poor partner.
    • dang 1 day ago
      Can you please make your substantive points without crossing into personal attack and/or name-calling?

      https://news.ycombinator.com/newsguidelines.html

      • bagxrvxpepzn 1 day ago
        Sorry, I removed the personal attack.
        • dang 23 hours ago
          I appreciate the edit, but "sopping wet with cynicism" still breaks the site guidelines, especially this one: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

          https://news.ycombinator.com/newsguidelines.html

          • bagxrvxpepzn 22 hours ago
            Understood, in the future I will refrain from questioning motives in featured articles. I can no longer edit my post but you may delete or flag it so that others will not be exposed to it.
            • dang 4 hours ago
              It's ok—we mostly just care about recalibrating for future posts. Thanks for the kind replies! As you can imagine, we don't always get those :)
    • tptacek 1 day ago
      I did not pick up much cynicism in this post. What about it seemed cynical to you?
      • bagxrvxpepzn 1 day ago
        Given that he leaves OpenAI almost immediately after hitting his 25% vesting cliff, it seems like his employment at OpenAI and this blog post (which makes him and OpenAI look good while making the reader feel good) were done cynically, i.e., primarily in his self-interest. What makes it even worse is his stated reason for leaving:

        > It's hard to go from being a founder of your own thing to an employee at a 3,000-person organization. Right now I'm craving a fresh start.

        This is just wholly irrational for someone whose credentials indicate he is capable of applying critical thinking toward accomplishing his goals. People who operate at that level don't often act on impulse or suddenly realize they want to do something different. It seems much more likely that he intentionally planned to give himself a year of vacation at OpenAI, which lets him hedge a bit while taking a breather before jumping back into being a founder.

        Is this essentially speculation? Yes. Is it cynical to assume he's acting cynically? Yes. Speculation on his true motives is necessary because otherwise we'll never get confirmation, short of him openly admitting to it (which is still fraught). We have to look at behaviors and actions and assess likelihoods from there.

        • tptacek 22 hours ago
          There's nothing cynical about leaving a job after cliffing. If a company wants a longer commitment than a year before issuing equity, it can set a longer cliff. We're all adults here.
          • bagxrvxpepzn 22 hours ago
            > There's nothing cynical about leaving a job after cliffing

            My criticism is that that detail is being obscured, and other explanations for leaving are being presented instead (cynically, IMO).

            • tptacek 22 hours ago
              I don't see anything interesting about that detail; you keep trying to make something out of it, but there's nothing there to talk about.

              There might be some marginal drama to scrape up here if the post was negative about OpenAI (I'd still be complaining about trying to whip up drama where there isn't any), but it's kind of glowing about them.

              • bagxrvxpepzn 21 hours ago
                Well now the goalpost has shifted from "it's not cynical" to "even if it is cynical it doesn't matter" and dang has already warned me so I'm hesitant to continue this thread. I'll just say that once you recognize that a lot of the fluff in this article is cynically motivated, it reduces your risk of giving the information presented more meaning than is really there.
                • tptacek 21 hours ago
                  Yeah, I just read what Dan said to you, and it makes sense, so we should wrap it up right here.
        • m00x 23 hours ago
          He likely received hundreds of millions from the Segment acquisition. Do you think he cares about the OpenAI vesting cliff?

          It's more likely that he was there to see how OpenAI was run, so he could learn and build something similar of his own afterward.

  • zzzeek 1 day ago
    > On the other hand, you're trying to build a product that hundreds of millions of users leverage for everything from medical advice to therapy.

    ... then the next paragraph

    > As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.

    not if you're trying to replace therapists with chatbots, sorry

  • tines 1 day ago
    Interesting how ChatGPT’s style of writing has made people start bolding so much text.
    • layer8 1 day ago
      I remember this being common business practice for written communication (email, design documents) circa 20 years ago, so that people at least read the important points, or can quickly pick them out again later.
    • isoprophlex 1 day ago
      Possibly the dumbest, blandest, most annoying kind of cultural transference imaginable. We dreamed of creating machines in our image, and now we're shaping ourselves in the image of our machines. Ugh.
      • enjoylife 18 hours ago
        I think we've always shaped ourselves based on what we're capable of building. Think of how infrastructure such as buildings and roadways shapes our lives within it. Where I do agree with you is in how LLMs are shaping our thinking: we are offloading a lot of our mental capacities with blind trust in the LLM output.
    • pchristensen 1 day ago
      People have bolded important points to make text easier to scan long before AI.
  • AIorNot 1 day ago
    I'm 50 and have worked at a few cool places and lots of boring ones. To paraphrase Tolstoy, who tends to be right: all happy families are alike; every unhappy family is unhappy in its own way.

    OpenAI is currently selecting for the brightest, most excited young minds (and a lot of money). Bright, young (as in full of energy), excited people will work well anywhere, especially if given a fair amount of autonomy.

    Young people talking about how hard they worked is not a sign of a great corporate culture, just a sign that they are in the super-excited stage of their careers.

    In the long run, who knows. I tend to view these companies as groups of like-minded people, and groups of people change; the dynamic can change overnight. So if they can sustain that culture, sure, but who knows.

    • tptacek 1 day ago
      I said this elsewhere on the thread and so apologize for repeating, but: I know mid-career people working at this firm who have been through these conditions, and they were energized by the experience. They're shipping huge stuff that tens of millions of people will use almost immediately.

      The cadence we're talking about isn't sustainable --- has never been sustained anywhere --- but if insane sprints like this (1) produce intrinsically rewarding outcomes and (2) punctuate otherwise-sane work conditions, they can work out fine for the people involved.

      It's completely legit to say you'd never take a job where this could be an expectation.

    • rogerkirkness 1 day ago
      Calvin is the founder/CTO of Segment, not old but also not some doe-eyed new grad.
      • jonas21 1 day ago
        On one hand, yes. But on the other hand, he's still in his 30s. In most fields, this would be considered young / early career. It kind of reinforces the point that bright, young people can get a lot done in the tech world.
        • m00x 1 day ago
          Calvin is loaded from the Segment exit; he would not work if he wasn't excited about the work. The other founders just went on to do their own things or non-profits.

          I worked there for a few years, and Calvin is definitely more of a grounded engineering guy. He would introduce himself as an engineer and just get talking code. He would spend most of his time with the SRE/core team, trying to tackle the hardest technical problems at the company.

        • paulcole 1 day ago
          > In most fields, this would be considered young / early career

          Is it considered young / early career in this field?

  • bawana 1 day ago
    This is a politically correct farewell letter: obviously something we little people who need jobs have to resort to so the next HR manager doesn't think we are a risk to stock valuation. For a deeper understanding, read Empire of AI by Karen Hao. She defrocks Sam Altman to reveal he is just another human. Like Steve Jobs, he is an adept salesman, appealing to the naïve altruistic sentiments of humans while maintaining his singular focus on scale. Not so different from the archetype of Rockefeller in his pursuit of monopoly through scale by any means, Sam is no different from Google, which even forgot its own rallying cry, 'don't be evil'. Other actors in the story seem to have been infected by the same meme virus, leaving OpenAI for their own empires: Musk left after he and Altman conflicted over who would be CEO (the birth of xAI). Amodei, his sister, and others left to start Anthropic. Sutskever left to start 'Safe something-or-other' (which smacks of the same misdirection Sam used when OpenAI formed as a nonprofit), giving the idea of a nonprofit a mantle of evil now that OpenAI has pivoted to profit.

    The bottom line is that scaling requires money and the only way to get that in the private sector is to lure those with money with the temptation they can multiply their wealth.

    Things could have been different in a world before financial engineers bankrupted the US (the crises of Enron, Salomon Bros, and the 2008 mortgage debacle all added hundreds of billions to US debt as the govt bought the 'too big to fail' Kool-Aid and bailed out Wall Street by indenturing Main Street). Now 1/4 of our budget is simply interest payments on this debt. There is no room for govt spending on a moonshot like AI. This environment in 1960 would have killed Kennedy's inspirational moonshot of going to the moon while it was still an idea in his head, in his post-coital bliss with Marilyn at his side.

    Today our govt needs money just like all the other scrooge-infected players in the tower of debt that capitalism has built.

    Ironically, it seems China has a better chance now. Its release of DeepSeek, with the full set of parameters, gives it a veneer of altruistic benevolence that is slightly more believable than what we see here in the West. China may win simply on thermodynamic grounds. Training and research in DL consume terawatt-hours and hundreds of thousands of chips. Not only are the US models on older architectures (10-100x less energy efficient), but the 'competition' among multiple players in the US multiplies the energy requirements.

    Would govt oversight have been a good thing? Imagine if General Motors, Westinghouse, Bell Labs, and Ford had competed in 1940, each with their own Manhattan Project to develop nuclear weapons. Would the proliferation of nuclear weapons have resulted in human extinction by now?

    Will AI's contribution to global warming be just as toxic as global thermonuclear war?

    These are the questions that come to mind after Hao’s historic summary.

    • bnop 19 hours ago
      100%
  • b0a04gl 3 hours ago
    [dead]
  • armstrong10 11 hours ago
    [dead]
  • soygem 22 hours ago
    [dead]
  • codemac 1 day ago
    [flagged]
  • kraig911 17 hours ago
    [flagged]
  • vouaobrasil 1 day ago
    > The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.

    I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.

    Keeping AI free for everyone is akin to keeping an addictive drug free for everyone so that it can be sold in larger quantities later.

    One can argue that some technology is beneficial. A mosquito net made of plastic immediately improves one's comfort out in the woods. But AI doesn't really offer any immediate TRUE improvement of life, only a bit more convenience in a world already saturated with it. It's past the point of diminishing returns for true life improvement, and I think everyone deep down knows that, but is seduced by the nearly-magical quality of it because we are instinctually driven to seek out advantages and new information.

    • simonw 1 day ago
      "I would argue that there are very few benefits of AI, if any at all."

      OK, if you're going to say things like this I'm going to insist you clarify which subset of "AI" you mean.

      Presumably you're OK with the last few decades of machine learning algorithms for things like spam detection, search relevance etc.

      I'll assume your problem is with the last few years of "generative AI" - a loose term for models that output text and images instead of purely being used for classification.

      Are predictive text keyboards on a phone OK (tiny LLMs)? How about translation engines like Google Translate?

      Vision LLMs to help with wildlife camera trap analysis? How about helping people with visual impairments navigate the world?

      I suspect your problem isn't with "AI", it's with the way specific AI systems are being built and applied. I think we can have much more constructive conversations if we move beyond blanket labeling "AI" as the problem.

      • vouaobrasil 1 day ago
        1. Here is the subset: any learning-based algorithm, trained on a large data set, that modifies or generates content.

        2. I would argue that translation engines have their positives and negatives, but a lot of the effects are negative, because they lead to translators losing their jobs and to a general loss of the magical qualities of language learning.

        3. Predictive text: I think people should not be presented with possible next words, and should think of them on their own, because then they will be more thoughtful in their writing and less automatic. Also, with a higher barrier to writing something, they will probably write less, and what they do write will be of greater significance.

        4. I am against all LLMs, including wildlife camera trap analysis. There is an overabundance of hiding behind research when we really already know the problem fairly well. It's a fringe piece of conservation research anyway.

        5. Visual impairments: one can always appeal to helping the disabled and impaired, but I think the tradeoff is not worth the technological enslavement.

        6. My problem is categorically with AI, not with how it is applied, PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors to make the net effect always negative. It's human nature.

        • pj_mukh 1 day ago
          I wish your parent comment didn't get downvoted, because this is an important conversation point.

          "PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors"

          I think this is vibes based on bad headlines and no actual numbers (and tbf, founders/CEOs talking outta their a**). In my real-life experience, the advantages of specifically generative AI far outweigh the disadvantages, by a really large margin. I say this as someone academically trained on well-modeled dynamical systems (the opposite of machine learning). My team just lost. Badly.

          Case in point: I work with language localization teams that have fully adopted LLM-based translation services (our DeepL.com bills are huge), but we've only hired more translators and are processing more translations faster. It's just... not working out like the headlines told us. The doomsday radiologist predictions [1]: same thing.

          [1]: https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiol...

          • vouaobrasil 1 day ago
            > I think this (esp. the sufficient number of bad actors) is vibes based on bad headlines and no actual numbers. In my real-life experience, the advantages of specifically generative AI far outweigh the disadvantages, by a really large margin.

            We define bad actors in different ways. I also include people like tech workers and CEOs who build systems that take away large numbers of jobs. I already know people whose jobs were eroded by AI.

            In the real world, lots of people hate AI-generated content. The advantages you speak of accrue only to those who are technically minded enough to gain greater material advantages from it, and we don't need the rich getting richer. The world doesn't need a bunch of techies getting richer from AI at the expense of people like translators and graphic designers losing their jobs.

            And while you may have hired more translators, that is only temporary. Other places have fired them, and you will too once the machine becomes good enough. There will be a small bump of positive effects in the short term but the long term will be primarily bad, and it already is for many.

            • pj_mukh 1 day ago
              I think we'll have to wait and see here, because all the layoffs can easily be attributed to leadership making crappy over-hiring decisions during COVID, now being unable to admit to that, and giving hand-wavy "I'm firing people because of AI" answers to drive different headline narratives (see: founders/CEOs talking outta their a**).

              It may also be the narrative fed to actual employees: saying "You're losing your job because of AI" is an easy way to direct anger away from your bad business decisions. If a business is shrinking, it's shrinking; AI was inconsequential. If a business is growing, AI can only help. Whether it's growing or shrinking doesn't depend on AI; it depends on the market and leadership decision-making.

              You and I both know none of this generative AI is good enough unsupervised (realistically, it needs deep human edits). But it's still a massive productivity boost, and such boosts have always been huge economic gains for the middle class.

              Do I wish this tech could also be applied to real middle-class shortages (housing, supply chain, etc.)? Sure. And I think it will come.

        • simonw 1 day ago
          Thanks for this, it's a good answer. I think "generative AI" is the closest term we have to that subset you describe there.
          • vouaobrasil 1 day ago
            Just to add one final point: I included modification as well as generation of content, since I also want to exclude technologies that simply improve upon existing content in some way that is very close to generative but may not be considered so. For example: audio improvements like echo removal and ML noise removal, which I have already shown to interpolate.

            I think AI classification is probably okay, but of course, as with all technologies, we should be cautious about how we use it; it can also be used in facial recognition, which in turn can be used to create a stronger police state.

    • christiangenco 1 day ago
      > I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.

      Personally, my life has significantly improved in meaningful ways with AI. Apart from the obvious work benefits (I'm shipping code ~10x faster than pre-AI), LLMs act as my personal nutritionist, trainer, therapist, research assistant, executive assistant (triaging email, doing SEO-related work, researching purchases, etc.), and a much better/faster way to search for and synthesize information than my old method of using Google.

      The benefits I've gotten are much more than conveniences and the only argument I can find that anyone else is worse off because of these benefits is that I don't hire junior developers anymore (at max I was working with 3 for a contracting job). At the same time, though, all of them are also using LLMs in similar ways for similar benefits (and working on their own projects) so I'd argue they're net much better off.

      • vouaobrasil 1 day ago
        A few programmers being better off does not make an entire society better off. In fact, I'd argue that you shipping code 10x faster just means that, in the long run, consumerism is accelerated at a similar rate, because that is what most code is eventually used for.
        • simonw 1 day ago
          I spent much of my career working on open source software that helped other engineers ship code 10x faster. Should I feel bad about the impact my work there had on accelerating consumerism?
          • vouaobrasil 1 day ago
            I don't know if you should feel bad or not, but even I know that I have a role to play in consumerism that I wish I didn't.

            That doesn't necessitate feeling bad, because the urge to feel good or bad about something is a side effect of the sort of religious "good and evil" mentality that probably came about due to Christianity or something. But *regardless*, one should at least understand that, because our world has reached a critical mass of complexity, even the things we do that we think are benign or helpful can have negative side effects.

            I never claim that we should feel bad about that, but we should understand it and attempt to mitigate it nonetheless. And, where no mitigation is possible, we should also advocate for a better societal structure that will eventually, in years or decades, result in fewer deleterious side effects.

            • simonw 1 day ago
              The TV show The Good Place actually dug into this quite a bit. One of the key themes explored in the show was the idea that there is no ethical consumption under capitalism, because eventually the things you consume can be tied back to some grossly unethical situation somewhere in the world.
              • jfyi 1 day ago
                That theme was primarily explored through the idea that it's impossible to live a truly ethical life in the modern world due to unknowable externalities.

                I don't think the takeaway was meant to really be about capitalism but more generally the complexity of the system. That's just me though.

    • ookblah 1 day ago
      I don't really understand this thought process. All technology has its advantages and drawbacks, and we are currently going through the hype and growing-pains process.

      You could just as well argue that the internet, phones, TV, and cars all adhere to the exact same prisoner's dilemma you talk about. You could just as well use AI to rubber-duck or ease your mental load rather than treat it like some rat race to efficiency.

      • vouaobrasil 1 day ago
        True, but it is meaningful to ask whether the net quantity (advantages minus drawbacks) decreases over time, which I believe it does.

        And we should indeed apply that logic to other inventions: some are more worth using than others, whereas in today's society we just use all of them due to the mechanics of the prisoner's dilemma. The Amish, on the other hand, deliberate about whether to use certain technologies, which is a far better approach.

    • 8note 1 day ago
      Hiding from mosquitoes under your net is a negative. The point of going out to the woods is to be bitten by mosquitoes, and you've ruined it.

      It's impossible to get any benefit from the woods if you've brought a bug net, and you should stay out rather than ruin the woods for everyone.

      • vouaobrasil 1 day ago
        Rather a myopic and crude take, in my opinion. If I bring a net, it doesn't change the woods for others. If I introduce AI into society, it does change society for others, even those who don't want to use the tool. You really have no conception of subtlety or logic.

        If someone says driving at 200mph is unsafe, then your argument is like saying "driving at any speed is unsafe". Fact is, you need to consider the magnitude and speed of the technology's power and movement, which you seem incapable of doing.

    • i000 1 day ago
      [flagged]
      • vouaobrasil 1 day ago
        Nobody decides, but that doesn't mean we shouldn't discuss and figure out if there is an optimal point.

        Edit: And I think you might dislike automobiles if you were one of the people living right next to a tyre factory in Brazil, which outputs an extremely disgusting rubber smell on an almost daily basis. Especially if you bought your house before the factory was built, and you don't drive much.

        But you probably live in North America and don't give a darn about that.

        • i000 1 day ago
          I think this is pretty much how many Amish communities function. As for me, I prefer making decisions on how to use technology in my own life on my own.
          • vouaobrasil 1 day ago
            Of course that makes sense. But for instance, with SOME technologies, I would prefer not to use them but still sort of have to because some of them become REQUIRED. For example: phones. I would prefer not to have a telephone at all as I hate them with a passion, but I still want a bank account. But that's difficult because my bank requires 2FA and it's very hard to get out of it.

            So, while I agree in principle that it's nice to make decisions on one's own, I think it would also be nice to have the choice to avoid certain technologies that become difficult to avoid due to their entrenchment.

  • smeeger 1 day ago
    > everyone I met there is actually trying to do the right thing

    Making human beings obsolete is not the right thing. Nobody at OpenAI is doing the right thing.

    In another part of the post, he says safety teams work primarily on making sure the models don't say anything racist, and on limiting helpful tips for building weapons of terror… and that AGI safety is basically not a focus. I don't think this company should be allowed to exist. They don't have ANY right to threaten the existence and wellbeing of me and my kids!

  • solarized 22 hours ago
    > As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google. Each of these organizations are going to take a different path to get there based upon their DNA (consumer vs business vs rock-solid-infra + data).

    Grok be like. okey. :))