It is fairly rare to see an ex-employee put a positive spin on their work experience.
I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.
This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
I would never post any criticism of an employer in public. It can only harm my own career (just as being positive can only help it).
Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!
Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try and put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable for employers, so I guess it is working.
Wow, ditto! I thought I was the only one who took an extended leave to watch their baby grow up. Totally worth it, and it was a wonderful experience being able to focus 100% on her.
Way to go to keep the boring chores of the first months with the partner and join the fun when the little one starts to be more fun after a year. With all that cash, I'm sure they could buy a bunch of help for the partner too.
I don't know, when I became a parent I was in for the full ride, not to have someone else raising her. Yes, raising includes changing diapers and all that.
You make it sound like your choice is somehow the righteous one. I'm not convinced. What's wrong with a hiring help, as long as it's well selected? And anyway, usually the help would take care of various errands to free up mom so she can focus on her baby. But maybe they have happily involved grandparents. Maybe he was working part-time. Or maybe there's some other factor we're completely missing on right now.
So you sincerely think it’s ok that everybody takes care of the kid but the father because he’s rich and can afford multiple nannies? There’s not much context to miss when TFA has this:
> The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.
Does a household necessarily need multiple nannies to raise a baby? Grandparents might be willing to help and if there's some house help as well, no nannies might be needed at all, as long as the wife is happy with the arrangement, which I don't find impossible to entertain. Yeah, wealth allows for more freedom of choice, that's always been the case, but this type of arrangement is not unheard of across social classes.
The people who will disagree with this statement would say, full throated, that what really mattered was shipping on time.
Couldn't be me. I do my work, then clock the fuck off, and I don't even have kids. I wasn't put upon this earth to write code or solve bugs, I just do that for the cash.
Yeah, or, let the partner have the easy period before they are mobile, and when they sleep half the day, and then join the fun when they can walk off into the craft supplies/pantry where sugar/flour/etc. are stored/the workshop with the power tools etc., and when they drop the naptime and instead start waking at 5am and asking you to play Roblox with them.
There is some parenting, then there is good parenting. Most people don't have this option due to finances, but those that do and still avoid it to pick up just easy and nice parts - I don't have much sympathy nor respect for them.
Then later they even have the balls to complain how kids these days are unruly, never acknowledging massive gaps in their own care.
Plus it certainly helps the kid with bonding, emotional stability and keeps the parent more in touch emotionally with their own kid(s).
You do know that early bonding experiences of newborns are crucial for their lifelong development? It reads like satire, or, if serious, plain child maltreatment.
It's unlikely he sees or even perceives what he's doing as a grind, but rather something akin to an exciting and engrossing chase or puzzle. If my mental model of these kind of Silicon Valley types is correct, neither is he likely to be in it for the money, at least not at the narrative self level. He most likely was "feelin' the AGI", in Ilya Sutskever's immortal words. I.e. feeling like this might be a once-in-a-million-years opportunity to birth a new species, if not a deity even.
lots of wealthy families have dysfunctional internal emotional patterns.. A quick stat is that there is more alcoholism among the 1% wealthiest than the general population across the USA
Some books do a good job of documenting the power struggles that happen behind closed doors, big egos backed by millions clashing over ideas and control.
Not gonna lie, the entire article reads more like a puff piece than an honest reflection. Feels like something went down on Slack, some doors got slammed, and this article is just trying to keep them unlocked. Because no matter how rich you are in the Valley, if you're not on good terms with Sam, a lot of doors will close. He's the prodigy son of the Valley, adopted by Bill Gates and Peter Thiel, and secretly admired by Elon Musk. With Paul Graham's help, he spent 10 years building an army of followers by mentoring them and giving them money. Most of them are now millionaires with influence. And now, even the most powerful people in tech and politics need him. Jensen Huang needs his models to sell servers. Trump needs his expertise to upgrade defence systems. I saw him shaking hands with an Arab sheikh the other day. The kind of handshake that says: with your money and my ambition, we can rule the world.
That's even more of a reason not to bad mouth other billionaires/billion dollar companies. Billionaires and billion dollar companies work together all the time. It's not a massive pool. There is a reason beef between companies and top level execs and billionaires is all rumors and tea-talk until a lawsuit drops out of no where.
You think every billionaire is gonna be unhinged like Musk calling the president a pedo on twitter?
Hebephile or ephebophile rather than pedo to be precise. And we all saw how great friend he was with epstein for decades, frequent visitor to his parties, dancing together and so on. Not really a shocking statement, whether true or not.
I am not trying to advance wild hypotheticals, but something about his behavior does not quite feel right to me. Someone who has enough money for multiple lifetimes, working like he's possessed, to launch a product minimally different than those at dozens of other companies, and leaving his wife with all the childcare, then leaving after 14 months and insisting he was not burnt out but without a clear next step, not even, "I want to enjoy raising my child".
His experience at OpenAI feels overly positive and saccharine, with a few shockingly naive comments that others have noted. I think there is obvious incentive. One reason for this is, he may be in burnout, but does not want to admit it. Another is, he is looking to the future: to keep options open for funding and connections if (when) he chooses to found again. He might be lonely and just want others in his life. Or to feel like he's working on something that "matters" in some way that his other company didn't.
I don't know at all what he's actually thinking. But the idea that he is resistant to incentives just because he has had a successful exit seems untrue. I know people who are as rich as he is, and they are not much different than me.
Calvin just worked like this when I was at Segment. He picked what he worked on and worked really intensely at it. People most often burn out because of the lack of agency, not hours worked.
Also, keep in mind that people aren't the same. What seems hard to you might be easy to others, vice versa.
first time in 93 because of burnout from three peat, and allegedly a gambling problem. second because of the lockout and krause pushing phil out. third because too old
This reflection seems very unlikely to be authentic because it is full of superlatives and not a single bad thing (or at least not great) is mentioned. Real organizations made of real humans simply are not like this.
The fact that several commenters know the author personally goes some way to explain why the entire comment section seems to have missed the utterly unbalanced nature of the article.
I've always heard horror stories about Amazon, but when I speak to most people at, or from Amazon, they have great things to say. Some people are just optimists, too.
People come out to defend their bosses a lot on this site, convincing themselves they know the powerful people best, that they’re “friends”. How can someone be so confident that a founder is authentic, when a large part of their job is to make you believe so (regardless of whether they are), and the employee’s own self image push them to believe it too?
Every, and I mean every, technology company scours social media. Amazon has a team that monitors social media posts to make sure employees, their spouses, their friends don’t leak info, for example.
> There's no Bond villain at the helm. It's good people rationalizing things.
I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.
Interesting. A year ago I joined one of the larger online sportsbook/casinos. In terms of talent, employees are all over the map (both good and bad). But I have yet to meet a villain. Everyone here is doing the best they can.
That may very well be the case. But I think this is a distinct category of evil; the second one, in which you'll find most of the cigarette and gambling businesses, is that of evil caused by indifference.
"Yes, I agree there are some downsides to our product and there are some people suffering because of that - but no one is forcing them to buy from us, they're people with agency and free will, they can act as adults and choose not to buy. Now what is this talk about feedback loops and systemic effects? It's confusing, go away."
This category is where you'll also find most of the advertising business.
The self-righteous may be the source of the greatest evil by magnitude, but day-to-day, the indifferents make it up in volume.
It's not indifference, it's much more comically evil. Like, they're using software to identify gambling addicts on fixed incomes, to figure out how big retirees' social security checks are, and to ensure they lose the entire thing at the casino each week. They bonus out their marketing team for doing this successfully. They're using software to make sure that when a casino host's patron runs out of money and kills themselves, the casino host is not penalized but rewarded for a job well done.
At 8am every morning, the executives walk across the casino floor on their way to the board room, past the depressed people who have been there gambling by themselves the entire night, seeing their faces, then they go into a boardroom to strategize ways to get them those people to gamble even harder. They brag about it. It's absolute pure villainy.
I wouldn't know if this is a fair characterization of other companies, but it certainly isn't anything like what I observe here. If you can't name names, I'm going to guess you just made this up.
Some people like to smoke. I find it disgusting myself, but as long as people want the experience I see no reason why someone else shouldn't be allowed to sell it to them. See also alcohol, drugs, porn, motorcycles, experimental aircraft, whatever.
We can have all sorts of interesting discussions about how to balance human independence with shared social costs, but it's not inherently "evil" to give consenting adults products and experiences they desire.
IMO, much more evil is caused by busybodies trying to tell other people what's good for them. See: The Drug War.
I disagree. The health burden from smoking is approximately the same as death toll as the sum of all in the Holocaust, but smoking does it every nine months. And 1.3 million/year of those are non-smokers who are dying because they are exposed to second-hand smoke: https://ourworldindata.org/smoking
Even when the self-righteous are at their most dangerous, they have to be self-righteous and in power, e.g.:
There are jobs in which one may find oneself where doing them poorly is better for the world than doing them well.
I think you and your colleagues should sit back and take it easy, maybe have a few beers every lunchtime, install some video games on the company PCs, anything you can get away with. Don't get fired (because then you'll be replaced by keen new hires), just do the minimum acceptable and feel good about that karma you're accumulating as a brake on evil.
There is a reason why there was a cult-like behaviour on X amongst the employees in supporting to bringing back Sam as CEO when he was kicked out by the OpenAI board of directors at the time.
"OpenAI is nothing without it's people"
All of "AGI" (which actually was the lamborghinis, penthouses, villas and mansions for the employees) was all on the line and on hold if that equity went to 0 or would be denied selling their equity if they openly criticized OpenAI after they left.
Yes, and the reason for that is that employees at OpenAI believed (reasonably) that they were cruising for Google-scale windfall payouts from their equity over a relatively short time horizon, and that Altman and Brockman leaving OpenAI and landing at a well-funded competitor, coupled with OpenAI corporate management that publicly opposed commercialization of their technology, would torpedo those payouts.
I'd have sounded cult-like too under those conditions (but I also don't believe AGI is a thing, so would not have a countervailing cult belief system to weigh against that behavior).
I also believe that AGI is not a thing, but for different reasons. I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not. People also don't seem interested in justifying why humans would be GI but other animals with 99% of the same DNA aren't.
My main reason for thinking general intelligence is not a thing is similar to how Turing completeness is not a thing. You can conceptualize a Turing machine, but you can't actually build one for real. I think actual general intelligence would require an infinite brain.
> I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not.
That's actually a great point which I'd never heard before. I agree that it's very likely that us humans do not really have GI, but rather only the intelligence that evolved stochastically to better favour our existence and reproduction, with all its positive and negative spandrels[0]. We can call that human intelligence (HI).
However, even if our "general" intelligence is a mirage, surely what most people imagine when they talk about 'AGI' is actually AHI, as in an artificial intelligence that has the same characteristics as human intelligence that in their own hubris they believe is general. Or are you making a harder argument, that human intelligence may not actually have the ability to create AHI?
If we were to believe the embodiment theory of intelligence (it’s by far not the only one out there, but very influential and convincing), this means that building an AGI is an equivalent problem to building an artificial human. Not a puppet, not a mock, not “sorta human”, but real, fully embodied human, down to gut bacterial biome, because according to the embodiment theory, this affects intelligence too.
In this formulation, it’s pretty much as impossible as time travel, really.
Sure, if we redefine "AGI" to mean "literally cloning a human biologically", then AGI suddenly is a very different problem (mainly one of ethics, since creating human clones, educating, brainwashing, and forcing them to respond to chat messages ala chatGPT has a couple ethical issues along the way).
I don't see how claiming that intelligence is multi-faceted makes AGI (the A is 'artificial' remember) impossible.
Even if _human_ intelligence requires eating yogurt for your gut biome, that doesn't preclude an artificial copy that's good enough.
Like, a dog is very intelligent, a dog can fetch and shake hands because of years of breeding, training, and maybe from having a certain gut biome. Boston Dynamics did not have to understand a single cell of the dog's stomach lining in order to make dog-robots perfectly capable of fetching and shaking hands.
I get that you're saying "yes, we've fully mapped the neurons of a fruit fly and can accurately simulate and predict how a fruit fly's brain's neurons will activate, and can create statistical analysis of fruit-fly behavior that lets us accurately predict their action for much cheaper even without the brain scan, but human brains are unique in a way where it is impossible to make any sort of simulation or prediction or facsimile that is 'good enough' because you also need to first take some bacteria from one of peter thiel's blood boys and shove it in the computer, and if we don't then we can't even begin to make a facsimile of intelligence". I just don't buy it.
“AGI” isn’t a thing and never will be. It fails even really basic scrutiny. The objective function of a human being is to keep its biological body alive and reproduce. There is no such similar objective on which a ML algorithm can be trained. It’s frankly a stupid idea propagated by people with no meaningful connection to the field and no idea what the fuck they’re talking about.
We will look back on this and the early OpenAI employees (who sold) will speak out in documentaries and movies in a decades time and they will admit that "AGI" was a period of easy dumb money.
OpenAI never enforced this, removed it, and admitted it was a big mistake. I work at OpenAI and I'm disappointed it happened but am glad they fixed it. It's no longer hanging over anyone's head, so it's probably inaccurate to suggest that Calvin's post is positive because he's trying to protect his equity from being taken. (though of course you could argue that everyone is biased to be positive about companies they own equity in, generally)
The tender offer limitations still are, last I heard.
Sure, maybe OA can no longer cancel your vested equity for $0... but how valuable is (non-dividend-paying) equity you can't sell? (How do you even borrow against it, say?)
(It would be a pretty fake solution if equity cancellation was halted, but equity could still be frozen. Cancelled and frozen are de facto identical until the first dividend payment, which could take decades.)
The Silenced No More Act" (SB 331), effective January 1, 2022, in California, where OpenAI is based, limits non-disparagement clauses and retribution by employers, likely making that illegal in California, but I am not a lawyer.
Here's what I think - while Altman was busy trying to convince the public the AGI was coming in the next two weeks, with vague tales that were equaly ominous and utopistic, he (and his fellow leaders) have been extremely busy at trying hard to turn OpenAI into a product company with some killer offerings, and from the article, it seems they were rather good and successful in that.
Considering the high stakes, money, and undoubtedly the ego involved, the writer might have acquired a few bruises along the way, or might have lost out on some political in fights (remember how they mentioned they built multiple Codex prototypes, it must've sucked to see some other people's version chosen instead of your own).
Another possible explanation is that the writer's just had enough - enough money to last a lifetime, just started a family, made his mark on the world, and was no longer compelled (or have been able to) keep up with methed-up fresh college grads.
> remember how they mentioned they built multiple Codex prototypes, it must've sucked to see some other people's version chosen instead of your own
Well it depends on people’s mindset. It’s like doing a hackathon and not winning.
Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.
…but of course not everybody likes to go to hackathons
> OpenAI is perhaps the most frighteningly ambitious org I've ever seen.
That kind of ambition feels like the result of Bill Gates pushing Altman to the limit and Altman rising to the challenge. The famous "Gates demo" during the GPT‑2 days comes to mind.
Having said that, the entire article reads more like a puff piece than an honest reflection.
It might be one of the cover stories for a Bond villain, but they have lots of mundane cover stories. Which isn't to say you're wrong, I've learned not to trust my gut in the category (rich business leaders) to which he belongs.
I'd be more worried about the guy who tweeted “If this works, I’m treating myself to a volcano lair. It’s time.” and more recently wore a custom T-shirt that implies he's like Vito Corleone.
Or you could realize what those guys all have in common and be worried about the systems that enable them because the problem isn't a guy but a system enabling those guys to become everyone's problem.
I don't mind "Vito Corleone" joking about a volcano lair. I mind him interfering in my country's elections and politics. I shouldn't have to worry about the antics of a guy building rockets that explode and cars that can chop off your fingers because I live in a country that can protect me from those things becoming my problem, but because we have the same underlying systems I do have to worry about him because his political power is easily transferrable to any other country including mine.
This would still be true if it were a different guy. Heck, Thiel is landing contracts with his surveillance tech in my country despite the foreign politics of the US making it an obvious national and economic security risk and don't get me started on Bezos - there's plenty of "guys" already.
Sure, but "the systems" were built by such people and are mere evolutions of the previous "I have a bigger stick" of power politics from prior to the industrial revolution.
Not that you're wrong about the systems, just that if it was as easy as changing these systems because we can tell they're bad and allow corruption, the Enlightenment wouldn't have managed to mess up with both Smith and Marx.
There is lots of rationalizing going on in his article.
> I returned early from my paternity leave to help participate in the Codex launch.
10 years from now, the significance of having participated in that launch will be ridiculously small (unless you tell yourself that it was a pivotal moment of your life, even if it objectively wasn't) versus those first weeks with your newborn will never come back. Kudos to your partner though.
The very fact that he did this exemplifies everything that is wrong about the tech industry and our current society. He's praising himself for this instead of showing remorse for his failure as a parent.
Odd take. Openai gives 5 months of paternity leave and author is independently wealthy. What difference does it make between spending more time with a 4 month old vs a 4 year old? Or is your prescription that people should just be retiring once they have children?
> It is fairly rare to see an ex-employee put a positive spin on their work experience.
The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.
I was at a company that turned into the most toxic place I had ever worked due to a CEO who decided to randomly get involved with projects, yell at people, and even fire some people on the spot.
Yet a lot of people wrote glowing stories about their time at the company on blogs or LinkedIn because it was beneficial for their future job search.
> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
For the posts that make HN I rarely see it that way. The recent trend is for passionate employees who really wanted to make a company work to lament how sad it was that the company or department was failing.
> The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.
Yeah I had to re-read the sentence.
The positive "Farewell" post is indeed the norm. Especially so from well known, top level people in a company.
Allow me to propose a different rationalization: "yes I know X might damage some people/society, but it was not me who decided, and I get lots of money to do it, which someone else would do if not me."
I don't think people who work on products that spy on people, create addiction or worse are as naïve as you portrayed them.
> everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions!
The operative word is “trying”. You can “try” to do the right thing but find yourself restricted by various constraints. If an employee actually did the right thing (e.g. publish the weights of all their models, or shed light on how they were trained and on what), they get fired. If the CEO or similarly high-ranking exec actually did the right thing, the company would lose out on profits. So, rationalization is all they can do. “I'm trying to do the right thing, but.” “People don't see the big picture because they're not CEOs and don't understand the constraints.”
I’m not saying this about OpenAI, because I just don’t know. But Bond villains exist.
Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.
> It is fairly rare to see an ex-employee put a positive spin on their work experience.
Sure, but this bit really makes me wonder if I'd like to see what the writer is prepared to do to other people to get to his payday:
"Nabeel Quereshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI"
I agree with your points here, but I feel the need to address the final bit. This is not aimed personally at you, but at the pattern you described - specifically, at how it's all too often abused:
> Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
Those are the easy cases, and correspondingly, you don't see much of those - or at least few are paying attention to companies talking like that. This is distinct from saying "X is going to directly benefit the society, and we're merely charging for it as fair compensation of our efforts, much like a baker charges you for the bread" or variants of it.
This is much closer to what most tech companies try to argue, and the distinction seems to escape a lot of otherwise seemingly sharp people. In threads like this, I surprisingly often end up defending tech companies against such strawmen - because come on, if we want positive change, then making up a simpler but baseless problem, calling it out, and declaring victory, isn't helping to improve anything (but it sure does drive engagement on-line, making advertisers happy; a big part of why press does this too on a routine basis).
And yes, this applies to this specific case of OpenAI as well. They're not claiming "LLMs are going to indirectly benefit the society because we're going to get rich off them, and then use that money to fund lots of nice things". They're just saying, "here, look at ChatGPT, we believe you'll find it useful, and we want to keep doing R&D in this direction, because we think it'll directly benefit society". They may be wrong about it, or they may even knowingly lie about those benefits - but this is not trickle-down economics v2.0, SaaS edition.
Well, as a reminder OpenAI has a non disparagement clause in their contracts, so the only thing you'll ever see from former employees is positive feedback.
> That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things
I mean, that's a leap. There could be a bond villain that sets up incentives such that people who rationalize the way they want is who gets promoted / their voice amplified. Just because individual workers generally seem like they're trying to do the best thing doesn't mean the organization is set up specifically and intentionally to make certain kinds of "shady" decisions.
> It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
This is a great insight. But if we think a bit deeper about why that happens, I land on because there is nobody forcing anyone to do the right thing. Our governments and laws are geared more towards preventing people from doing the wrong thing, which of course can only be identified once someone has done the wrong thing and we can see the consequences and prove that it was indeed the wrong thing. Sometimes we fail to even do that.
> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
- Progress is iterative and driven by a seemingly bottom up, meritocratic approach. Not a top down master plan. Essentially, good ideas can come from anywhere and leaders are promoted based on execution and quality of ideas, not political skill.
- People seem empowered to build things without asking permission there, which seems like it leads to multiple parallel projects with the promising ones gaining resources.
- People there have good intentions. Despite public criticism, they are genuinely trying to do the right thing and navigate the immense responsibility they hold.
- Product is deeply influenced by public sentiment, or more bluntly, the company "runs on twitter vibes."
- The sheer cost of GPUs changes everything. It is the single factor shaping financial and engineering priorities. The expense for computing power is so immense that it makes almost every other infrastructure cost a "rounding error."
- I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA), with each organisation's unique culture shaping its approach to AGI.
> I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA)
Wouldn't want to forget Meta which also has consumer product DNA. They literally championed the act of making the consumer the product.
"Hey, Twitter vibes are a metric, so make sure to mention the company on Twitter if you want to be heard."
Twitter is a one-way communication tool. I doubt they're using it to create a feedback loop with users, maybe just to analyse their sentiment after a release?
The entire article reads more like a puff piece than an honest reflection. Those of us who live outside the US are more sceptical, especially after everything revealed about OpenAI in the book Empire of AI.
Engineers thinking they're building god is such a good marketing strategy. I can't overstate it. It's even difficult to be rational about it. I don't actually believe it's true, I think it's pure hype and LLMs won't even approximate AGI. But this idea is sort of half-immune to criticism or skepticism: you can always respond with "but what if it's true?". The stakes are so high that the potentially infinite payoff snowballs over any probabilities. 0.00001% multiplied by infinite is an infinite EV so you have to treat it like that. Best marketing, it writes itself.
Similar to Pascal's wager, which pretty much amounts to "yeah, God is probably not real, _but what if it is_? The utility of getting into heaven is infinite (and hell is infinitely negative), so any non-zero probability that God is real should make you be religious, just in case."
This is explicitly not the conclusion Pascal drew with the wager, as described in the next section of the Wikipedia article: "Pascal's intent was not to provide an argument to convince atheists to believe, but (a) to show the fallacy of attempting to use logical reasoning to prove or disprove God..."
I know you're not being serious but building AGI as in something that thinks like a human, as proven possible by millions of humans wandering all over the place is very different from "building god".
Except that humans cannot read millions of books (if not all books ever published) and keep track of massive amounts of information. AGI presuposes some kind of super human capabilities that no one human has. Whether that's ever accomplished remains to be seen, I personally am a bit skeptical that it will hapen in our lifetime but think it's possible in the future.
Not sure about that one. I do agree with the AI bros that, _if_ we build AGI, ASI looks inevitable shortly after, at least a "soft ASI". Because something with the agency of a human but all the knowledge of the world at its fingertips, the ability to replicate itself, think at order of magnitudes faster and paralelly on many things at the same time and modify itself... really looks like it won't stay comparable to a baseline human for long.
There was nothing hypothesized about next-token prediction and emergent properties (they didn't know scale would allow it to generalize for sure). What if it's true is part of LLMs story, there is a mystical element here.
Someone else can confirm, but from my understanding, no they did not know sentiment analysis, reasoning, few shot learning, chain of thought, etc would emerge at scale. Sentiment analysis was one of the first things they noticed a scaled up model could generalize. Remember, all they were trying to do was get better at next-token prediction, there was no concrete idea to achieve "instruction following", for example. We can never truly say going up another order of magnitude on the number of params won't achieve something (it could, for reasons unknown, just like before).
It is somewhat parallel to the story of Columbus looking for India but ending up in America.
Didn't it just get better at next token prediction? I don't think anything emerged in the model itself, what was surprising is how really good next token prediction itself is at predicting all kind of other things no?
The Schaeffer et al. "Mirage" paper showed that many claimed emergent abilities disappear when you use different metrics, what looked like sudden capability jumps were often artifacts of using harsh/discontinuous measurements rather than smooth ones.
But I'd go further: even abilities that do appear "emergent" often aren't that mysterious when you consider the training data. Take instruction following - it seems magical that models can suddenly follow instructions they weren't explicitly trained for, but modern LLMs are trained on massive instruction-following datasets (RLHF, constitutional AI, etc.). The model is literally predicting what it was trained on. Same with chain-of-thought reasoning - these models have seen millions of examples of step-by-step reasoning in their training data.
The real question isn't whether these abilities are "emergent" but whether we're measuring the right things and being honest about what our training data contains. A lot of seemingly surprising capabilities become much less surprising when you audit what was actually in the training corpus.
> The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.
There's so much compression / time-dilation in the industry: large projects are pushed out and released in weeks; careers are made in months.
Worried about how sustainable this is for its people, given the risk of burnout.
If anyone tried to demand that I work that way, I’d say absolutely not.
But when I sink my teeth into something interesting and important (to me) for a few weeks’ or months’ nonstop sprint, I’d say no to anyone trying to rein me in, too!
Speaking only for myself, I can recognize those kinds of projects as they first start to make my mind twitch. I know ahead of time that I’ll have no gas left the tank by the end, and I plan accordingly.
Luckily I’ve found a community who relate to the world and each other that way too. Often those projects aren’t materially rewarding, but the few that are (combined with very modest material needs) sustain the others.
The latter. I mean, I feel like a disproportionate number of folks who hang around here have that kind of disposition.
That just turns out to be the kind of person who likes to be around me, and I around them. It’s something I wish I had been more deliberate about cultivating earlier in my life, but not the sort of thing I regret.
In my case that’s a lot of artists/writers/hackers, a fair number of clergy, and people working in service to others. People quietly doing cool stuff in boring or difficult places… people whose all-out sprints result in ambiguity or failure at least as often as they do success. Very few rich people, very few who seek recognition.
The flip side is that neither I nor my social circles are all that good at consistency—but we all kind of expect and tolerate that about each other. And there’s lots of “normal” stuff I’m not part of, which I probably could have been if I had tried. I don’t know what that means to the business-minded people around here, but I imagine it includes things like corporate and nonprofit boards, attending sports events in stadia, whatever golf people do, retail politics, Society Clubs For Respectable People, “Summering,” owning rich people stuff like a house or a car—which is fine with me!
I don't need recovery time afterward (apart from sleep), but when I'm surrounded by people who do I want some equivalent compensation, not because I feel I need it, but because I feel they are slackers (not saying they are objectively slackers, just saying that's how it feels to me). many compromises need to be made when sprinting all out, and in the aftermath what is restorative to me is cleaning up the technical debt while it's fresh in my mind and I can't understand that other people don't want to do the same thing.
This guy who is already independently wealthy chose working 16-17h 7 days a week instead of raising his newborn child and thanks his partner for “childcare duties”. Pretty much tells you everything you need to know.
Yeah as someone who has a young child this entire post made me feel like I was taking crazy pills. Working this much with a newborn is toxic behavior and if a company demands it then it is toxic culture. And writing about it as anything but that feels like some combination of Stockholm syndrome, being a workaholic, and marketing spin.
Being passionate about something and giving yourself to a project can be amazing, but you need to have the bandwidth to do it without the people you care about suffering because of that choice.
>>Working this much with a newborn is toxic behavior and if a company demands it then it is toxic culture. And writing about it as anything but that feels like some combination of Stockholm syndrome, being a workaholic, and marketing spin.
It's not sustainable, at all, but if it's happening just a couple times throughout your career, it's doable; I know people who went through that process, at that company, and came out of it energized.
I couldn't imagine asking my partner to pick up that kind of childcare slack. Props to OP's wife for doing so, and I'm glad she got the callout at the end, but god damn.
I think Altman said in Lex F. podcast that he works 8 hours, 4 first one being the most productive ones and he doesn't believe CEO claiming they work 16 hours a day. Weird contrast to what described in the article. This confirms my theory that there are two types of people in startups: founders and everybody else, the former are there to potentially make a lot of money, and the later are there to learn and leave.
It's worst than that. Lots of power struggles and god-like egos. Altman called one of the employees "Einstein" on Twitter, some think they were chosen to transcend humanity, others believe they're at war with China, some want to save the world, others see it burn, and some just want their names up there with Gates and Jobs.
This is what ex-employees said in Empire of AI, and it's the reason Amodei and Kaplan left OpenAI to start Anthropic.
He references childcare and paternity leave in the post and he was a co-founder in a $3B acquisition. To me it seems it is a time-of-life/priorities decision not a straight up burnout decision.
Working a job like that would literally ruin my life. There's no way I could have time to be a good husband and father under those conditions, some things should not be sacrificed.
Many people are bad parents. Many are bad at their jobs. Many at bad at both. At least this guy is good at his job, and can provide very well for his family.
It is all relative. A workaholic seems pretty nice when compared to growing up with actual objectively bad parents, workaholics plus: addicts, perpetually drunk, gamblers, in jail, no shows for everything you put time into, competing with you when obtaining basic skills, abusing you for being a kid, etc.
There are plenty worse than that. The storied dramatic fiction parent missing out on a kid's life is much better than what a lot of children have.
Yet, all kids grow up, and the greatest factor determining their overall well-being through life is socioeconomic status, not how many hours a father was present.
Im very interested in that topic and haven’t made up my mind about what really counts in parenting.
You have sources for the claim about well-being (asking explicitly about mental well-being and not just material well-being) being more influenced by socioeconomic status and not so much by parental absence?
About the guy: I think if it’s just a one time thing it’s ok but the way he presents himself gives reason for doubt
A parent should provide their kids with opportunities to try new things. Sometimes this might require gently making a kid do something at least a few times until it's clear it's not something they are good at or interested in. Also deciding when to try something is important - kids might need to try it at different ages. And of course convincing and reassuring a kid might be necessary to try something they are afraid to do. Until the age of 12 or so, it's important to make it fun, at least initially.
It's debatable whether a parent always needs to "lead by example": for example, I've never played hockey, but I introduced my son to it, and he played for a while (until injuries made us reconsider and he stopped). For mental well-being, make sure to not display your worst emotions in front of your kids - they will definitely notice, and will probably carry it for the rest of their lives.
They were showered with assets for being a lucky individual in a capital driven society, time is interchangeable for wealth, as evidenced throughout history.
This guy is young. He can experience all that again, if it is that much of a failure, and he really wants to.
Sure, there are ethical issues here, but really, they can be offset by restitution, lets be honest.
My hot take is I don’t think burn out has much to do with raw hours spent working. I feel it has a lot more to do with sense of momentum and autonomy. You can work extremely hard 100 hour weeks six months in a row, in the right team and still feel highly energized at the end of it. But if it feels like wading through a swamp, you will burn out very quickly, even if it’s just 50 hours a week. I also find ownership has a lot to do with sense of burnout
And if the work you're doing feels meaningful and you're properly compensated. Ask people to work really hard to fill out their 360 reviews and they should rightly laugh at you.
At some level of raw hours, your health and personal relationships outside work both begin to wither, because there are only 24 hours in a day. That doesn’t always cause burnout, but it provides high contrast - what you are sacrificing.
Exactly this - if not at all about hours spent (at least that’s not a good metric; working less will benefit a burned out person; but the hours were not the root cause). The problem is lack of autonomy, lack of control over things you care about deeply. If those go out the window, the fire burns out quickly.
Imho when this happens it’s usually because a company becomes too big, and the people in control lack subject matter expertise, have lost contact with the people that drive the company, and instead are guided by KPIs and the rules they enforced grasping for that feeling of being in control.
i hope thats not a hot take because it's 100% correct.
people conflate the terms "burnout" and "overwork" because they seem semantically similar, but they are very different.
you can fix overwork with a vacation. burnout is a deeper existential wound.
my worst bout of burnout actually came in a cushy job where i was consistently underworked but felt no autonomy or sense of purpose for why we were doing the things we were doing.
2024 my wife and I did a startup together. We worked almost every hour we were awake, 16-18 hours a day, 7 days a week. We ate, we went for an hour's walk a day, the rest of the time I was programming. For 9 months. Never worked so hard in my life before. And, not a lick of burnout during that time, not a moment of it, where I've been burned out by 6 hour work days at other organizations. If you're energized by something, I think that protects you from burnout.
for the amount of money they are giving that is relatively easy, normal people are paid way less in harder jobs, for example, working in an Amazon Warehouse or doing door-to-door sales, etc.
I don't really have an opinion on working that much, but working that much and having to go into the office to spend those long hours sounds like torture.
Those that love the work they do don't burn out, because every moment working on their projects tends to be joyful. I personally hate working with people who hate the work they do, and I look forward to them being burned out
Sure, but this schedule is like, maybe 5 hours of sleep per night. Other than an extreme minority of people, there’s no way you can be operating on that for long and doing your best work. A good 8 hours per night will make most people a better engineer and a better person to be around.
"You don't really love what you do unless you're willing to do it 17 hours a day every day" is an interesting take.
You can love what you do but if you do more of it than is sustainable because of external pressures then you will burn out. Enjoying your work is not a vaccine against burnout. I'd actually argue that people who love what they do are more likely to have trouble finding that balance. The person who hates what they do usually can't be motivated to do more than the minimum required of them.
Weird how we went from like the 4 hour workweek and all those charts about how people historically famous in their field spent only a few hours a day on what they were most famous for, to "work 12+ hours a day or you're useless".
Also this is one of a few examples I've read lately of "oh look at all this hard work I did", ignoring that they had a newborn and someone else actually did all of the hard work.
I read gp’s formulation differently: “if you’re working 17 hours a day, you’d better stop soon unless you’re doing it for the love of doing it.” In that sense it seems like you and gp might agree that it’s bad for you and for your coworkers if you’re working like that because of external pressures.
I don’t delight in anybody’s suffering or burnout. But I do feel relief when somebody is suffering from the pace or intensity, and alleviates their suffering by striking a more sustainable balance for them.
I feel like even people energized by efforts like that pay the piper: after such a period I for one “lay fallow”—tending to extended family and community, doing phone-it-in “day job” stuff, being in nature—for almost as long as the creative binge itself lasted.
I would indeed agree with things as you've stated. I interpreted "the work they do" to mean "their craft" but if it was intended as "their specific working conditions" I can see how it'd read differently.
I think there are a lot of people that love their craft but are in specific working conditions that lead to burnout, and all I was saying is that I don't think it means they love their craft any less.
> Worried about how sustainable this is for its people, given the risk of burnout.
Well given the amount of money OpenAI pays their engineers, this is what it comes with. It tells you that this is not a daycare or for coasters or for the faint of heart, especially at a startup at the epicenter of AI competition.
There is now a massive queue of lots of desperate 'software engineers' ready to kill for a job at OpenAI and will not tolerate the word "burnout" and might even work 24 hours to keep the job away from others.
For those who love what they do, the word "burnout" doesn't exist for them.
I am not saying that’s easy work but most motivated people do this. And if you’re conscious of this that probably means you viewed it more as a job than your calling.
>Thanks to this bottoms-up culture, OpenAI is also very meritocratic. Historically, leaders in the company are promoted primarily based upon their ability to have good ideas and then execute upon them. Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering. That matters less at OpenAI then it might at other companies. The best ideas do tend to win. 2
This sets off my red flags: companies that say they are meritocratic, flat etc., often have invisible structures that favor the majority. Valve Corp is a famous example for that where this leads to many problems, see https://www.pcgamer.com/valves-unusual-corporate-structure-c...
>It sounds like a wonderful place to work, free from hierarchy and bureaucracy. However, according to a new video by People Make Games (a channel dedicated to investigative game journalism created by Chris Bratt and Anni Sayers), Valve employees, both former and current, say it's resulted in a workplace two of them compared to The Lord of The Flies.
I think in this structure people only think locally and they are not concerned with the overall mission of the company and do not actively think about morality of the mission or if they are following it.
In my experience, front-line and middle managers will penalize workers that stray from their explicit goals because they think something else more readily contributes to the company’s mission.
Kind of sounds like a traditional public company is a constitutional monarchy, not always the best but at least there's a balance of interests. While a private company could either be an autocracy or oligarchy where sucking up and playing tribal politics is the only way to survive.
Anyone tried setting up a modestly sized tech company where employees are randomly placed into various seniority roles at the start of each year? Of course considering capabilities and some business continuity concerns…
Could work with a bunch of similarly skilled people in a narrow niche
Wild that OpenAI is changing so much that you can post about how things have radically changed in a year, and consider yourself a long-timer after < 16 months. I'm highly skeptical that an org this big is based on merit and there wasn't a lot of political maneuvering. You can have public politics or private politics, but no politics doesn't exist - at least after you hit <some> number of people where "some" is definitely < the size of OpenAI. All I hear about OpenAI is politics these days,
This was good, but the one thing I most wanted to know about what it's like building new products inside of OpenAI is how and how much LLMs are involved in their building process.
Yes, same, that's a fascinating question that people are pretty tight-lipped about.
Note he was specifically on the team that was launching OpenAI's version of a coding agent, so I imagine the numbers before that product existed could be very different to the numbers after.
Absolutely hilarious to assert that "everyone at OpenAI is trying to do the right thing" and then compare it to Los Alamos, the creators of the nuclear bomb.
- The company was a little over 1,000 people. One year later, it is over 3,000.
- Changes direction on a dime.
- Very secretive place.
With the added "everything is a rounding error compared to GPU cost" and "this creates a lot of strange-looking code because there are so many ways you can write Python".
Doesn't it bother anybody that their product heavily relies on FastAPI according to this post yet they haven't donated to the project or aren't listed as sponsors?
>Safety is actually more of a thing than you might guess
Considering all the people who led the different safety teams have left or been fired, Superalignment has been a total bust and the various accounts from other employees about the lack of support for safety work I find this statement incredibly out of touch and borderline intentionally misleading.
> Good ideas can come from anywhere, and it's often not really clear which ideas will prove most fruitful ahead of time.
Is that why they have a dozens of different models?
> Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering.
I don't think the Sam/Board drama confirms this.
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.
Did you thank your OpenAI overlords for letting you access their sacred latest models?
+-+-
This reads like an Ad for Open AI or an attempt by the author to court them again? I am not sure how anyone can take his words seriously.
For a company that has grown so much in such a short time, I continue to be surprised by its lack of technical writers. Saying docs could be better is an euphemism, but I still can't find fellow tech writers working there. Compare this with Anthropic and its documentation.
I don't know what's the rationale for not hiring tech writers other than nobody suggesting it yet, which is sad. Great dev tools require great docs, and great docs require teams that own them and grow them as a product.
The higher ups don't think there's value in that. Back at DigitalOcean they had an amazing tech writing team, with people with years of experience, doing some of the best tech docs in the industry, when the layoffs started the writing team was the first to be cut.
I didn't realise that team at DO was let go, what a horrible decision - the SERP footprint of DO was immense and the quality of the content was fantastic.
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in. There's an API you can sign up and use–and most of the models (even if SOTA or proprietary) tend to quickly make it into the API for startups to use.
The comparison here should clearly be with the other frontier model providers: Anthropic, Google, and potentially Deepseek and xAI.
Comparing them gives the exact opposite conclusion - OpenAI is the only model provider that gates API access to their frontier models behind draconic identity verification (also, Worldcoin anyone?). Anthropic and Google do not do this.
OpenAI hides their model's CoT (inference-time compute, thinking). Anthropic to this day shows their CoT on all of their models.
Making it pretty obvious this is just someone patting themselves on the back and doing some marketing.
Yes, also OpenAI being this great nimble startup that can turn on a dime, while in reality Google reacted to them and has now surpassed them technically in every area, except image prompt adherence.
Anthropic has banned my own accounts before I used them for violating ToS. Appeals do nothing. Only when I started using a Google login did they stop banning them. This isn’t an OpenAI-only problem.
What do people not like about Azure IAM? That's the one I'm most familiar with, and I've always thought it was decent, pretty vanilla.
When I go to AWS it looks similar except role assignments can't be scoped, so needs more duplication and maintenance. In that way Azure seems nicer. In everything else, it seems pretty equivalent.
But I see it catching flak occasionally on HN, so curious what others dislike.
> There's a corollary here–most research gets done by nerd-sniping a researcher into a particular problem. If something is considered boring or 'solved', it probably won't get worked on.
This is a very interesting nugget, and if accurate this could become their Achilles heel.
It's not "their" Achilles heel. It's the Achilles heel of the way humans work.
Most top-of-their-field researchers are on top of their field because they really love it, and are willing to sink insane amount of hours into doing things they love.
If anything about OpenAI which should bothers people is how they fake to be blind to the consequences because of "the race".
Leveraging the decision IF and WHAT should be done to the top heads only never worked well.
Are any of these causal to OpenAI's success? Or are they incidental? You can throw all of this "culture" into an org but I doubt it'd do anything without the literal world-changing technology the company owns.
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing
I doubt many people would say something contrary to this about their (former) colleagues, which means we should always take this with a (large) grain of salt.
Do I think (most) AT&T employees wanted to let the NSA spy on us? Probably not. Google engineers and ICE? Palantir and.. well idk i think everyone there knows what Palantir does.
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement.
That is literally how openAI gets data for fine-tuning it's models, by testing it on real users and letting them supply data and use cases. (tool calling, computer use, thinking, all of these were championed by people outside and they had the data)
> An unusual part of OpenAI is that everything, and I mean everything, runs on Slack.
Not that unusual nowadays. I'd wager every tech company founded in the last ~10 years works this way. And many of the older ones have moved off email as well.
This is silicon valley culture on steroids: I really have to question if it is positive for any involved party. Codex almost has no mindshare and rightly so. It's a textbook also ran, except it came from the most dominant player and was outpaced by Claude code on the order of weeks.
Why go through all that? Instead what would have been a much better scenario is openai carefully assessing different approaches to agentic coding and releasing a more fully baked product with solid differentiation. Even Amazon just did that with Kiro
What I read in this blogpost is a description of how every good research organization works, from academia to private labs. The command and control, centrally planned approach doesn't work.
I’m at a point my life and career where I’d never entertain working those hours. Missed basketball games, seeing kids come home from school, etc. I do think when I first started out, and had no kiddos, maybe some crazy sprints like that would’ve been exhilarating. No chance now though
wham.
thanks for sharing anecdotal episodes from OAI's inner mecahnism from an eng perspective. I wonder if OAI wouldn't be married to Azure would the infra be more resilient, require less eng effort to invent things to just run (at scale).
What i haven't seen much is the split between eng and research and how people within the company are thinking about AGI and the future, workforce, etc.
Is it the usual SF wonderland or is there an OAI specific value alignment once someone is working there.
Yeah legit interested in their toolchain. I tried Pants and had a bad experience. Bazel is too heavyweight imo and doesn't deal with a variety of dependencies well.
Chunking the codebase that you entirely own into packages is as if you're intentionally wanting to make your life miserable by imposing the same kind of volatility that you would otherwise find in the development process of building the Linux distribution. It's a misnomer.
Good writing, enjoyed that article. Also I guess it looks like there was more time spent writing this article than actually working at OpenAI? 1 year tenure and a paternity leave?
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.
To quote Jonathan Nightingale from his famous thread on how Google sabotaged Mozilla [1]:
--- start quote ---
The question is not whether individual sidewalk labs people have pure motives. I know some of them, just like I know plenty on the Chrome team. They’re great people. But focus on the behaviour of the organism as a whole. At the macro level, google/alphabet is very intentional.
Granted the "OpenAI is not a monolith" comment, interesting that use of AI assisted coding was a curious omission from the article -- no mention if encouraged or discouraged.
> What's funny about this is there are exactly three services that I would consider trustworthy: Azure Kubernetes Service, CosmosDB (Azure's document storage), and BlobStore.
CosmosDB is trustworthy? Everyone I know that used CosmosDB ended up rewriting their code because of throttling.
It's pretty heavily used within Azure itself, and my old team didn't have any issues, even though we had a high volume service with stored procs, triggers, list indexes, etc. And it was cheaper and faster than the SQL Server instance it replaced (granted, the original Sql-based app had tons of big joins and wasn't designed to scale).
I know a few other teams when I was working elsewhere that had to migrate off Spanner due to throttling and random hiccups, though if Google uses it for zanzibar, they must not have that problem internally. Maybe all these companies increase throttling limits for first party use cases.
My current team uses dynamo, which has also given throttling issues, but generally only when they try to do things that it's not designed for (bulk updating a bunch of records in the same partition). Other than that, it seems reliable (incredibly consistent low latencies) and a bit cheaper than my experience with cosmos, though with fewer features.
They all seem to have their own pros and cons in my experience.
>It's hard to imagine building anything as impactful as AGI
>...
>OpenAI is also a more serious place than you might expect, in part because the stakes feel really high. On the one hand, there's the goal of building AGI–which means there is a lot to get right.
I'm kind of surprised people are still drinking this AGI Koolaid
for real. same. the level of delusion. i think what'll happen is they'll get some really advanced agents that can effectively handle most general tasks and they'll call it AGI and say they've done it. it won't really be AGI, but a lot of people will have bought into the lie thanks to the incredibly convincing facsimile they'll have created.
While their growth is faster and technology different, the atmosphere feels very much like AWS back in 2014. I stayed for 8 years because I enjoyed it so much.
I now have developed a hate with a small sprinkle of love relationship with AI.
This past week I canceled my $20 subscription to GPT, advocated my friends do the same (i got them hooked) and just will be using Gemini from now on. It can create AI maps instantly for road trip travel routes to planning creek tubing trips and more. GPT does not do maps and I was paying $20 for it while Gemini is free? Bye!
Further and more important this guy says in his blog he is happy to help with the destruction of our (white collar) society that will cause many MANY financial to emotional pain while he lives high off the hog? Impending 2030 depression .. 100 years later Im unfortunately betting!
Now AI could indeed help us cure disease but if the major are destitute while a few hundreds or so live high of the hog the benefits of AI are canceled out.
AI can definitely do the job ten people use to yet NOW its just one person typing into a text prompt to complete the tasks ten use.
Why are we here..sprinting towards this goal of destruction let China destory themselves!!!
"Safety is actually more of a thing than you might guess if you read a lot from Zvi or Lesswrong. There's a large number of people working to develop safety systems. Given the nature of OpenAI, I saw more focus on practical risks (hate speech, abuse, manipulating political biases, crafting bio-weapons, self-harm, prompt injection) than theoretical ones (intelligence explosion, power-seeking). That's not to say that nobody is working on the latter, there's definitely people focusing on the theoretical risks. But from my viewpoint, it's not the focus."
This paragraph doesn't make any sense. If you read a lot of Zvi or LessWrong, the misaligned intelligence explosion is the safety risk you're thinking of! So readers "guesses" are actually right that OpenAI isn't really following Sam Altman's:
"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could."[0]
What I really wanted to know if OpenAI(and other labs for that matter) actually use their own products and not just casually but make LLM a core of how they operate. For example: using LLM for coding in prod, training/fine-tuning internal models for aligning on the latest updates, finding answer etc. Do they put their money where their mouth is, do LLMs help with productivity? There is no mention of it in the article, so I guess they don't?
I don’t know, but I’d guess they are using them heavily, though in a piecemeal fashion.
As impressive as LLMs can be at one-shotting certain kinds of tasks, working in a sprawling production codebase like the one described with tight performance constraints, subtle interdependencies, cross-cutting architectural concerns, etc. still requires a human driving most of the time. LLMs help a lot for this kind of work, but the human is either carefully assimilating their output or carefully choosing spots where (with detailed prompts) they can generate usable code directly.
Again, just a guess, but this my impression of how experienced engineers (including myself) are using LLMs in big/nontrivial codebases, and I’ve seen no indication that engineering processes at the labs are much different from the wider industry.
this post was such a brilliant read. to read about how they still have a YC-style startup culture, are meritocratic, and people get to work on things they find interesting.
as an early stage founder, i worry about the following a lot.
- changing directions fast when i lose conviction
- things breaking in production
- and about speed, or the lack of it
I learned to actually not worry about the first two.
But if OpenAI shipped Codex in 7 weeks, small startups have lost the speed advantage they had. Big reminder to figure out better ways to solve for speed.
That’s how I imagined it, kind of a hybrid of what I’ve seen called Product Marketing Manager and Product Analyst, but other replies and OpenAI job postings indicate maybe it’s a different role, more hands on building, getting from research to consumer product maybe?
He joined last year May and left recently. About one year of stay.
I wonder one year is enough time for programmers to understand codebase, let alone meaningfully contributing patches? But then we see that job hopping is increasing common, which results in the drop in product qualities. I wonder what values are the job hoppers adding to the company.
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.
I appreciate where the author is coming from, but I would have just left this part out. If there is anything I've learned during my time in tech (ESPECIALLY in the Bay Area) it's that the people you didn't meet are absolutely angling to do the wrong thing(TM).
I've been in circles with very rich and somewhat influential tech people and it's a lot of talk about helping others, but somehow beneath the veneer of the talk of helping others you notice that many of them are just ripping people off, doing coke and engaging in self-centered spiritual practices (especially crypto people).
I also don't trust that people within the system can assess if what they're doing is good or not. I've talked with higher ups in fashion companies who genuinely believe their company is actually doing so much great work for the environment when they basically invented fast-fashion. I've felt it first hand personally how my mind slowly warped itself into believing that ad-tech isn't so bad for the world when I worked for an ad-tech company, and only after leaving did I realize how wrong I was.
I agree. I've met some very wealthy people before and when you're outside of the bubble you start to see how $$$ helps a lot to justify anything you do as helping people. A lot of wealthy people will say "look I contribute to this cause!" as an indulgence in the religious sense to "counteract" much of what they do in their day to day or the raison d'etre of their work.
It's weird to say, but some people simply are not tuned in to the long term ramifications of their work. The "fuck you I got mine" mentality is even at play in many outwardly appearing progressive communities where short term gain is a moral imperative above doing good.
And it's not just about some people doing good and others doing bad. Individual employees all doing the "right thing" can still be collectively steered in the wrong direction by higher ups. I'd say this describes the entirety of big tech.
When your work provides lunch in a variety of different cafeterias all neatly designed to look like standalone restaurants, directly across from which is an on-campus bank that will assist you with all of your financial needs before you take your company-operated Uber-equivalent to the next building over and have your meeting either in that building's ballpit, or on the tree-covered rooftop that - for some reason - has foxes on top, it's easy to focus only on the tiny "good" thing you're working on and not the steaming hot pile of garbage that the executives at your company are focused on but would rather you not see.
Edit: And that's to say nothing of the very generous pay...
Yes. We already know that Altman parties with extremists like Yarvin and Thiel and donates millions to far-right political causes. I’m afraid the org is rotten at its core. If only the coup had succeeded.
>It's hard to imagine building anything as impactful as AGI,
Where is this AGI that you've built then? The reason for the very existence of that term is an acknowledgement that what's hyped today as AI isn't actually what AI used to mean, but the hype cycle VC money depends on using the term AI, so a new term was invented to denote the thing the old term used to denote. Do we need yet another term because AGI is about to get burned the same way?
> and LLMs are easily the technological innovation of the decade.
Sorry, what? I'm sure it feels that way from some corners of that particular tech bubble, but my 73 year old mom's life is not impacted by LLMs at all - well, except for when she opens her facebook feed once a month and gets blasted with tons of fake BS. Really something to be proud of for us as an industry? A tech breakthrough of the last decade that might have literally saved her life were mRNA vaccines though, and I could likely come up with more examples if I thought about it for more than 3 seconds.
One thing I was interested to read but didn't find in your post is: does everyone believe in the vision that the leadership has shared publicly, e.g. [1]? Is there some skepticism that the current path leads to AGI, or has everyone drunk the Kool-Aid? If there is some dissent, how is it handled internally?
Not the author, but I work at OpenAI. There are wide variety of viewpoints and it's fine for employees to disagree on timelines and impact. I myself published a 100-page paper on why I think transformative AGI by 2043 is quite unlikely (https://arxiv.org/abs/2306.02519). From informal discussion, I think the vast majority of employees don't think that we're mere years from a post-scarcity utopia where we can drink mai tais on the beach all day. But there is a lot of optimism about the rapid progress in AI, and I do think that it's harder to forecast the path of a technology that has the potential to improve itself. So much depends on your definition of AGI. In a sense, GPT-4 is already AGI in the literal sense that it's an artificial intelligence with some generality. But in the sense of automating the economy, it's of course not close.
The hype around this tech strongly promotes the narrative that we're close to exponential growth, and that AGI is right around the corner. That pretty soon AI will be curing diseases, eradicating poverty, and powering humanoid robots. These scenarios are featured in the AI 2027 predictions.
I'm very skeptical of this based on my own experience with these tools, and rudimentary understanding of how they work. I'm frankly even opposed to labeling them as intelligent in the same sense that we think about human intelligence. There are certainly many potentially useful applications of this technology that are worth exploring, but the current ones are awfully underwhelming, and the hype to make them seem more than they are is exhausting. Not to mention that their biggest potential to further degrade public discourse and overwhelm all our communication channels with even more spam and disinformation is largely being ignored. AI companies love to talk about alignment and safety, yet these more immediate threats are never addressed.
Anyway, it's good to know that there are disagreements about the impact and timelines even inside OpenAI. It will be interesting to see how this plays out, if nothing else.
My definition: AGI will be here when you can put it in a robot body in the real word and interact with it like you would a person. Ask it to drive your car or fold your laundry or make a mai tai and if it doesn’t know how to do that, you show it, and then it can.
Maybe I'm biased, but I actually think it's a pretty good definition, as definitions go. All of our narrow measures of human intelligence that we might be tempted to use - win at games, solve math problems, ace academic tests, dominate at programming competitions - are revealed as woefully insufficient as soon as an AI beats them but fails to generalize far beyond. But if you have an AI that can generate lots of revenue doing a wide variety of real work, then you've probably built something smart. Diverse revenue is a great metric.
I also find it interesting that the definition always includes the "outperforms humans" qualifier. Maybe our first AGIs will underperform humans.
Imagine I built a robot dog that behaved just like a biological dog. It bonds with people, can be trained, shows emotion, communicates, likes to play, likes to work, solves problems, understands social cues, and is loyal. IMHO, that would qualify as an AGI even though it isn't writing essays or producing business plans.
> IMHO, that would qualify as an AGI even though it isn't writing essays or producing business plans.
I'm not sure it would, though. The "G" in AGI stands for "General", which a dog obviously can't showcase. The comparison must be done against humans, since the goal is to ultimately have the system perform human tasks.
The definition mentioned by tedsanders seems adequate to me. Most of the terms are fuzzy ("most", "outperform"), but limiting the criteria to economic value narrows it down to a measurable metric. Of course, this could be exploited by building a system that optimizes for financial gain over everything else, but this wouldn't be acceptable.
The actual definition is not that important, IMO. AGI, if it happens, won't appear suddenly from a singular event, but as a gradual process until it becomes widely accepted that we have reached it. The impact on society and our lives would be impossible to ignore at that point. The problem with this is that along the way there will be charlatans and grifters shouting from the rooftops that they've already cracked it, but this is nothing new.
I would say... yes. But with the strong caveat that when used within the context of AGI, the individual/system should be able to showcase that intelligence, and the results should be comparable to those of a neurotypical adult human. Both a dog and a toddler can show signs of intelligence when compared to individuals of their similar nature, but not to an adult human, which is the criteria for AGI.
This is why I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI. It could certainly be considered a step in that direction, but not strictly AGI.
But again, I don't think that a strict definition of AGI is helpful or necessary. The impact of a system with such capabilities would be impossible to deny, so a clear definition doesn't really matter.
> Would that be services generally provided by government?
Most services provided by governments are economically valuable, as they provide infrastructure that allow individual actors to perform better, increasing collective economic output. (For e.g. high-expenditure infrastructure it could be quite easily argued though that they are not economically profitable.)
Externally there's no rigorous definition as to what constitutes AGI, so I'd guess internally it's not one monolithic thing they're targeting either. You'd need everyone to take a class about the nature of intelligence first, and all the different kinds of it just to begin with. There's undoubtedly dissent internally as to the best way to achieve chosen milestones on the way there, as well as disagreement that those are the right milestones to begin with. Think tactical disagreement, not strategic. If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?
Well, Sam Altman has a clear definition of ASI, and AGI is something they've been thinking about for a long time, so presumably they must have some accepted definition of it.
My question was whether everyone believes this vision that ASI is "close", and more broadly whether this path leads to AGI.
> If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?
People can have all sorts of reasons for working with a company. They might want to work on cutting-edge tech with smart people and infinite resources, for investment or prestige, but not necessarily buy into the overarching vision. I'm just wondering whether such a profile exists within OpenAI, and if so, how it is handled.
My biggest problem with these new companies is their core philosophy.
First, these companies generate their own demand — natural demand for their products rarely exists. Therefore, they act more like sellers than developers.
Second, they always follow the same maxim: "What's the next logical step?" This naturally follows from the first premise, because this allows you to ignore everything "real". You are simply bound to logic. They have no "problems" to solve, yet they offer you solutions - simply as a logical consequence of their own logic. Has anyone ever actually asked if coders would use agents if it meant losing their jobs?
Thirdly, this naturally brings to light the B2B philosophy. The customer is merely a catalyst that will eventually become superfluous.
Fourth, the same excuse and ignorance of the form "(we don't know what we are doing, but) time will tell". What if time tells you "this is bad and you should and could have known better?"
- no such thing as open ai as a decision unit, there are people at the top who decide + shareholder pressure
- a narcissist optimizes for himself and has no affective empathy: cruel, extremely selfish, image oriented, power hungry, liar etc.
- having no structure means having a hidden structure and hence real power of the few above, no accountability (narcissists love this too)
- framing this meritocracy is a positive framing, it is very easy to hide incompetence
- people want to do good, great, perfect naive and you can utilize this motivation to let them burn out and work for YOUR goals as a leader... another narcissist is great doing this to people and the richest man in the world per share price
all in all, having this kind of mindset is good for start ups or get the lucky punch, but AGI will be brought by Anthropic, Google and Ilya :) you will not have series of lucky punches, you have to have a direction
I think Sam Altman, a terrible narcissist, uses open AI to feel great and he has no strategy but using others for their own benefit, because narcissists dont care, they just care about their image and power... and that is why open AI goes down... bundling with Microsoft was a big red flag in the first place...
when i think of openAI, it is a bit like Netscape Navigator + Internet Explorer in one :)
Interesting that so many folks from Meta joined OpenAI - but Meta wasn't really able to roll its own competitive foundational model, so is that a bad sign?
Kind of interesting that folks aren't impressed by Azure's offering. I wonder if OpenAI is handicapped by that as well, compared to being on AWS or GCP.
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing
Of course they are. People in orgs like that are passionate, they want to work on the tech cause LLMs are once-in-a-lifetime tech breakthrough. But they don’t realize enough that they’re working for bad people. Ultimately all of that tech is in the hands of Altman, and that guy hasn’t proven to be the saint he hopes to become.
I definitely didn't get that feeling. There was a whole section about how their infra resembles Meta and they've had excellent engineers hired from Meta.
It would be interesting to read the memoirs of former OpenAI employees that dive into whether they thought the company was on the right track towards AGI. Of course, that’s an NDA violation at best.
It sounds to me in contrast to the grandiose claims OpenAI tries to make about its own products - it views AI as 'regular technology', and is pragmatically tries to build viable products using it.
> It's hard to imagine building anything as impactful as AGI, and LLMs are easily the technological innovation of the decade.
I really can't see a person with at least minimal self-awareness talking their own work up this much. Give me a break dude. Plus, you haven't built AGI yet.
Can't believe there's so little critique of this post here. It's incredibly self-serving.
He joins a proven unicorn at its inflection point and then leaves mere days after hitting his vesting cliff. All of this "learning" and "experience" talk is sopping wet with cynicism.
He co-founded and sold Segment. You think he was just at OpenAI to collect a check? He lays out exactly why he joined OpenAI and why he's leaving. If you think everyone does things only for cynical reasons, it might be a reflection more of your personal impulses than others.
Just because someone claims they are speaking in good faith doesn’t mean we have to take their word for it. Most people in tech dealing with big money are doing it for cynical reasons. The talk of changing the world or “doing something hard” is just marketing typically.
Calvin works incredibly hard and has very little ego. I was surprised he joined OpenAI since he's loaded from the Segment acquisition, but if anyone it makes sense he would do this. He's always looking to find the hardest problem and work on it.
That's what he did at Segment even in the later stages.
Newborns need constantly mom, not dad. Moms need husbands or their moms to help. The way it works is you agree what to do as a family (to do it or not to do it) and everybody is happy with their lives. You can be a great dad and husband and still do all of it when it makes sense and your wife supports it etc. Not having kids in the first place could be considered ego driven, not this.
No, he's right. I just went through newborn phase right now and the only person that needed is mom. Kid wanted nothing to do with me. He just wanted food and sleep.
I appreciate the edit, but "sopping wet with cynicism" still breaks the site guidelines, especially this one: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
Understood, in the future I will refrain from questioning motives in featured articles. I can no longer edit my post but you may delete or flag it so that others will not be exposed to it.
Given that he leaves OpenAI almost immediately after hitting his 25% vesting cliff, it seems like his employment at OpenAI and this blog post (which makes him and OpenAI look good while making the reader feel good) were done cynically. I.e. primarily in his self-interest. What makes it even worse is his stated reason for leaving:
> It's hard to go from being a founder of your own thing to an employee at a 3,000-person organization. Right now I'm craving a fresh start.
This is just wholly irrational for someone whose credentials indicate someone who is capable of applying critical thinking towards accomplishing their goals. People who operate at that level don't often act on impulse or suddenly realize they want to do something different. It seems much more likely he intentionally planned to give himself a year of vacation at OpenAI, which allows him to hedge a bit while taking a breather before jumping back into being a founder.
Is this essentially speculation? Yes. Is it cynical to assume he's acting cynically? Yes. Speculation on his true motives is necessary because otherwise we'll never get confirmation, short of him openly admitting to it (which is still fraught). We have to look at behaviors and actions and assess likelihoods from there.
There's nothing cynical about leaving a job after cliffing. If a company wants a longer commitment than a year before issuing equity, it can set a longer cliff. We're all adults here.
I don't see anything interesting about that detail; you keep trying to make something out of it, but there's nothing there to talk about.
There might be some marginal drama to scrape up here if the post was negative about OpenAI (I'd still be complaining about trying to whip up drama where there isn't any), but it's kind of glowing about them.
Well now the goalpost has shifted from "it's not cynical" to "even if it is cynical it doesn't matter" and dang has already warned me so I'm hesitant to continue this thread. I'll just say that once you recognize that a lot of the fluff in this article is cynically motivated, it reduces your risk of giving the information presented more meaning than is really there.
I remember this being common business practice for written communication (email, design documents) circa 20 years ago, so that people at least read the important points, or can quickly pick them out again later.
Possibly the dumbest, blandest, most annoying kind of cultural transference imaginable. We dreamed of creating machines in our image, and now we're shaping ourselves in the image of our machines. Ugh.
I think we’ve always shaped ourselves based what we’re capable of building. Think of how infrastructure such as buildings and roadways shape our lives within them. What I do agree with you, is how LLMs are shaping our mental thought how we are offloading a lot of our mental capacities with blind trust in the LLM output.
I'm 50, worked at few cool places and lots of boring ones. to paraphrase, Tolstoy tends to be right -all happy families are similar and unhappy families are unhappy in unique ways
OpenAI is currently selected for the brightest and young excited minds, (and a lot of money).. bright, young (as in full of energy) and excited people will work well anywhere- esp if given a fair amount of autonomy.
Young people talking about how hard they worked is not a sign of a great corp culture, just a sign that they are in the super excited stage of their careers
In the long run who knows, I tend to view these companies as groups of like minded people and groups of people change and the dynamic changees overnight -so if they can sustain that culture sure, but who knows..
I said this elsewhere on the thread and so apologize for repeating, but: I know mid-career people working at this firm who have been through these conditions, and they were energized by the experience. They're shipping huge stuff that tens of millions of people will use almost immediately.
The cadence we're talking about isn't sustainable --- has never been sustained anywhere --- but if insane sprints like this (1) produce intrinsically rewarding outcomes and (2) punctuate otherwise-sane work conditions, they can work out fine for the people involved.
It's completely legit to say you'd never take a job where this could be an expectation.
On one hand, yes. But on the other hand, he's still in his 30s. In most fields, this would be considered young / early career. It kind of reinforces the point that bright, young people can get a lot done in the tech world.
Calvin is loaded from the Segment exit, he would not work if he wasn't excited about the work. The other founders just went on to do their own thing or non-profits.
I worked there for a few years and Calvin is definitely more of the grounded engineering guy. He would introduced him as an engineer and just get talking code. He would spend most of his time with the SRE/core team trying to tackle the hardest technical problem at the company.
This is a politically correct farewell letter. Obviously something we little people who need jobs have to resort to so the next HR manager doesn't think we are a risk to stock valuation. For a deeper understanding, read Empire of AI by Karen Hao. She defrocks Sam Altman to reveal he is just another human. Like Steve Jobs, he is an adept salesman appealing to the naïve altruistic sentiments of humans while maintaining his singular focus on scale. Not so different from the archetype of Rockefeller in his pursuit of monopoly through scale using any means, sam is no different than google which even forgot its own rallying cry ‘dont be evil’. Other actors in the story seem to have been infected by the same meme virus, leaving openAI for their own empires- Musk left after he and altman conflicted over who would be CEO.(birth of xAI). Amodei, his sister and others left to start anthropic. Sutskever left to start ‘safe something or other’(smacks of the same misdirection sam used when openAI formed as a nonprofit ) giving the idea of a nonprofit a mantle of evil since OPENAI has pivoted to profit.
The bottom line is that scaling requires money and the only way to get that in the private sector is to lure those with money with the temptation they can multiply their wealth.
Things could have been different in a world before financial engineers bankrupted the US (the crises of enron, salomon bros, 2008 mortgage debacle all added hundreds of billions to us debt as the govt bought the ‘too big to fail’ kool-aid and bailed out wall street by indenturing main street). Now 1/4 of our budget is simply interest payment on this debt. There is no room for govt spending on a moonshot like AI. This environment in 1960 would have killed Kennedy’s inspirational moonshot of going to the moon while it was still an idea in his head in his post coital bliss with Marilyn at his side.
Today our govt needs money just like all the other scrooge-infected players in the tower of debt that capitalism has built.
Ironically it seems china has a better chance now. It seems its release of deep seek and the full set of parameters is giving it a veneer of altruistic benevolence that is slightly more believable than what we see here in the west. China may win simply on thermodynamic grounds. Training and research in DL consumes terawatt hours and hundreds of thousands of chips. Not only are the US models on older architectures (10-100x more energy inefficient) but the ‘competition’ of multiple players in the US multiplies the energy requirements.
Would govt oversight have been a good thing? Imagine if General Motors, westinghouse, bell labs, and ford competed in 1940 each with their own manhattan project to develop nuclear weapons ? Would the proliferation of nuclear have resulted in human extinction by now?
Will AI’s contribution to global warming be just as toxic global thermonuclear war?
These are the questions that come to mind after Hao’s historic summary.
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.
I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.
What keeping AI free for everyone is akin to is keeping an addictive drug free for everyone so that it can be sold in larger quantities later.
One can argue that some technology is beneficial. A mosquito net made of plastic immediately improves one's comfort if out in the woods. But AI doesn't really offer any immediate TRUE improvement of life, only a bit more convenience in a world already saturated in it. It's past the point of diminishing returns for true life improvement and I think everyone deep down inside knows that, but is seduced by the nearly-magical quality of it because we are instinctually driven to seek out advantags and new information.
"I would argue that there are very few benefits of AI, if any at all."
OK, if you're going to say things like this I'm going to insist you clarify which subset of "AI" you mean.
Presumably you're OK with the last few decades of machine learning algorithms for things like spam detection, search relevance etc.
I'll assume your problem is with the last few years of "generative AI" - a loose term for models that output text and images instead of purely being used for classification.
Are predictive text keyboards on a phone OK (tiny LLMs)? How about translation engines like Google Translate?
Vision LLMs to help with wildlife camera trap analysis? How about to help with visual impairments navigate the world?
I suspect your problem isn't with "AI", it's with the way specific AI systems are being built and applied. I think we can have much more constructive conversations if we move beyond blanket labeling "AI" as the problem.
1. Here is the subset: any algorithm, which is learning based, trained on a large data set, and modifies or generates content.
2. I would argue that translation engines have their positives and negatives, but a lot of them are negative, because they lead to translators losing their jobs, and a loss in general for the magical qualities of language learning.
3. Predictive text: I think people should not be presented with possible next words, and think of them on their own, because that means they will be more thoughtful in their writing and less automatic. Also, with a higher barrier to writing something, they will probably write less and what they do write will be of greater significance.
4. I am against all LLMs, including wildlife camera trap analysis. There is an overabundance of hiding behind research when we really already know the problem fairly well. It's a fringe piece of conservation research anyway.
5. Visual impairments: one can always appeal to helping the disabled and impaired, but I think the tradeoff is not worth the technological enslavement.
6. My problem is categorically with AI, not with how it is applied, PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors to make the net effect always negative. It's human nature.
I wish your parent comment didn't get downvoted, because this is an important conversation point.
"PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors"
I think this is vibes based on bad headlines and no actual numbers (and tbf, founders/CEO's talking outta their a**). In my real-life experience the advantages of specifically generative AI far outweighs the disadvantages, by like a really large margin. I say this as someone academically trained on well modeled Dynamical systems (the opposite of Machine Learning). My team just lost. Badly.
Case-in-point: I work with language localization teams that have fully adopted LLM based translation services (our DeepL.com bills are huge), but we've only hired more translators and are processing more translations faster. It's just..not working out like we were told in the headlines. Doomsday Radiologist predictions [1], same thing.
> I think this (esp the sufficient number of bad actors) is vibes based on bad headlines and no actual numbers. In my real-life experience the advantages of specifically generative AI far outweighs the disadvantages, by like a really large margin.
We define bad actors in different ways. I also include people like tech workers, CEOs who program systems that take away large numbers of jobs. I already know people whose jobs were eroded based on AI.
In the real world, lots of people hate AI generated content. The advantages you speak of are only to those who are technically minded enough to gain greater material advantages from it, and we don't need the rich getting richer. The world doesn't need a bunch of techies getting richer from AI at the expense of people like translators, graphic designers, etc, losing their jobs.
And while you may have hired more translators, that is only temporary. Other places have fired them, and you will too once the machine becomes good enough. There will be a small bump of positive effects in the short term but the long term will be primarily bad, and it already is for many.
I think we'll have to wait and see here, because all the layoffs can be easily attributed to leadership making crappy over-hiring decisions over COVID and now not being able to admit to that and giving hand-wavy answers over "I'm firing people because AI" to drive different headline narratives (see: founders/CEO's talking outta their a**).
It may also be the narrative fed to actual employees, saying "You're losing your job because AI" is an easy way to direct anger away from your bad business decisions. If a business is shrinking, it's shrinking, AI was inconsequential. If a business is growing AI can only help. Whether it's growing or shrinking doesn't depend on AI, it depends on the market and leadership decision-making.
You and I both know none of this generative AI is good enough unsupervised (and realistically, with deep human edits). But they're still massive productivity boosts which have always been huge economic boosts to the middle-class.
Do I wish this tech could also be applied to real middle-class shortages (housing, supply-chain etc.), sure. And I think it will come.
Just to add one final point: I included modification as well as generation of content, since I also want to exclude technologies that simply improve upon existing content in some way that is very close to generative but may not be considered so. For example: audio improvent like echo removal, ML noise removal, which I have already shown to interpolate.
I think AI classification and stuff like classification is probably okay but of course with that, as with all technologies, we should be cautious of how we use it as it can be used also in facial recognition, which in turn can be used to create a stronger police state.
> I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.
Personally, my life has significantly improved in meaningful ways with AI. Apart from the obvious work benefits (I'm shipping code ~10x faster than pre-AI), LLMs act as my personal nutritionist, trainer, therapist, research assistant, executive assistant (triaging email, doing SEO-related work, researching purchases, etc.), and a much better/faster way to search for and synthesize information than my old method of using Google.
The benefits I've gotten are much more than conveniences and the only argument I can find that anyone else is worse off because of these benefits is that I don't hire junior developers anymore (at max I was working with 3 for a contracting job). At the same time, though, all of them are also using LLMs in similar ways for similar benefits (and working on their own projects) so I'd argue they're net much better off.
A few programmers being better off does not make an entire society better off. In fact, I'd argue that you shipping code 10x faster just means in the long run that consumerism is being accelerated at a similar rate because that is what most code is used for, eventually.
I spent much of my career working on open source software that helped other engineers ship code 10x faster. Should I feel bad about the impact my work there had on accelerating consumerism?
I don't know if you should feel bad or not, but even I know that I have a role to play in consumerism that I wish I didn't.
That doesn't necessitate feeling bad because the reaction to feel good or bad about something is a side effect of the sort of religious "good and evil" mentality that probably came about due to Christianity or something. But *regardless*, one should at least understand that because our world has reached a sufficient critical mass of complexity, even the things we do that we think are benign or helpful can have negative side effects.
I never claim that we should feel bad about that, but we should understand it and attempt to mitigate it nonetheless. And, where no mitigation is possible, we should also advocate for a better societal structure that will eventually, in years or decades, result in fewer deleterious side effects.
The TV show The Good Place actually dug into this quite a bit. One of the key themes explored in the show was the idea that there is no ethical consumption under capitalism, because eventually the things you consume can be tied back to some grossly unethical situation somewhere in the world.
i don't really understand this thought process. all technology has it's advantages and drawbacks and we are currently going through the hype and growing pains process.
you could just as well argue the internet, phones, tv, cars, all adhere to the exact same prisoner's dilemma situation you talk about. you could just as well use AI to rubber duck or ease your mental load than treat it like some rat-race to efficiency.
True, but it is meaningful to understand whether the "quantity" advantages - drawbacks decreases over time, which I believe it does.
And we should indeed apply the logic to other inventions: some are more worth using than others, whereas in today's society, we just use all of them due to the mechanisms of the prisoner's dilemma. The Amish, on the other hand, apply deliberation on whether to use certain technologies, which is a far better approach.
Rather myopic and crude take, in my opinion. Because if I bring out a net, it doesn't change the woods for others. If I introduce AI into society, it does change society for others, even those who don't want to use the tool. You have really no conception of subtlety or logic.
If someone says driving at 200mph is unsafe, then your argument is like saying "driving at any speed is unsafe". Fact is, you need to consider the magnitude and speed of the technology's power and movement, which you seem incapable of doing.
Nobody decides, but that doesn't mean we shouldn't discuss and figure out if there is an optimal point.
Edit: And I think you might dislike automobiles if you were one of the people living right next to a tyre factory in Brazil, which outputs an extremely disgusting rubber smell on an almost daily basis. Especially if you bought your house before the factory was built, and you don't drive much.
But you probably live in North America and don't give a darn about that.
I think this is pretty much how many Amish communities function. As for me, I prefer making decisions on how to use technology in my own life on my own.
Of course that makes sense. But for instance, with SOME technologies, I would prefer not to use them but still sort of have to because some of them become REQUIRED. For example: phones. I would prefer not to have a telephone at all as I hate them with a passion, but I still want a bank account. But that's difficult because my bank requires 2FA and it's very hard to get out of it.
So, while I agree in prinicple that it's nice to make decisions on one's own, I think it would be also nice to have the choice to avoid certain technologies that become difficult to avoid due to their entrenchment.
> everyone I met there is actually trying to do the right thing
making human beings obsolete is not the right thing. nobody in openAI is doing the right thing.
in another part of the post he says safety teams work primarily on making sure the models dont say anything racist as well as limiting helpful tips on building weapons of terror… and that AGI safety is basically not a focus. i dont think this company should be allowed to exist. they dont have ANY right to threaten the existence and wellbeing of me and my kids!
> As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google. Each of these organizations are going to take a different path to get there based upon their DNA (consumer vs business vs rock-solid-infra + data).
I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.
This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!
Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try and put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable for employers, so I guess it is working.
I guess that's why he's filthy rich.
But I had a son 14 months ago.
There was absolutely no way I was going to miss any of a critical part in my baby’s life in order to be in an office at 2am managing a bad deployment.
Maybe I gave up my chance at PPU or RSU riches. But I know I chose a different kind of wealth that can never be replaced.
> The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.
Couldn't be me. I do my work, then clock the fuck off, and I don't even have kids. I wasn't put upon this earth to write code or solve bugs, I just do that for the cash.
Either option is priceless :-)
I would not describe it as easy.
Then later they even have the balls to complain how kids these days are unruly, never acknowledging massive gaps in their own care.
Plus it certainly helps the kid with bonding, emotional stability and keeps the parent more in touch emotionally with their own kid(s).
My favorite is ‘I can’t understand why my kid didn’t turn into a responsible adult!’
Cue look back on what opportunities the parent put them in to learn and practice those skills, over the last 20 years.
Sums up western workforce attitude and why immigrants continue to crush them
except if you publicly speak in less than glowing terms their leaders
Not gonna lie, the entire article reads more like a puff piece than an honest reflection. Feels like something went down on Slack, some doors got slammed, and this article is just trying to keep them unlocked. Because no matter how rich you are in the Valley, if you're not on good terms with Sam, a lot of doors will close. He's the prodigy son of the Valley, adopted by Bill Gates and Peter Thiel, and secretly admired by Elon Musk. With Paul Graham's help, he spent 10 years building an army of followers by mentoring them and giving them money. Most of them are now millionaires with influence. And now, even the most powerful people in tech and politics need him. Jensen Huang needs his models to sell servers. Trump needs his expertise to upgrade defence systems. I saw him shaking hands with an Arab sheikh the other day. The kind of handshake that says: with your money and my ambition, we can rule the world.
You think every billionaire is gonna be unhinged like Musk calling the president a pedo on twitter?
His experience at OpenAI feels overly positive and saccharine, with a few shockingly naive comments that others have noted. I think there is obvious incentive. One reason for this is, he may be in burnout, but does not want to admit it. Another is, he is looking to the future: to keep options open for funding and connections if (when) he chooses to found again. He might be lonely and just want others in his life. Or to feel like he's working on something that "matters" in some way that his other company didn't.
I don't know at all what he's actually thinking. But the idea that he is resistant to incentives just because he has had a successful exit seems untrue. I know people who are as rich as he is, and they are not much different than me.
Also, keep in mind that people aren't the same. What seems hard to you might be easy to others, vice versa.
I don't know if this happens to anyone else, but the more I read about OpenAI, the more I like Meta. And I deleted Facebook years ago.
The fact that several commenters know the author personally goes some way to explain why the entire comment section seems to have missed the utterly unbalanced nature of the article.
I've always heard horror stories about Amazon, but when I speak to most people at, or from Amazon, they have great things to say. Some people are just optimists, too.
Every, and I mean every, technology company scours social media. Amazon has a team that monitors social media posts to make sure employees, their spouses, their friends don’t leak info, for example.
I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.
More seriously, everyone is the hero of their own story, no matter how obvious their failings are from the outside.
I’ve been burned by empathetically adopting someone’s worldview and only realizing later how messed up and self-serving it was.
"Yes, I agree there are some downsides to our product and there are some people suffering because of that - but no one is forcing them to buy from us, they're people with agency and free will, they can act as adults and choose not to buy. Now what is this talk about feedback loops and systemic effects? It's confusing, go away."
This category is where you'll also find most of the advertising business.
The self-righteous may be the source of the greatest evil by magnitude, but day-to-day, the indifferents make it up in volume.
At 8am every morning, the executives walk across the casino floor on their way to the board room, past the depressed people who have been there gambling by themselves the entire night, seeing their faces, then they go into a boardroom to strategize ways to get them those people to gamble even harder. They brag about it. It's absolute pure villainy.
We can have all sorts of interesting discussions about how to balance human independence with shared social costs, but it's not inherently "evil" to give consenting adults products and experiences they desire.
IMO, much more evil is caused by busybodies trying to tell other people what's good for them. See: The Drug War.
Even when the self-righteous are at their most dangerous, they have to be self-righteous and in power, e.g.:
- https://en.wikipedia.org/wiki/Caedite_eos._Novit_enim_Dominu....or:
- https://km.wikipedia.org/wiki/ប្រជាជនថ្មីI think you and your colleagues should sit back and take it easy, maybe have a few beers every lunchtime, install some video games on the company PCs, anything you can get away with. Don't get fired (because then you'll be replaced by keen new hires), just do the minimum acceptable and feel good about that karma you're accumulating as a brake on evil.
Do these people have even minimal self-awareness?
Much more common for OpenAI, because you lose all your vested equity if you talk negatively about OpenAI after leaving.
There is a reason why there was a cult-like behaviour on X amongst the employees in supporting to bringing back Sam as CEO when he was kicked out by the OpenAI board of directors at the time.
"OpenAI is nothing without it's people"
All of "AGI" (which actually was the lamborghinis, penthouses, villas and mansions for the employees) was all on the line and on hold if that equity went to 0 or would be denied selling their equity if they openly criticized OpenAI after they left.
I'd have sounded cult-like too under those conditions (but I also don't believe AGI is a thing, so would not have a countervailing cult belief system to weigh against that behavior).
Why not? I don't think we're anywhere close, but there are no physical limitations I can see that prevent AGI.
It's not impossible in the same way our current understanding indicates FTL travel or time travel is.
My main reason for thinking general intelligence is not a thing is similar to how Turing completeness is not a thing. You can conceptualize a Turing machine, but you can't actually build one for real. I think actual general intelligence would require an infinite brain.
That's actually a great point which I'd never heard before. I agree that it's very likely that us humans do not really have GI, but rather only the intelligence that evolved stochastically to better favour our existence and reproduction, with all its positive and negative spandrels[0]. We can call that human intelligence (HI).
However, even if our "general" intelligence is a mirage, surely what most people imagine when they talk about 'AGI' is actually AHI, as in an artificial intelligence that has the same characteristics as human intelligence that in their own hubris they believe is general. Or are you making a harder argument, that human intelligence may not actually have the ability to create AHI?
[0] https://en.wikipedia.org/wiki/Spandrel_(biology)
In this formulation, it’s pretty much as impossible as time travel, really.
I don't see how claiming that intelligence is multi-faceted makes AGI (the A is 'artificial' remember) impossible.
Even if _human_ intelligence requires eating yogurt for your gut biome, that doesn't preclude an artificial copy that's good enough.
Like, a dog is very intelligent, a dog can fetch and shake hands because of years of breeding, training, and maybe from having a certain gut biome. Boston Dynamics did not have to understand a single cell of the dog's stomach lining in order to make dog-robots perfectly capable of fetching and shaking hands.
I get that you're saying "yes, we've fully mapped the neurons of a fruit fly and can accurately simulate and predict how a fruit fly's brain's neurons will activate, and can create statistical analysis of fruit-fly behavior that lets us accurately predict their action for much cheaper even without the brain scan, but human brains are unique in a way where it is impossible to make any sort of simulation or prediction or facsimile that is 'good enough' because you also need to first take some bacteria from one of peter thiel's blood boys and shove it in the computer, and if we don't then we can't even begin to make a facsimile of intelligence". I just don't buy it.
The tender offer limitations still are, last I heard.
Sure, maybe OA can no longer cancel your vested equity for $0... but how valuable is (non-dividend-paying) equity you can't sell? (How do you even borrow against it, say?)
(It would be a pretty fake solution if equity cancellation was halted, but equity could still be frozen. Cancelled and frozen are de facto identical until the first dividend payment, which could take decades.)
OpenAI will certainly punish you for this and most likely make an example out of you, regardless of the outcome.
The goal is corporate punishment, not the rule of the law.
Considering the high stakes, money, and undoubtedly the ego involved, the writer might have acquired a few bruises along the way, or might have lost out on some political in fights (remember how they mentioned they built multiple Codex prototypes, it must've sucked to see some other people's version chosen instead of your own).
Another possible explanation is that the writer's just had enough - enough money to last a lifetime, just started a family, made his mark on the world, and was no longer compelled (or have been able to) keep up with methed-up fresh college grads.
Well it depends on people’s mindset. It’s like doing a hackathon and not winning. Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.
…but of course not everybody likes to go to hackathons
That kind of ambition feels like the result of Bill Gates pushing Altman to the limit and Altman rising to the challenge. The famous "Gates demo" during the GPT‑2 days comes to mind.
Having said that, the entire article reads more like a puff piece than an honest reflection.
We're talking about Sam Altman here, right, the dude behind Worldcoin? A literal bond-villainesque biological data harvesting scheme?
I'd be more worried about the guy who tweeted “If this works, I’m treating myself to a volcano lair. It’s time.” and more recently wore a custom T-shirt that implies he's like Vito Corleone.
Or you could realize what those guys all have in common and be worried about the systems that enable them because the problem isn't a guy but a system enabling those guys to become everyone's problem.
I don't mind "Vito Corleone" joking about a volcano lair. I mind him interfering in my country's elections and politics. I shouldn't have to worry about the antics of a guy building rockets that explode and cars that can chop off your fingers because I live in a country that can protect me from those things becoming my problem, but because we have the same underlying systems I do have to worry about him because his political power is easily transferrable to any other country including mine.
This would still be true if it were a different guy. Heck, Thiel is landing contracts with his surveillance tech in my country despite the foreign politics of the US making it an obvious national and economic security risk and don't get me started on Bezos - there's plenty of "guys" already.
Not that you're wrong about the systems, just that if it was as easy as changing these systems because we can tell they're bad and allow corruption, the Enlightenment wouldn't have managed to mess up with both Smith and Marx.
> I returned early from my paternity leave to help participate in the Codex launch.
10 years from now, the significance of having participated in that launch will be ridiculously small (unless you tell yourself that it was a pivotal moment of your life, even if it objectively wasn't) versus those first weeks with your newborn will never come back. Kudos to your partner though.
The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.
I was at a company that turned into the most toxic place I had ever worked due to a CEO who decided to randomly get involved with projects, yell at people, and even fire some people on the spot.
Yet a lot of people wrote glowing stories about their time at the company on blogs or LinkedIn because it was beneficial for their future job search.
> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
For the posts that make HN I rarely see it that way. The recent trend is for passionate employees who really wanted to make a company work to lament how sad it was that the company or department was failing.
Yeah I had to re-read the sentence.
The positive "Farewell" post is indeed the norm. Especially so from well known, top level people in a company.
I don't think people who work on products that spy on people, create addiction or worse are as naïve as you portrayed them.
The operative word is “trying”. You can “try” to do the right thing but find yourself restricted by various constraints. If an employee actually did the right thing (e.g. publish the weights of all their models, or shed light on how they were trained and on what), they get fired. If the CEO or similarly high-ranking exec actually did the right thing, the company would lose out on profits. So, rationalization is all they can do. “I'm trying to do the right thing, but.” “People don't see the big picture because they're not CEOs and don't understand the constraints.”
Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.
FWIW, I have positive experiences about many of my former employers. Not all of them, but many of them.
Sure, but this bit really makes me wonder if I'd like to see what the writer is prepared to do to other people to get to his payday:
"Nabeel Quereshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI"
> Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
Those are the easy cases, and correspondingly, you don't see much of those - or at least few are paying attention to companies talking like that. This is distinct from saying "X is going to directly benefit the society, and we're merely charging for it as fair compensation of our efforts, much like a baker charges you for the bread" or variants of it.
This is much closer to what most tech companies try to argue, and the distinction seems to escape a lot of otherwise seemingly sharp people. In threads like this, I surprisingly often end up defending tech companies against such strawmen - because come on, if we want positive change, then making up a simpler but baseless problem, calling it out, and declaring victory, isn't helping to improve anything (but it sure does drive engagement on-line, making advertisers happy; a big part of why press does this too on a routine basis).
And yes, this applies to this specific case of OpenAI as well. They're not claiming "LLMs are going to indirectly benefit the society because we're going to get rich off them, and then use that money to fund lots of nice things". They're just saying, "here, look at ChatGPT, we believe you'll find it useful, and we want to keep doing R&D in this direction, because we think it'll directly benefit society". They may be wrong about it, or they may even knowingly lie about those benefits - but this is not trickle-down economics v2.0, SaaS edition.
I liked my jobs and bosses!
I mean, that's a leap. There could be a bond villain that sets up incentives such that people who rationalize the way they want is who gets promoted / their voice amplified. Just because individual workers generally seem like they're trying to do the best thing doesn't mean the organization is set up specifically and intentionally to make certain kinds of "shady" decisions.
This is a great insight. But if we think a bit deeper about why that happens, I land on because there is nobody forcing anyone to do the right thing. Our governments and laws are geared more towards preventing people from doing the wrong thing, which of course can only be identified once someone has done the wrong thing and we can see the consequences and prove that it was indeed the wrong thing. Sometimes we fail to even do that.
Some points that stood out to me:
- Progress is iterative and driven by a seemingly bottom up, meritocratic approach. Not a top down master plan. Essentially, good ideas can come from anywhere and leaders are promoted based on execution and quality of ideas, not political skill.
- People seem empowered to build things without asking permission there, which seems like it leads to multiple parallel projects with the promising ones gaining resources.
- People there have good intentions. Despite public criticism, they are genuinely trying to do the right thing and navigate the immense responsibility they hold.
- Product is deeply influenced by public sentiment, or more bluntly, the company "runs on twitter vibes."
- The sheer cost of GPUs changes everything. It is the single factor shaping financial and engineering priorities. The expense for computing power is so immense that it makes almost every other infrastructure cost a "rounding error."
- I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA), with each organisation's unique culture shaping its approach to AGI.
Wouldn't want to forget Meta which also has consumer product DNA. They literally championed the act of making the consumer the product.
[0] https://techcrunch.com/2025/07/14/mark-zuckerberg-says-meta-...
Twitter is a one-way communication tool. I doubt they're using it to create a feedback loop with users, maybe just to analyse their sentiment after a release?
The entire article reads more like a puff piece than an honest reflection. Those of us who live outside the US are more sceptical, especially after everything revealed about OpenAI in the book Empire of AI.
https://en.wikipedia.org/wiki/Pascal%27s_wager#Analysis_with...
Which god should I believe in, though? There are so many.
And what if I pick the wrong god?
https://en.wikipedia.org/wiki/Pascal%27s_mugging
Not sure how you can say this so confidently. Many would argue they're already pretty close, at least on a short time horizon.
There is a decade + worth of implementation details and new techniques to invent before we have something functionally equivalent to Jarvis.
There was nothing hypothesized about next-token prediction and emergent properties (they didn't know scale would allow it to generalize for sure). What if it's true is part of LLMs story, there is a mystical element here.
Nobody ever hypothesized it before it happened? Hard to believe.
It is somewhat parallel to the story of Columbus looking for India but ending up in America.
Some would say it still hasn't (to an agreeable level).
But I'd go further: even abilities that do appear "emergent" often aren't that mysterious when you consider the training data. Take instruction following - it seems magical that models can suddenly follow instructions they weren't explicitly trained for, but modern LLMs are trained on massive instruction-following datasets (RLHF, constitutional AI, etc.). The model is literally predicting what it was trained on. Same with chain-of-thought reasoning - these models have seen millions of examples of step-by-step reasoning in their training data.
The real question isn't whether these abilities are "emergent" but whether we're measuring the right things and being honest about what our training data contains. A lot of seemingly surprising capabilities become much less surprising when you audit what was actually in the training corpus.
There's so much compression / time-dilation in the industry: large projects are pushed out and released in weeks; careers are made in months.
Worried about how sustainable this is for its people, given the risk of burnout.
But when I sink my teeth into something interesting and important (to me) for a few weeks’ or months’ nonstop sprint, I’d say no to anyone trying to rein me in, too!
Speaking only for myself, I can recognize those kinds of projects as they first start to make my mind twitch. I know ahead of time that I’ll have no gas left the tank by the end, and I plan accordingly.
Luckily I’ve found a community who relate to the world and each other that way too. Often those projects aren’t materially rewarding, but the few that are (combined with very modest material needs) sustain the others.
That just turns out to be the kind of person who likes to be around me, and I around them. It’s something I wish I had been more deliberate about cultivating earlier in my life, but not the sort of thing I regret.
In my case that’s a lot of artists/writers/hackers, a fair number of clergy, and people working in service to others. People quietly doing cool stuff in boring or difficult places… people whose all-out sprints result in ambiguity or failure at least as often as they do success. Very few rich people, very few who seek recognition.
The flip side is that neither I nor my social circles are all that good at consistency—but we all kind of expect and tolerate that about each other. And there’s lots of “normal” stuff I’m not part of, which I probably could have been if I had tried. I don’t know what that means to the business-minded people around here, but I imagine it includes things like corporate and nonprofit boards, attending sports events in stadia, whatever golf people do, retail politics, Society Clubs For Respectable People, “Summering,” owning rich people stuff like a house or a car—which is fine with me!
More than enough is too much :)
Being passionate about something and giving yourself to a project can be amazing, but you need to have the bandwidth to do it without the people you care about suffering because of that choice.
i.e. Silicon Valley "grind culture".
This is what ex-employees said in Empire of AI, and it's the reason Amodei and Kaplan left OpenAI to start Anthropic.
Obvious priorities there.
There are plenty worse than that. The storied dramatic fiction parent missing out on a kid's life is much better than what a lot of children have.
Yet, all kids grow up, and the greatest factor determining their overall well-being through life is socioeconomic status, not how many hours a father was present.
About the guy: I think if it’s just a one time thing it’s ok but the way he presents himself gives reason for doubt
It's debatable whether a parent always needs to "lead by example": for example, I've never played hockey, but I introduced my son to it, and he played for a while (until injuries made us reconsider and he stopped). For mental well-being, make sure to not display your worst emotions in front of your kids - they will definitely notice, and will probably carry it for the rest of their lives.
This guy is young. He can experience all that again, if it is that much of a failure, and he really wants to.
Sure, there are ethical issues here, but really, they can be offset by restitution, lets be honest.
people conflate the terms "burnout" and "overwork" because they seem semantically similar, but they are very different.
you can fix overwork with a vacation. burnout is a deeper existential wound.
my worst bout of burnout actually came in a cushy job where i was consistently underworked but felt no autonomy or sense of purpose for why we were doing the things we were doing.
Something about youth being wasted on young.
You can love what you do but if you do more of it than is sustainable because of external pressures then you will burn out. Enjoying your work is not a vaccine against burnout. I'd actually argue that people who love what they do are more likely to have trouble finding that balance. The person who hates what they do usually can't be motivated to do more than the minimum required of them.
Also this is one of a few examples I've read lately of "oh look at all this hard work I did", ignoring that they had a newborn and someone else actually did all of the hard work.
I don’t delight in anybody’s suffering or burnout. But I do feel relief when somebody is suffering from the pace or intensity, and alleviates their suffering by striking a more sustainable balance for them.
I feel like even people energized by efforts like that pay the piper: after such a period I for one “lay fallow”—tending to extended family and community, doing phone-it-in “day job” stuff, being in nature—for almost as long as the creative binge itself lasted.
I think there are a lot of people that love their craft but are in specific working conditions that lead to burnout, and all I was saying is that I don't think it means they love their craft any less.
Well given the amount of money OpenAI pays their engineers, this is what it comes with. It tells you that this is not a daycare or for coasters or for the faint of heart, especially at a startup at the epicenter of AI competition.
There is now a massive queue of lots of desperate 'software engineers' ready to kill for a job at OpenAI and will not tolerate the word "burnout" and might even work 24 hours to keep the job away from others.
For those who love what they do, the word "burnout" doesn't exist for them.
This sets off my red flags: companies that say they are meritocratic, flat etc., often have invisible structures that favor the majority. Valve Corp is a famous example for that where this leads to many problems, see https://www.pcgamer.com/valves-unusual-corporate-structure-c...
>It sounds like a wonderful place to work, free from hierarchy and bureaucracy. However, according to a new video by People Make Games (a channel dedicated to investigative game journalism created by Chris Bratt and Anni Sayers), Valve employees, both former and current, say it's resulted in a workplace two of them compared to The Lord of The Flies.
Could work with a bunch of similarly skilled people in a narrow niche
Note he was specifically on the team that was launching OpenAI's version of a coding agent, so I imagine the numbers before that product existed could be very different to the numbers after.
Lots of good info in the post, surprised he was able to share so much publicly. I would have kept most of the business process info secret.
Edit: NVM. That 78k pull requests is for all users of Codex, not all engineers of Codex.
- The company was a little over 1,000 people. One year later, it is over 3,000.
- Changes direction on a dime.
- Very secretive place.
With the added "everything is a rounding error compared to GPU cost" and "this creates a lot of strange-looking code because there are so many ways you can write Python".
Is not something that is going to last.
https://github.com/sponsors/tiangolo#sponsors
https://github.com/fastapi/fastapi?tab=readme-ov-file#sponso...
Considering all the people who led the different safety teams have left or been fired, Superalignment has been a total bust and the various accounts from other employees about the lack of support for safety work I find this statement incredibly out of touch and borderline intentionally misleading.
https://www.reddit.com/r/ControlProblem/comments/1iyb7ov/key...
https://www.vox.com/future-perfect/2024/5/17/24158403/openai...
https://fortune.com/2025/01/28/openai-researcher-steven-adle...
https://www.centeraipolicy.org/work/openai-safety-teams-depa...
Is that why they have a dozens of different models?
> Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering.
I don't think the Sam/Board drama confirms this.
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.
Did you thank your OpenAI overlords for letting you access their sacred latest models?
+-+-
This reads like an Ad for Open AI or an attempt by the author to court them again? I am not sure how anyone can take his words seriously.
I don't know what's the rationale for not hiring tech writers other than nobody suggesting it yet, which is sad. Great dev tools require great docs, and great docs require teams that own them and grow them as a product.
People look at it as a cost a and nothing else.
The comparison here should clearly be with the other frontier model providers: Anthropic, Google, and potentially Deepseek and xAI.
Comparing them gives the exact opposite conclusion - OpenAI is the only model provider that gates API access to their frontier models behind draconic identity verification (also, Worldcoin anyone?). Anthropic and Google do not do this.
OpenAI hides their model's CoT (inference-time compute, thinking). Anthropic to this day shows their CoT on all of their models.
Making it pretty obvious this is just someone patting themselves on the back and doing some marketing.
CloseAI.
Probably because Deepseek trained a student model off their frontier model.
When I go to AWS it looks similar except role assignments can't be scoped, so needs more duplication and maintenance. In that way Azure seems nicer. In everything else, it seems pretty equivalent.
But I see it catching flak occasionally on HN, so curious what others dislike.
This is a very interesting nugget, and if accurate this could become their Achilles heel.
Most top-of-their-field researchers are on top of their field because they really love it, and are willing to sink insane amount of hours into doing things they love.
If anything about OpenAI which should bothers people is how they fake to be blind to the consequences because of "the race". Leveraging the decision IF and WHAT should be done to the top heads only never worked well.
I doubt many people would say something contrary to this about their (former) colleagues, which means we should always take this with a (large) grain of salt.
Do I think (most) AT&T employees wanted to let the NSA spy on us? Probably not. Google engineers and ICE? Palantir and.. well idk i think everyone there knows what Palantir does.
That is literally how openAI gets data for fine-tuning it's models, by testing it on real users and letting them supply data and use cases. (tool calling, computer use, thinking, all of these were championed by people outside and they had the data)
Not that unusual nowadays. I'd wager every tech company founded in the last ~10 years works this way. And many of the older ones have moved off email as well.
Why go through all that? Instead what would have been a much better scenario is openai carefully assessing different approaches to agentic coding and releasing a more fully baked product with solid differentiation. Even Amazon just did that with Kiro
Maybe you’re thinking of the confusingly named Codex CLI?
That’s ok.
Just don’t complain about the cost of daycare, private school tuition, or your parents senior home/medical bills.
How does any of this relates to the amount of hours one works?
a research manager there coauthored this under-hyped book: https://engineeringideas.substack.com/p/review-of-why-greatn...
What i haven't seen much is the split between eng and research and how people within the company are thinking about AGI and the future, workforce, etc. Is it the usual SF wonderland or is there an OAI specific value alignment once someone is working there.
To quote Jonathan Nightingale from his famous thread on how Google sabotaged Mozilla [1]:
--- start quote ---
The question is not whether individual sidewalk labs people have pure motives. I know some of them, just like I know plenty on the Chrome team. They’re great people. But focus on the behaviour of the organism as a whole. At the macro level, google/alphabet is very intentional.
--- end quote ---
Replace that with OpenAI
[1] https://archive.is/2019.04.15-165942/https://twitter.com/joh...
CosmosDB is trustworthy? Everyone I know that used CosmosDB ended up rewriting their code because of throttling.
I know a few other teams when I was working elsewhere that had to migrate off Spanner due to throttling and random hiccups, though if Google uses it for zanzibar, they must not have that problem internally. Maybe all these companies increase throttling limits for first party use cases.
My current team uses dynamo, which has also given throttling issues, but generally only when they try to do things that it's not designed for (bulk updating a bunch of records in the same partition). Other than that, it seems reliable (incredibly consistent low latencies) and a bit cheaper than my experience with cosmos, though with fewer features.
They all seem to have their own pros and cons in my experience.
>...
>OpenAI is also a more serious place than you might expect, in part because the stakes feel really high. On the one hand, there's the goal of building AGI–which means there is a lot to get right.
I'm kind of surprised people are still drinking this AGI Koolaid
They likely won't wait even for that. Because that itself is still really far off.
Some of the details seem rather sensitive to me.
I'm not sure if the essay is going to stay up for long, given how "secretive" OpenAI is claimed to be.
This past week I canceled my $20 subscription to GPT, advocated my friends do the same (i got them hooked) and just will be using Gemini from now on. It can create AI maps instantly for road trip travel routes to planning creek tubing trips and more. GPT does not do maps and I was paying $20 for it while Gemini is free? Bye!
Further and more important this guy says in his blog he is happy to help with the destruction of our (white collar) society that will cause many MANY financial to emotional pain while he lives high off the hog? Impending 2030 depression .. 100 years later Im unfortunately betting!
Now AI could indeed help us cure disease but if the major are destitute while a few hundreds or so live high of the hog the benefits of AI are canceled out.
AI can definitely do the job ten people use to yet NOW its just one person typing into a text prompt to complete the tasks ten use.
Why are we here..sprinting towards this goal of destruction let China destory themselves!!!
This paragraph doesn't make any sense. If you read a lot of Zvi or LessWrong, the misaligned intelligence explosion is the safety risk you're thinking of! So readers "guesses" are actually right that OpenAI isn't really following Sam Altman's:
"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could."[0]
[0] https://blog.samaltman.com/machine-intelligence-part-1
Sleeping on Keen Technology I see
As impressive as LLMs can be at one-shotting certain kinds of tasks, working in a sprawling production codebase like the one described with tight performance constraints, subtle interdependencies, cross-cutting architectural concerns, etc. still requires a human driving most of the time. LLMs help a lot for this kind of work, but the human is either carefully assimilating their output or carefully choosing spots where (with detailed prompts) they can generate usable code directly.
Again, just a guess, but this my impression of how experienced engineers (including myself) are using LLMs in big/nontrivial codebases, and I’ve seen no indication that engineering processes at the labs are much different from the wider industry.
as an early stage founder, i worry about the following a lot.
- changing directions fast when i lose conviction - things breaking in production - and about speed, or the lack of it
I learned to actually not worry about the first two.
But if OpenAI shipped Codex in 7 weeks, small startups have lost the speed advantage they had. Big reminder to figure out better ways to solve for speed.
An actual offering made to the public that can be paid for.
I wonder one year is enough time for programmers to understand codebase, let alone meaningfully contributing patches? But then we see that job hopping is increasing common, which results in the drop in product qualities. I wonder what values are the job hoppers adding to the company.
I appreciate where the author is coming from, but I would have just left this part out. If there is anything I've learned during my time in tech (ESPECIALLY in the Bay Area) it's that the people you didn't meet are absolutely angling to do the wrong thing(TM).
I also don't trust that people within the system can assess if what they're doing is good or not. I've talked with higher ups in fashion companies who genuinely believe their company is actually doing so much great work for the environment when they basically invented fast-fashion. I've felt it first hand personally how my mind slowly warped itself into believing that ad-tech isn't so bad for the world when I worked for an ad-tech company, and only after leaving did I realize how wrong I was.
It's weird to say, but some people simply are not tuned in to the long term ramifications of their work. The "fuck you I got mine" mentality is even at play in many outwardly appearing progressive communities where short term gain is a moral imperative above doing good.
Edit: And that's to say nothing of the very generous pay...
Where is this AGI that you've built then? The reason for the very existence of that term is an acknowledgement that what's hyped today as AI isn't actually what AI used to mean, but the hype cycle VC money depends on using the term AI, so a new term was invented to denote the thing the old term used to denote. Do we need yet another term because AGI is about to get burned the same way?
> and LLMs are easily the technological innovation of the decade.
Sorry, what? I'm sure it feels that way from some corners of that particular tech bubble, but my 73 year old mom's life is not impacted by LLMs at all - well, except for when she opens her facebook feed once a month and gets blasted with tons of fake BS. Really something to be proud of for us as an industry? A tech breakthrough of the last decade that might have literally saved her life were mRNA vaccines though, and I could likely come up with more examples if I thought about it for more than 3 seconds.
One thing I was interested to read but didn't find in your post is: does everyone believe in the vision that the leadership has shared publicly, e.g. [1]? Is there some skepticism that the current path leads to AGI, or has everyone drunk the Kool-Aid? If there is some dissent, how is it handled internally?
[1]: https://blog.samaltman.com/the-gentle-singularity
The hype around this tech strongly promotes the narrative that we're close to exponential growth, and that AGI is right around the corner. That pretty soon AI will be curing diseases, eradicating poverty, and powering humanoid robots. These scenarios are featured in the AI 2027 predictions.
I'm very skeptical of this based on my own experience with these tools, and rudimentary understanding of how they work. I'm frankly even opposed to labeling them as intelligent in the same sense that we think about human intelligence. There are certainly many potentially useful applications of this technology that are worth exploring, but the current ones are awfully underwhelming, and the hype to make them seem more than they are is exhausting. Not to mention that their biggest potential to further degrade public discourse and overwhelm all our communication channels with even more spam and disinformation is largely being ignored. AI companies love to talk about alignment and safety, yet these more immediate threats are never addressed.
Anyway, it's good to know that there are disagreements about the impact and timelines even inside OpenAI. It will be interesting to see how this plays out, if nothing else.
What definition of AGI is used at OpenAI?
My definition: AGI will be here when you can put it in a robot body in the real word and interact with it like you would a person. Ask it to drive your car or fold your laundry or make a mai tai and if it doesn’t know how to do that, you show it, and then it can.
https://openai.com/charter/
That makes me wonder what kinds of work aren’t economically valuable? Would that be services generally provided by government?
Imagine I built a robot dog that behaved just like a biological dog. It bonds with people, can be trained, shows emotion, communicates, likes to play, likes to work, solves problems, understands social cues, and is loyal. IMHO, that would qualify as an AGI even though it isn't writing essays or producing business plans.
I'm not sure it would, though. The "G" in AGI stands for "General", which a dog obviously can't showcase. The comparison must be done against humans, since the goal is to ultimately have the system perform human tasks.
The definition mentioned by tedsanders seems adequate to me. Most of the terms are fuzzy ("most", "outperform"), but limiting the criteria to economic value narrows it down to a measurable metric. Of course, this could be exploited by building a system that optimizes for financial gain over everything else, but this wouldn't be acceptable.
The actual definition is not that important, IMO. AGI, if it happens, won't appear suddenly from a singular event, but as a gradual process until it becomes widely accepted that we have reached it. The impact on society and our lives would be impossible to ignore at that point. The problem with this is that along the way there will be charlatans and grifters shouting from the rooftops that they've already cracked it, but this is nothing new.
That isn't obvious to me at all. If you don't like the dog analogy, lets try another: Does a human toddler qualify as having general intelligence?
I would say... yes. But with the strong caveat that when used within the context of AGI, the individual/system should be able to showcase that intelligence, and the results should be comparable to those of a neurotypical adult human. Both a dog and a toddler can show signs of intelligence when compared to individuals of their similar nature, but not to an adult human, which is the criteria for AGI.
This is why I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI. It could certainly be considered a step in that direction, but not strictly AGI.
But again, I don't think that a strict definition of AGI is helpful or necessary. The impact of a system with such capabilities would be impossible to deny, so a clear definition doesn't really matter.
Most services provided by governments are economically valuable, as they provide infrastructure that allow individual actors to perform better, increasing collective economic output. (For e.g. high-expenditure infrastructure it could be quite easily argued though that they are not economically profitable.)
My question was whether everyone believes this vision that ASI is "close", and more broadly whether this path leads to AGI.
> If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?
People can have all sorts of reasons for working with a company. They might want to work on cutting-edge tech with smart people and infinite resources, for investment or prestige, but not necessarily buy into the overarching vision. I'm just wondering whether such a profile exists within OpenAI, and if so, how it is handled.
Discounting Chinese labs entirely for agi seems like a misstep though. I find it hard to believe there won’t be at least a couple contenders
Umm... I don't think Zuckerberg would agree with this statement.
The choice of name continues providing incredible amusement.
:-)
- no such thing as open ai as a decision unit, there are people at the top who decide + shareholder pressure
- a narcissist optimizes for himself and has no affective empathy: cruel, extremely selfish, image oriented, power hungry, liar etc.
- having no structure means having a hidden structure and hence real power of the few above, no accountability (narcissists love this too)
- framing this meritocracy is a positive framing, it is very easy to hide incompetence
- people want to do good, great, perfect naive and you can utilize this motivation to let them burn out and work for YOUR goals as a leader... another narcissist is great doing this to people and the richest man in the world per share price
all in all, having this kind of mindset is good for start ups or get the lucky punch, but AGI will be brought by Anthropic, Google and Ilya :) you will not have series of lucky punches, you have to have a direction
I think Sam Altman, a terrible narcissist, uses open AI to feel great and he has no strategy but using others for their own benefit, because narcissists dont care, they just care about their image and power... and that is why open AI goes down... bundling with Microsoft was a big red flag in the first place...
when i think of openAI, it is a bit like Netscape Navigator + Internet Explorer in one :)
Anthropic is like Safari + Brave
Google is like ... yeah :)
Ilya is like Opera/Vivaldi or so
Kind of interesting that folks aren't impressed by Azure's offering. I wonder if OpenAI is handicapped by that as well, compared to being on AWS or GCP.
Of course they are. People in orgs like that are passionate, they want to work on the tech cause LLMs are once-in-a-lifetime tech breakthrough. But they don’t realize enough that they’re working for bad people. Ultimately all of that tech is in the hands of Altman, and that guy hasn’t proven to be the saint he hopes to become.
it was however interesting to know that it isn't just Meta poaching OpenAI, but the reverse also happened.
Any gibberish on any company's behalf of "poaching" is nonsense regardless IMO.
Seems like an awful place to be.
this does not sound fun lol
I really can't see a person with at least minimal self-awareness talking their own work up this much. Give me a break dude. Plus, you haven't built AGI yet.
Can't believe there's so little critique of this post here. It's incredibly self-serving.
That's what he did at Segment even in the later stages.
https://news.ycombinator.com/newsguidelines.html
https://news.ycombinator.com/newsguidelines.html
> It's hard to go from being a founder of your own thing to an employee at a 3,000-person organization. Right now I'm craving a fresh start.
This is just wholly irrational for someone whose credentials indicate someone who is capable of applying critical thinking towards accomplishing their goals. People who operate at that level don't often act on impulse or suddenly realize they want to do something different. It seems much more likely he intentionally planned to give himself a year of vacation at OpenAI, which allows him to hedge a bit while taking a breather before jumping back into being a founder.
Is this essentially speculation? Yes. Is it cynical to assume he's acting cynically? Yes. Speculation on his true motives is necessary because otherwise we'll never get confirmation, short of him openly admitting to it (which is still fraught). We have to look at behaviors and actions and assess likelihoods from there.
My criticism is that that's a detail that is being obscured and instead other explanations for leaving are being presented (cynically IMO).
There might be some marginal drama to scrape up here if the post was negative about OpenAI (I'd still be complaining about trying to whip up drama where there isn't any), but it's kind of glowing about them.
It's more likely that he was there to see how OpenAI was run so he could learn and something similar on his own after.
... then the next paragraph
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.
not if you're trying to replace therapists with chatbots, sorry
OpenAI is currently selected for the brightest and young excited minds, (and a lot of money).. bright, young (as in full of energy) and excited people will work well anywhere- esp if given a fair amount of autonomy.
Young people talking about how hard they worked is not a sign of a great corp culture, just a sign that they are in the super excited stage of their careers
In the long run who knows, I tend to view these companies as groups of like minded people and groups of people change and the dynamic changees overnight -so if they can sustain that culture sure, but who knows..
The cadence we're talking about isn't sustainable --- has never been sustained anywhere --- but if insane sprints like this (1) produce intrinsically rewarding outcomes and (2) punctuate otherwise-sane work conditions, they can work out fine for the people involved.
It's completely legit to say you'd never take a job where this could be an expectation.
I worked there for a few years and Calvin is definitely more of the grounded engineering guy. He would introduced him as an engineer and just get talking code. He would spend most of his time with the SRE/core team trying to tackle the hardest technical problem at the company.
Is it considered young / early career in this field?
The bottom line is that scaling requires money and the only way to get that in the private sector is to lure those with money with the temptation they can multiply their wealth.
Things could have been different in a world before financial engineers bankrupted the US (the crises of enron, salomon bros, 2008 mortgage debacle all added hundreds of billions to us debt as the govt bought the ‘too big to fail’ kool-aid and bailed out wall street by indenturing main street). Now 1/4 of our budget is simply interest payment on this debt. There is no room for govt spending on a moonshot like AI. This environment in 1960 would have killed Kennedy’s inspirational moonshot of going to the moon while it was still an idea in his head in his post coital bliss with Marilyn at his side.
Today our govt needs money just like all the other scrooge-infected players in the tower of debt that capitalism has built.
Ironically it seems china has a better chance now. It seems its release of deep seek and the full set of parameters is giving it a veneer of altruistic benevolence that is slightly more believable than what we see here in the west. China may win simply on thermodynamic grounds. Training and research in DL consumes terawatt hours and hundreds of thousands of chips. Not only are the US models on older architectures (10-100x more energy inefficient) but the ‘competition’ of multiple players in the US multiplies the energy requirements.
Would govt oversight have been a good thing? Imagine if General Motors, westinghouse, bell labs, and ford competed in 1940 each with their own manhattan project to develop nuclear weapons ? Would the proliferation of nuclear have resulted in human extinction by now?
Will AI’s contribution to global warming be just as toxic global thermonuclear war?
These are the questions that come to mind after Hao’s historic summary.
I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.
What keeping AI free for everyone is akin to is keeping an addictive drug free for everyone so that it can be sold in larger quantities later.
One can argue that some technology is beneficial. A mosquito net made of plastic immediately improves one's comfort if out in the woods. But AI doesn't really offer any immediate TRUE improvement of life, only a bit more convenience in a world already saturated in it. It's past the point of diminishing returns for true life improvement and I think everyone deep down inside knows that, but is seduced by the nearly-magical quality of it because we are instinctually driven to seek out advantags and new information.
OK, if you're going to say things like this I'm going to insist you clarify which subset of "AI" you mean.
Presumably you're OK with the last few decades of machine learning algorithms for things like spam detection, search relevance etc.
I'll assume your problem is with the last few years of "generative AI" - a loose term for models that output text and images instead of purely being used for classification.
Are predictive text keyboards on a phone OK (tiny LLMs)? How about translation engines like Google Translate?
Vision LLMs to help with wildlife camera trap analysis? How about to help with visual impairments navigate the world?
I suspect your problem isn't with "AI", it's with the way specific AI systems are being built and applied. I think we can have much more constructive conversations if we move beyond blanket labeling "AI" as the problem.
2. I would argue that translation engines have their positives and negatives, but a lot of them are negative, because they lead to translators losing their jobs, and a loss in general for the magical qualities of language learning.
3. Predictive text: I think people should not be presented with possible next words, and think of them on their own, because that means they will be more thoughtful in their writing and less automatic. Also, with a higher barrier to writing something, they will probably write less and what they do write will be of greater significance.
4. I am against all LLMs, including wildlife camera trap analysis. There is an overabundance of hiding behind research when we really already know the problem fairly well. It's a fringe piece of conservation research anyway.
5. Visual impairments: one can always appeal to helping the disabled and impaired, but I think the tradeoff is not worth the technological enslavement.
6. My problem is categorically with AI, not with how it is applied, PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors to make the net effect always negative. It's human nature.
"PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors"
I think this is vibes based on bad headlines and no actual numbers (and tbf, founders/CEO's talking outta their a**). In my real-life experience the advantages of specifically generative AI far outweighs the disadvantages, by like a really large margin. I say this as someone academically trained on well modeled Dynamical systems (the opposite of Machine Learning). My team just lost. Badly.
Case-in-point: I work with language localization teams that have fully adopted LLM based translation services (our DeepL.com bills are huge), but we've only hired more translators and are processing more translations faster. It's just..not working out like we were told in the headlines. Doomsday Radiologist predictions [1], same thing.
[1]: https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiol...
We define bad actors in different ways. I also include people like tech workers, CEOs who program systems that take away large numbers of jobs. I already know people whose jobs were eroded based on AI.
In the real world, lots of people hate AI generated content. The advantages you speak of are only to those who are technically minded enough to gain greater material advantages from it, and we don't need the rich getting richer. The world doesn't need a bunch of techies getting richer from AI at the expense of people like translators, graphic designers, etc, losing their jobs.
And while you may have hired more translators, that is only temporary. Other places have fired them, and you will too once the machine becomes good enough. There will be a small bump of positive effects in the short term but the long term will be primarily bad, and it already is for many.
It may also be the narrative fed to actual employees, saying "You're losing your job because AI" is an easy way to direct anger away from your bad business decisions. If a business is shrinking, it's shrinking, AI was inconsequential. If a business is growing AI can only help. Whether it's growing or shrinking doesn't depend on AI, it depends on the market and leadership decision-making.
You and I both know none of this generative AI is good enough unsupervised (and realistically, with deep human edits). But they're still massive productivity boosts which have always been huge economic boosts to the middle-class.
Do I wish this tech could also be applied to real middle-class shortages (housing, supply-chain etc.), sure. And I think it will come.
I think AI classification and stuff like classification is probably okay but of course with that, as with all technologies, we should be cautious of how we use it as it can be used also in facial recognition, which in turn can be used to create a stronger police state.
Personally, my life has significantly improved in meaningful ways with AI. Apart from the obvious work benefits (I'm shipping code ~10x faster than pre-AI), LLMs act as my personal nutritionist, trainer, therapist, research assistant, executive assistant (triaging email, doing SEO-related work, researching purchases, etc.), and a much better/faster way to search for and synthesize information than my old method of using Google.
The benefits I've gotten are much more than conveniences and the only argument I can find that anyone else is worse off because of these benefits is that I don't hire junior developers anymore (at max I was working with 3 for a contracting job). At the same time, though, all of them are also using LLMs in similar ways for similar benefits (and working on their own projects) so I'd argue they're net much better off.
That doesn't necessitate feeling bad because the reaction to feel good or bad about something is a side effect of the sort of religious "good and evil" mentality that probably came about due to Christianity or something. But *regardless*, one should at least understand that because our world has reached a sufficient critical mass of complexity, even the things we do that we think are benign or helpful can have negative side effects.
I never claim that we should feel bad about that, but we should understand it and attempt to mitigate it nonetheless. And, where no mitigation is possible, we should also advocate for a better societal structure that will eventually, in years or decades, result in fewer deleterious side effects.
I don't think the takeaway was meant to really be about capitalism but more generally the complexity of the system. That's just me though.
you could just as well argue the internet, phones, tv, cars, all adhere to the exact same prisoner's dilemma situation you talk about. you could just as well use AI to rubber duck or ease your mental load than treat it like some rat-race to efficiency.
And we should indeed apply the logic to other inventions: some are more worth using than others, whereas in today's society, we just use all of them due to the mechanisms of the prisoner's dilemma. The Amish, on the other hand, apply deliberation on whether to use certain technologies, which is a far better approach.
its impossible to get benefit from the woods if youve brought a bug net, and you should stay out rather than ruining the woods for everyone
If someone says driving at 200mph is unsafe, then your argument is like saying "driving at any speed is unsafe". Fact is, you need to consider the magnitude and speed of the technology's power and movement, which you seem incapable of doing.
Edit: And I think you might dislike automobiles if you were one of the people living right next to a tyre factory in Brazil, which outputs an extremely disgusting rubber smell on an almost daily basis. Especially if you bought your house before the factory was built, and you don't drive much.
But you probably live in North America and don't give a darn about that.
So, while I agree in prinicple that it's nice to make decisions on one's own, I think it would be also nice to have the choice to avoid certain technologies that become difficult to avoid due to their entrenchment.
making human beings obsolete is not the right thing. nobody in openAI is doing the right thing.
in another part of the post he says safety teams work primarily on making sure the models dont say anything racist as well as limiting helpful tips on building weapons of terror… and that AGI safety is basically not a focus. i dont think this company should be allowed to exist. they dont have ANY right to threaten the existence and wellbeing of me and my kids!
Grok be like. okey. :))