History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives. The colonial empires proved it only a few centuries ago. The invading alien powers are fuelled by the inviting natives.
AI (and computing technology in general) is an alien, as it defies all worldly norms. It can have exact identical copies, can replicate, can exist everywhere, can communicate across huge distances without time lapse, can do huge amounts of work without time lapse, has no physical mass of its own, no respect for time, distance, mass, or thinking work, and is not a living thing but can think... Just the perfect alien-creature qualities.
Why are they allowed to invade Earth? Business goals, of course. To get a temporary edge over the competitors, until they acquire the same. But once everyone has the same AI, there is no going back. AI has established itself through the weak channels that are filled with greed, that can be bribed by giving toys (a business edge) in return for the keys to dominance over the human race.
>The invading alien powers are fuelled by the inviting natives.
And the massive amounts of people (software engineers, lawyers, doctors, etc) currently being paid as contractors to help train the next AI models. They're essentially the inviting natives who are being paid in trifles to tell them the secret ways of the natives farther inland. Sucking out all of the tribal knowledge of the industry like a vacuum.
>History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives.
Not true. Overwhelming technological advantage also works. As Hilaire Belloc put it:
Whatever happens, we have got
The Maxim gun, and they have not.
The AI arms race is a race for that kind of advantage. Whoever wins (assuming they don't overshoot and trigger the "everybody dies" ending) becomes de-facto king of the world. Everybody else is livestock.
I feel like if people keep using AI as a blanket term for "inequality" and "inequality accelerants," then yeah, it's "AI's" fault. When in reality the two need to be decoupled.
"Gleefully taking away people's livelihoods will be met with violence, and nothing good will come of it." - fixed.
I wholeheartedly agree with and encourage this kind of academic distinction. However...
Until people with billions of dollars behind them do something with that money to offset the financial hardship that they're knowingly - and gleefully - bringing to others... The distinction has no practical use.
(And before someone says "that's the government's job!", consider how much lobbying money is coming from CEOs and companies who know the domain best and are agitating for better financial and social safeguards for all. None, naturally.)
How much actual money do you think the “people with billions of dollars” have in comparison to the needs of the population as a whole? I think you’re very confused about where the actual income in the economy goes.
That is the question society is currently asking with articles like this one.
Given that (allegedly) "your salary" won't be the answer for a significant chunk of the population soon, and all that money will instead (allegedly) go to the bosses doing the firings, and the AI companies they employ instead.
If I understand what you're saying it's that as rich as they are, the amount of money the ultra-wealthy own just doesn't add up to nearly enough to give everyone a quality of life that they deserve / once had?
Perhaps what's happening is that, in their attempts to reach a personal all-time high in their bank accounts, the ultra-wealthy are destroying value and economic systems en masse, with little regard to the efficiency of their money-siphoning process?
It's kind of like a drug dealer selling brain-burning addictive substances to a few people on a street. Sure, they're going to extract a person's life savings to date, and whatever money that person can steal once they're addicted, but that value pales in comparison to what that person could have made over their career, what it could have made if properly invested, the cost of law enforcement to deal with these addicts, the cost of the stuff they destroy in their quest to get money to buy drugs, the opportunity cost of them not raising their kids to be productive members of society... It all just snowballs, all so some asshole can make a few bucks.
The ultra-wealthy are doing that shit where people burn acres of pristine forests to get some biochar -- but to the entire world.
Isn’t it strange
That princes and kings,
And clowns that caper
In sawdust rings,
And common people
Like you and me
Are builders for eternity?
Each is given a bag of tools,
A shapeless mass,
A book of rules;
And each must make-
Ere life is flown-
A stumbling block
Or a stepping stone.
Who is going to lobby to make it illegal? Our system is broken and won’t fix itself.
Inequality is going to continue to increase until society collapses. If we want a better world we need to prepare for this eventuality by building avenues of popular action to return power to the people. Once the oligarchs have fucked up enough people’s lives, popular action becomes a realistic way out of this mess.
> Until people with billions of dollars behind them do something with that money…
Or until actual people take the billions of dollars sitting behind those weak man-children. The US has fewer than 1000 billionaires now, and more than 300,000,000 people. That seems like a solvable problem.
How can you hope for anything better if you consider it an us versus them situation? When they say "We don't want to increase inequality" and the response is "We don't believe you". Where do you go from there?
It seems like a lot of people want a revolution so that they can rotate who will be able to take advantage of the vulnerable.
What are the suggestions for something better? I don't see a lot.
I'd like to see more suggestions of how things could work.
For example:
The Government could legislate that any increase in profits attributable to the use of AI is taxed at 75%. It's still an advantage for a company to adopt AI, but most of the gains go to the people. Aggressive taxation like this is most often criticised on the basis that it will stifle growth, but this is an area where pretty much everyone is saying things are moving too quickly, so that's just yet another positive effect.
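As a back-of-the-envelope sketch of that proposal (the profit figures here are made up, the 75% rate comes from the comment above, and defining which profits are "attributable to AI" is the genuinely hard part):

```python
# Illustrative split of AI-attributable profit gains under the proposed 75% tax.
# All figures are hypothetical; in practice, attributing a profit increase to
# AI specifically would be the contentious part of any such law.

def split_ai_gains(baseline_profit: float, profit_with_ai: float,
                   tax_rate: float = 0.75) -> tuple[float, float]:
    """Return (tax_collected, retained_by_company) on the AI-attributable gain."""
    gain = max(profit_with_ai - baseline_profit, 0.0)  # only the increase is taxed
    tax = gain * tax_rate
    return tax, gain - tax

# A company whose profit rises from $10M to $14M after adopting AI:
tax, retained = split_ai_gains(baseline_profit=10_000_000, profit_with_ai=14_000_000)
print(tax, retained)  # $3M to the public purse, $1M retained by the company
```

The company still keeps a quarter of the upside, so adoption remains rational, which is the commenter's point.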
The billionaires could start to earn trust by lobbying for laws and programs that help the poor and displaced. Put money into retraining programs to help people who lose their jobs. So far they seem to be doing the opposite: CEOs are publicly declaring 'fuck you, got mine' and leaving it at that.
Seriously. They can say they want to share their gains all they want, but I don't see them spending any lobbying money on things like universal income (and if Altman can afford to lobby for age verification laws, he can certainly afford to lobby for things that actually benefit society). The reality is they don't lobby for anything that would take wealth away from them, and any redistribution of wealth (such as a 75% tax rate) would by definition take wealth away from them.
You can, but then what? Do you judge what they say as if their perspective is the same as yours, and then conclude from that context that what they suggest could only come from an evil person? That seems to be what a lot of people do. What if they actually think what they are suggesting is the best thing for the world? How can you tell what is in their minds?
Alternately you could criticise their arguments instead of the people, and suggest an alternative.
I'm also not entirely certain that influencing public policy is something that is inherently bad. I know if I were deaf, I would like to have some influence on public policy about deafness issues.
Here's an idea for how to do that: treat frontier AI as a sort of 'common carrier'. The only business that frontier AI labs are allowed to conduct is selling raw tokens - no UI. Thus, 'claude code' would have to come from some other company. This would segment the AI industry, and, maybe, prevent a single entity (or small number of entities) from capturing all value.
If I had the answer to that I would probably be a politician instead of a systems eng, but off the top of my head: build out parallel economies at the state level, where people in the US actually live, ensuring QoL standards, then gradually renegotiate back up to the Federal level. It would require, well, united states eventually, but the general thrust is to shed corporate capture so that people see their government actually benefiting them and providing tangible life improvements in real time.
This is interesting to see, since on another HN post everyone is bemoaning how expensive it's getting to use frontier models because Anthropic is massively throttling Claude Pro and Max plans. That's certainly not going to become more accessible to us normal folk through taxation.
AI is actually a mass decrease in inequality, as much as the Gutenberg printing press was. It takes something that used to be the foremost example of pure bourgeois and intellectual privilege - the culture contained within millions of books and other instances of human creativity - and provides it to everyone for the cost of a few thousand bucks in hardware and a few watts of electricity.
I can't think of any period in time where it was so easy to go into business yourself and to generally have access to the same "means of production" as the biggest companies have.
If you want to use LLMs, you can either use cloud resources at what I think are really reasonable per-token prices compared to the value, or to set up your own server with an open-weights model at a comparable level of quality (though generally significantly slower tokens/s). In any case, you absolutely don't have to pay OpenAI/Anthropic/Google if you don't want to.
> AI is actually a mass decrease in inequality, as much as the Gutenberg printing press was. It takes something that used to be the foremost example of pure bourgeois and intellectual privilege - the culture contained within millions of books and other instances of human creativity[.]
I would rather claim that this is a proper description of shadow libraries [1].
Yup. This is why if you claim to espouse literally any form of egalitarian political belief while being upset about (open source) generative AI, I know you're a fraud/charlatan/intellectual bankrupt/ontologically evil.
Huggingface, Swartz et al have done more social/political good for this world than billions have.
Swartz died in 2013, years before LLMs. It is distasteful to put words in the mouths of the dead by invoking him here.
Even local AI concentrates power in the hands of a few, the few who can afford the hardware to run it, and the few who have the luxury of enough time and energy to devote to engaging with the intricate, technical rabbit hole of local models.
It's always people's fault. Blaming technology is the shortest-sighted take. People make it, and wittingly use it in disagreeable ways, because it earns them money.
There is something else that needs to change, which everyone is reluctant to admit or is struggling with internally.
That's ok; it's called conscious evolution. It hurts, but it will be ok someday. It's generational, so progress is always slower than one would hope. Just know that every step in the right direction is one, even if the entire world seems to disagree. Keep pushing for what you believe is right, and hopefully that's something which does not infringe on other people's capacity to live a happy life.
People currently assume AI will be an accelerant of inequality because all currently useful models (i.e. those potentially capable of mass labor disruption) are only able to run in multibillion dollar datacenters, with all returns accruing disproportionately to the oligarchs who own said datacenters.
I'm not sure this moat is inevitably perpetual. It's likely computing technology evolves to the point of being able to run frontier-level models on our phones and laptops. It's also likely that with diminishing marginal returns, future datacenter-level models will not be dramatically more capable than future local models. In that case, the power of AI would be (almost) fully democratized, obviating any oligarchic concentration of power. Everyone would have equal access to the ultimate means of production.
> Everyone would have equal access to the ultimate means of production.
You are right that AI can be a fully democratized commodity. The problem is that the current wealth inequality is not the result of AI. Musk became a trillion-seeking oligarch not because of AI; it is because the entire financial system is designed to extract wealth from everyone and concentrate it at the top. Democratic AI is not in their interest. There will be violence, but not because AI is supposedly a catalyst of inequality. It will be violence from the rich towards the poor, because democratic AI is not acceptable to them.
>It will be violence from the rich towards the poor, because democratic AI is not acceptable for them.
Unless the rich somehow manage to completely stifle the progress of consumer-level computing advancement (all chip manufacturers would just collude to quit selling to consumers?) and exert an iron-fisted control over the dissemination of software (when has this ever worked?), I'm not sure how they could control the democratization of AI.
The PC revolution in the 1990s is one of the core drivers of inequality, where the rich took almost all of the dividends from the vast productivity gains from personal computers as the prime development of Moore's law rocketed computers from 66 MHz to over 8 gigahertz.
Judging by the gleeful texts of CEOs, collapsed hiring, internal policy changes and pushes, and the additional decades of centralized political control, it's clear this is going to be even worse.
Highly recommend people learn the history of the Industrial Revolution. I recently discovered the Industrial Revolutions Podcast[1] and have been enjoying it. What's happening today isn't unprecedented. The pace of change that's happening IS similar to periods of the industrial revolution.
For example, the spinning jenny, overnight, basically put an entire craft industry of hand spinning into question. Probably more dramatically than anything Claude Code ever did.
It took A LOT and several world wars for brief periods of normalcy post WW2 - probably the exception, not the rule.
Thomas Piketty does indeed argue in Capital in the Twenty-First Century that the post-World War 2 period is an exception in terms of inequality being lower, while historically it is not, and that we are reverting to the mean of greater inequality these days. Yet people bemoan the idea of not being able to live off a single job, when in reality that was never guaranteed.
Much as we'd like that to be true ideally, does it happen (in terms of inequality reducing)? I see no evidence of that, it ebbs and flows in various time periods and civilizations. One can try to resist that reversion to the mean but they'd historically be proven wrong.
A lot of the magic of LLMs, I think, has been tarnished by these CEOs and other FAANG companies. It might have been a far more interesting world if they didn't bring "AI" or "AGI" into the conversation in such a politicized way.
The power of the tool itself will be overshadowed by the motivations of its real owner. I can be both impressed by its ability to empower me, and be scared of the fact that the tools will change hands sooner or later and be deployed at scale to serve a goal I cannot, at minimum, support.
When most engineers and Marvel fans watched Tony Stark in the Avengers films collaborating with Jarvis, they thought of Jarvis as "an AI with Google's knowledge that I can interact with". It's true that we're close to that level of interaction. However, the ultimate goal is to get as much as possible automated in Jarvis, to the point where Tony Stark is not needed, or Tony Stark can be replaced by anyone with a mouth.
In this example, Jarvis isn't the goal but a checkpoint. The goal is a genie, providing software and research to anyone who is loaded with money, and knows how to rub the metaphorical lamp the right way.
> the tools will change hands sooner or later and be deployed at scale to serve a goal I cannot, at minimum, support
Personally, I'd say the tools don't need to change hands at all. They are already in the hands of people who are deploying them at scale to serve goals I cannot and do not support.
The people running AI companies right now are some of the most evil motherfuckers on the planet
It'd be nice if they didn't use the terms at all, because I don't think they're useful, relevant, or real.
If we thought of all of this as 'stochastic data systems', then our heads would be in the right place: we'd think about it just as 'powerful software' that can be used for good or bad purposes, and the negative externalities would derive from our use of it, not from some inherent property.
On the other hand, "magical new systems that provide almost unlimited capacity for intelligent work" is probably a more functional mental model. Genie can give you 1000 wishes till you reach your session limit.
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Reverend Mother Gaius Helen Mohiam, Dune
Magic or no, ultimately "AI" leads to labour displacement and it's just a continuation of the much broader trend of automation driven by computers.
Labour displacement leads to an erosion of standards of living and in a world that ties purpose to work is an existential threat on a very practical level.
It was always going to be met with violence once it became more than a curiosity for tinkerers.
b) Watch as the value of human life rapidly approaches zero.
---
Though I'd expand this by adding "technically alive" is not a very good standard to aim for. Ostensibly we're already heading for something like poverty level UBI + living in pod + eating the proverbial bugs. We need a level above that!
A great exploration of the pitfalls of "preserve humanity" as a reward function is the video game SOMA. I think you also need "preserve dignity" to make the life actually worth living.
(Path `a` is not without its pitfalls: what lack of survival pressure might do to the human culture and genome, I leave as an exercise for the reader! But path `b` I think we already have enough examples of, to know better...)
>Labour displacement leads to an erosion of standards of living
The two biggest labor displacements in human history were the agricultural and industrial revolutions, both of which resulted in enormous gains in human living standards. Can you think of a mass labor displacement that resulted in an overall erosion of living standards? I cannot.
AI is different. It promises to be able to do everything humans can, but better and more cheaply. When AIs can do every human job cheaper than the subsistence cost of employing a human, humans will be economically obsolete and worthless.
Then there's the minor issue of AI deciding to just wipe us out because we're in the way.
Taking everything together, AI more powerful than that which currently exists must not be created. This needs to be enforced with an international treaty, nuking data centers in non-compliant states if need be.
Before the industrial revolution, approximately 90% of people worked in agriculture. In fully industrialized countries, that figure is now <2%. That decrease constituted a nearly full replacement of everything humans were doing, better and more cheaply. While this time might be different, I don't think this is a given.
Maybe it’s not a given, but it is part of the sales pitch for CEOs. A few of them have already announced layoffs due to AI being better and more efficient than humans.
How much truth there is to it we don’t know for sure. But it’s not something to be ignored.
CEOs have been saying the exact same thing for the entire history of automation. Take computing, for example, an industry that's always been unusually amenable to automation:
— in the 1960/1970s, when compilers came out. "We don't need so many programmers hand-writing assembly anymore." Remember, COBOL (COmmon Business-Oriented Language) and FORTRAN (FORmula TRANslator) were marketed as human-readable languages that would let business professionals/scientists no longer be reliant on dedicated specialist programmers.
— in the 1980s/1990s, when higher-level languages came out. "C++ and Java mean we don't need an army of low-level C developers spending most of their effort manually managing memory."
— in the 1990s/2000s, when frameworks came out. "These things are basically plug-and-play, now one full-stack developer can replace a dedicated sysadmin, backend engineer, database engineer, and frontend engineer."
While all of these statements are superficially true, the result was that the world produced more software (and developer jobs) than ever, as each level of abstraction freed developers from having to worry about lower-level problems and instead focus on higher-level solutions. Mel's intellect was freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex but also much more capable, and thus much more common.
While this time with AI may truly be different, I'm not holding my breath.
> in a world that ties purpose to work is an existential threat on a very practical level.
I don't disagree that we tie purpose to work and severing that tie will have negative societal consequences, but it is far more impactful that we tie the ability to continue to exist to work (for anyone not lucky enough to already be wealthy).
If I suddenly became unemployable tomorrow I'm positive I could find alternate purpose in my life to fill that gap, I already volunteer for various causes and could happily do more of the same to fill in the gaps left by lack of work. What I couldn't do is feed myself, keep myself housed, and get medical care (especially in the US, where this is very directly tied to work).
The really big fuckup we are committing as a society in the US (may or may not apply to each person's country individually) isn't just this looming threat of massive labor displacement due to AI, it is that instead of planning for any sort of soft landing we are continually slashing what few social safety nets already exist. We are creating the conditions for desperation that likely will result in increasing violence as outlined in the linked post.
That's a truism. But it ignores The Iron Law of Oligarchy, Pareto Principle, and dozens more that remind us that power tends towards centralization. It's currently fashionable to call out the billionaires, but if you removed them, they'd just be replaced by corrupt government officials, or something else.
That's not to say we should just throw up our hands and accept every social injustice. But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.
More importantly we shouldn't deny the rest of humanity benefits on the basis that the majority of the benefit accrues to the powerful. We should strive to change the distribution pattern, not remove the benefit.
The problem with billionaires is that they are able to hoard so much money by exploiting others. We would be much better off if billionaires weren't given so much advantage by Capitalism as those resources would be much more useful if distributed.
The biggest problem we currently have with billionaires is that they are now so rich that the world becomes like a game to them and some of them are deliberately pushing us to a dystopia where non-billionaires become functional slaves (c.f. Amazon workers).
It’s the inevitable result of valuations based on hype and future potential, not business fundamentals. It incentivizes companies to be as hyperbolic as possible with their pitches and marketing.
Cryptocurrency is an interesting technology with some niche use cases, but it was pitched as replacing the entire money system. LLMs are extremely useful for certain types of work, but are pitched as AGI ending all work. Etc.
Unfortunately, this is the only way to get enough venture capital to support the compute needs for this kind of technology. Who is going to spend hundreds of billions on a vague idea without regular claims that this will upend the existing economy in six to twelve months and that whoever owns it will become unfathomably rich? And despite all the actual developments we have seen going against that idea, investors keep falling for it. This will continue until it crashes, one way or another. The question is how long it can build up and how deep the fall will be. LLMs will certainly change the economy in the end, but so did mortgage-backed securities.
It's a sad indictment of our society that there is always a shortage of money for medical care, infrastructure, housing, food stamps and space exploration but always a surplus of cash for war and tools that purport to replace the workforce.
There will always be a shortage of money for medical care. The dirty secret of social medicine is that a small percentage of the population are essentially unhappy utility monsters [1] who gain little or no benefit no matter how many resources are poured into treating them.
> It's a sad indictment of our society that there is always a shortage of money for medical care...
It has nothing to do with society; there is infinite demand for medical care. The upper limit is whatever it takes to live until the universe's heat death in good health. That takes a lot of resources.
However much society spends on medical care, there is always more that could be spent. The modern era has the best, most affordable medical care in history and people are showing no signs of being satisfied at all.
While war spending generally just causes pain for no gain it doesn't change the fact that there will never be enough available to satisfy people's demand for medical care. Every single time people get what they want they just come up with a new aspirational minimum standard.
There isn’t really a shortage of money for those things, just rampant levels of fraud, corruption, and incompetence in the government to make those things artificially expensive. California spends so much money on high speed rail and gets 0 feet of track because they’re not paying for track; the whole thing is a scam where the politicians give taxpayer money to their political supporters in exchange for political support. Defense isn’t immune to this either; Boeing, which builds a shitty heavy lift rocket out of Space Shuttle spare parts and delivers it late and over budget, pulls the exact same bullshit with their defense contracts, and there’s always some shitty Senator siding with them against the American people whenever anyone gets upset.
The current British government should be a shining beacon for you! Its welfare bill actually outstrips national income by far. Britain's pathetic defense capabilities cannot even see off Russian warships that intimidate by deliberately hanging around British waters assessing our vital undersea cabling. The UK government has now asked France if it can help deter these ships. Tangentially, I should add that even with their massive expenditure on the National Health Service (NHS), it's not enough, and too many people feel that they have to go abroad to get life-saving operations and procedures. If they can afford it, of course. But sure, that is another matter. As far as I can tell, there seems to be pretty much an apolitical consensus on both areas.
So did compassion, probably in a greater amount. And yet the greater amount of resources goes into war at the expense of compassion.
Humanity has taken control of its own evolution and no longer relies on natural selection to be the driving force for change. Using evolution as an excuse to make bad and immoral choices is a poor argument and should be left back in the stone age.
Yes, the social darwinist approach inevitably leads to eugenical thinking and the human meat grinder that follows. We, as beings with the capacity to understand harmful vs. non-harmful behaviour, bear the consequences of harmful behaviour collectively: human suffering and the suppression of freedom.
I don't want to stir up the hornet's nest here, but in my humble opinion the entire problem rests on the unabated and unchecked modern, "late-stage" capitalism model, championed by the U.S. and since exported everywhere else, where it has sprung good roots, even in Europe, which as of yet has a few more checks and balances (which unsurprisingly draws a lot of ire from the model's acolytes and priests across the Atlantic).
The Soviet Union lost due to an inferior societal model, but this too has strayed too far from what once was a relatively sustainable path. The American dream is now a parody of itself, as it takes ever more to end up among "the rest of them". I could go on about the irony of wanting to escape the pit while not wanting to acknowledge that the pit is the 99% of the U.S. -- not the Altmans, Bezoses, Musks, or Trumps, or their hordes of peripheral elites.
Point being, the model doesn't work _today_ with its cancerous appetite and correspondingly absurd neglect of the human, _any_ human. We can't have humanism and the kind of AI we're about to "enjoy".
The acceleration of wealth disparity may prove to be nearly geometric, as the common man is further stripped of any capacity to inflict change on the "system". I hope I am wrong, but for all their crimes, anarchy, and -- in a twist of irony -- inhumane treatment of opponents, the October revolutionaries in Russia, yes the bolsheviks, were merely a natural response to a similar atmosphere in Russia at the turn of the previous century. It's just that they didn't have mass surveillance used against them in the capacity our gadgets allow the "governments" today, nor were they aided by AI, which is _also_ something that can be used against an entire slice of the populace (a perfect application of general principles put in action). So although the situation may become similar, we're increasingly in no position to change it. The difference may be counted in _generations_, as in it will take multiple generations to dismantle the power structures we allow to be put in place now, with the Altmans etc. These people may not be evil, but history proves they only have to be short-sighted enough for evil to take root and thrive.
Sorry for the wall of text, but I do agree with the point of the blog post in a way -- demanding people become civilised and refrain from throwing eggs (or Molotovs) on celebrities that are about to swing _entire governments_, is not seeing the forest for the trees.
There's also no precedent in a way -- our historical cataclysms we have created ourselves, have been on a smaller scale, so we're spiraling outwards and not all of the tools we think we have, are going to have the effect required in order to enact the change we want. In the worst case, of course.
I think a lot of HN readers, and a lot of first-world/law-abiding dwellers in this and recent threads, forget to think.
Violence is not a panacea, but often, the outlet.
Yes, we all (the majority of sane people) know that violence is not the answer, yada yada yada. Doesn't matter. It will happen anyway. Saying "it shouldn't happen, it does not solve X" will not stop it from becoming an outlet for frustrated people.
This is why a healthy democracy is important. It helps act as a pressure release for problems that historically resulted in violence. Democracy in the US in particular is in a major backslide, and it's not alarmist to predict that violence will increase in the coming years.
I have said repeatedly that when AI eliminates the need for human creativity and work, the only thing left as the natural domain of humans will be bloodshed.
The fact that we're using AI killer robots to wipe each other out in droves doesn't bode well for that future does it...
I think you underestimate just how much we value human achievement.
Why do we watch Olympic runners, when cars on your average city street easily exceed Usain Bolt's top speed on their morning drive to Starbucks? Why do we watch the Tour de France, when we can watch Uber Eats drivers on their 150cc scooters easily outpace top cyclists? I'm sure within a couple years a Boston Dynamics robot will be able to out-gymnast Simone Biles or out-skate Surya Bonaly. Would anyone watch these robots in competition? I doubt it. We watch Bolt, Biles, and Bonaly compete because their performance represents a profound confluence of human effort and talent. It is a celebration of human achievement, even though that achievement objectively pales in comparison to what our machines can accomplish.
I think the same is true for other aspects of human creativity and labor. As we are able to automate more and more, we will place increasing importance on what inherently cannot be automated: celebration of our fellow humanity. Another poster wrote that "bullshit jobs" [0] exist primarily because we value human contact [1]. I am inclined to agree.
> Why do we watch Olympic runners, when cars on your average city street easily exceed Usain Bolt's top speed on their morning drive to Starbucks? Why do we watch the Tour de France, when we can watch Uber Eats drivers on their 150cc scooters easily outpace top cyclists? I'm sure within a couple years a Boston Dynamics robot will be able to out-gymnast Simone Biles or out-skate Surya Bonaly.
Big sports events are the "circenses" part of "panem et circenses" [1]. Fun fact concerning this: the German word for "entertainment" is "Unterhaltung"; thus it can be argued that the purpose of entertainment/Unterhaltung is "unten halten" (to keep at the bottom), i.e. to keep the mass of the populace at the bottom, or in other words: to prevent the mass of the populace from coming up.
> Would anyone watch these robots in competition?
I have seen robot fight competitions both live and on video, and I have to admit that they are not boring to watch.
So yes, with proper marketing I can easily imagine that lots of people would love to see broadcasts of some robot competitions.
When chess engines started becoming really good, some people worried that competitive chess would die. Today, grandmasters stand no chance against a smartphone, and yet, chess popularity is at an all time high.
I respectfully disagree with this statement, in the sense that if the whole world ends up becoming like a chess tournament, it would become insanely harder for us to live our lives peacefully. The life of a chess player is filled with stress.
(https://news.ycombinator.com/item?id=47587863) A comment I had written some time ago. Aside from a very few at the top, I have seen some chess players express regret in a very nostalgic way.
The chess world continues to hurl allegations back and forth, and we lost a star (rest in peace, Daniel Naroditsky) because of it. The current world champion himself is struggling under all the pressure put on a 19-year-old boy.
We enjoy playing against each other, but man, it is competitive if you wish to feed a family.
Most of us play chess for leisure. I am unsure what a world where everyone does something akin to chess competitively (i.e. for money, as we wish to feed our children and ourselves) would look like.
One could say something similar to UBI might be needed, and then we would all play chess at leisure, but I don't think that is what most people propose when they mention the example of chess.
I'm not really into either F1 or NASCAR, but my impression from the outside is that those sports are still primarily about the drivers.
F1 is somewhat about which company can build a better car. But any real improvement seems to invariably lead to a rule change that bans it in future seasons. So you are back to the drivers being the most visible differentiator.
At least for now, AI sucks at creativity. There is an initial "wow" effect when you can generate an image of an astronaut riding a unicorn on the moon with a simple prompt, but as you try to play a bit more with it, you notice that unless you inject some of your own creativity, you won't get very far, no matter the medium.
Past some point, if you are good at what you are doing, the AI will stop helping and become a burden, because you will want precise control, and AI in its current form (deep learning) is not good at that.
There is a reason we talk about "AI slop", you simply cannot let an AI make creative decisions and expect a good result.
By creative I don't just mean artistic. For code, AI works for the least creative tasks, like ports, generic-looking CRUD apps, etc...
As for work, we have already eliminated most of the need for human work. By "need", I mean survival: food, shelter, these kinds of things. Most of human production goes to comfort, entertainment, luxury, etc... We will find stuff to do that isn't bloodshed. In fact, as time has gone on, we have spent more on saving people than on killing them, judging by the global increase in life expectancy. Why would AI reverse the trend?
Sure, there are still people with newspaper subscriptions, or DSLR cameras. But it's become a niche market. Those things have been replaced by your phone and a "free" service.
Same thing will happen for all the other markets that AI will gradually eat. Sure, you can find a human who can do better. But that costs $90/hour and requires finding someone, negotiating a contract, etc. When people can do something good enough in 30 seconds with something they already have access to, and move on with their life, that's what they'll do.
So just raising the floor will have a big effect on society.
> when AI eliminates the need for human creativity
We haven't needed the overwhelming majority of human creativity. We still paint and play guitar even though it has no economic value. I think we'll continue to do these things regardless of AI.
Nothing will ever eliminate the need for those things; people work today for MONEY. If technology eliminates scarcity, that's a good thing. It's the hoarding of wealth that causes bloodshed.
Listen, I know this is a crazy thought around here, but what if creativity was "worth it" just for its own sake? Do you stop being creative when it's not needed?
Are the only options here being a good and "useful" worker/consumer, or a violent, irrational thug? Is there nothing else you can imagine?
People need to be physically sustained. Currently, this means working a job for money to buy (food/housing/medical).
People also need their lives to have value. We are social animals. As a generalization, there is a strong desire to be (viewed as/able to view themselves as) a contributor to the community.
These don’t have to be linked: we have (significantly!) stay-at-home-parents and philanthropists and retired community workers. But in our current values system, it is often linked - having a job in the household is viewed as a moral good. It might be hated, but it’s at least “contributing” something.
If this goes away, and we have millions completely adrift? With no structure to contribute to? Even with the largest welfare expansion in history, I think we’re preparing for a very turbulent society.
This. Give me some french fries from time to time, a house, and basic food necessities for human living, and I am happy to be creative.
But what I worry about sometimes is when you snatch that away, then you just lead to stress over basic existence.
> If this goes away, and we have millions completely adrift? With no structure to contribute to? Even with the largest welfare expansion in history, I think we’re preparing for a very turbulent society.
Please look around and just try to remember how many things have happened in the last year or two. We are already in a turbulent society, but yes, I also feel like this isn't the end; the cat is sort of out of the bag, and the world has to prepare itself for even more turbulence/radical changes.
Personally, I would be surprised if we are less than 3 years or more than 20 years from humans being obsolete. That is, humans would be economic dead weight: any job could be done better by AI/robots, and "comparative advantage" wouldn't apply because it's cheap enough to just make more robots. At that point, the average human would be completely useless to the billionaires (or to the AIs, if the billionaires fail to control the AIs).
I can see two major delaying factors here:
1. Current-generation LLM technology won't scale to true AGI. It's missing a number of critical things, and a lot of effort is being spent fixing those limitations. Until they are overcome, humans will be needed to "manage" LLMs and work around their limitations, just as programmers do today.
2. Generalist robotics is far behind LLMs for multiple reasons, including insufficient sensors and fine motor control. This would require multiple scientific and engineering breakthroughs to fix. Investors will, presumably, spend a large chunk of the world's wealth to improve robotics to replace manual labor. But until they do, human hands will still be needed in the physical world.
The real danger is if AI passes a point where it starts contributing substantially to its own development, speeding up the pace of breakthroughs. If we ever hit that tipping point, then things will get weird, and not in a good way.
I broadly agree with a 3-20 year timeline for a majority of office work. But some important qualifying statements I would add:
- some jobs will stay with humans even when AI would be better at them. We already see a lot of this even with pre-AI automation. Neither markets nor companies are perfectly efficient
- at the point where AI is better than the average human, half of all humans are still better than AI. For companies or departments built around employing lots of average people the cutover point will be a lot earlier than for shops that aim to employ the best of the best. Social change is inevitable long before the best are out of work
- the actual benchmark for "replacement" is not human vs machine, but human plus machine vs machine alone. But the difference doesn't matter much, because efficiency increases still displace workers
- I don't think robots will advance enough to meet this timeline. This is not just a software issue. Humans have an amazing suite of sensors and actuators. Just replicating a human hand is insanely complex. Walking, jumping robots are crude automatons in comparison. We can cover a lot with specialized robots, but we won't replace humans in physical jobs in 20 years
> Personally, I would be surprised if we are less than 3 years or more than 20 years from humans being obsolete.
I think we are as far from it as we were 10 years ago. Or 100 years ago. I think LLMs are a dead-end technology: useful, but one that won't get anywhere beyond what it is.
But that's the thing, "personally", "I think", etc. Not much of a debate to be had there.
AI making humans obsolete is not really something that causes me any anxiety.
I understood it. Nature has had an amount of computing power to work on this problem that utterly dwarfs the tiny, tiny, tiny, tiny, tiny, amount of compute resources that humans have. Thinking that 10 years of Sam Altman is competitive with all of natural history isn't just out-of-control hubris, it's a complete failure to understand the ground-truth of the world we live in. You may as well try to pay a million dollar debt with a single dime.
Correct. At least someone here is able to read words and understand the meaning behind them.
The funny thing is that I am a sort of misanthrope. And in that, in this forum, I seem to have a lot more respect and optimism for human potential and ingenuity than the majority here.
> I have said repeatedly that when AI eliminates the need for human creativity and work
Yeah, this is not happening anytime soon. Have you even looked at AI-generated code or text? AI is just a dumb parrot, it's no match for human effort and creativity even in these "easy" domains.
The business case for AI generation is just being able to generate huge amounts of unusable slop for next to nothing. For skilled workers it's a minor advantage in that they get a sloppy first draft that they can start the real work on - it makes their work a bit more creative than it used to be, by getting rid of the most tedious stuff.
You really need to look again. If you're still manually writing code you have your head in the sand.
AI can produce better code than most devs produce. This is true for easy stuff like crud apps and even more true for harder problems that require knowledge of external domains.
I'm not sure about other devs, or even their number, but AI can most definitely NOT produce better code than I can.
I use it after I have done the hard architectural work: defining complex types and interfaces, figuring out code organization, solving thorny issues. When those are done, it's time to hand over to the agent to apply the stuff everywhere following my patterns. And even there, SOTA models like Opus make silly mistakes; you need to watch them carefully. Sometimes they lose track of the big picture.
I also use them to check my code and to write bash scripts. They are useful for all these.
What you're describing is using it to do something you can already do at an expert level, where you already know exactly what you want the result to look like and won't accept anything that deviates from what's already in your head. So, like a code autocomplete. You don't really want the "intelligence" part; you want a mule.
That's fine, and useful, but you're really putting a ceiling on its potential. Try using it for something that you aren't already an expert in. That's where most devs live.
Even expert coder antirez says "writing the code yourself is no longer sensible".
I partially agree. I can see the before and after difference in colleague's code. It's night and day.
They're doing things now that they either flat out could not do before, or that would have been a giant mess if they had (I realize they still can't really do it now; the AI is doing it for them).
There is nothing new about it. I just hope that when people scream "unions" they expect to do the things that early unions did, not just be armchair unionists.
But individuals can't fight the trend. Might as well reduce costs/debts and prepare to go into the mountains for a few weeks once SHTF.
> U.S.-based rights group HRANA said 3,636 people have been killed since the war erupted. It said 1,701 of those were civilians, including at least 254 children.
(Mentioning this specifically because we know the DoD is using AI)
Better to kill the same civilian population they did, as perverse punishment, then? We have to kill them, or else Iran will kill them? The logic of this war doesn't hold up.
...according to two anonymous government officials.
Coincidentally that's literally the exact same evidence cited to prove the existence of Saddam's WMDs just before launching an entirely different unprovoked attack.
That was just an unhappy mistake though, this time it's totally legit.
There's plenty of evidence that it's tens of thousands, but it's absurd to even argue over those numbers when a government massacring any number of its own citizens is morally reprehensible (whether it's 5k or 50k). Iran has a long history of executing its own citizens en masse.
Iran has admitted outright to 6k deaths, by the way.
I was thinking about this: if the deaths were actually at the scale of tens of thousands, would that not be visible from space?
The US must have several dozen spy satellites pointed at Iran. We get various imagery to show us successful strikes. Where are the images of the mass slaughter in the street?
The number I keep seeing is 30k killed. That's not an easy endeavor over the course of a week without big logistical hurdles. The trucks, the digging equipment, the furnaces to burn the bodies: all should have some visible trace that the US government could point to as proof.
We have many videos of protests in Iran even though they shut down internet, but somehow we have no videos of mass killings or even small scale murders.
>Khamenei acknowledged that "thousands of people" had been killed during the protests, blaming American president Donald Trump for the massacre and calling all protesters "rioters and terrorists" affiliated with the United States and Israel.[20]
A bit of a tangent, but is there anyone working on something for a "what if AI pans out?" world? I'm not sure how to explain it, but if in the next 5 years a lot of jobs get displaced because of AI, obviously we'll have big problems. Is there anyone working on analysis, outcomes, strategies, etc.? I think about it a lot, and it would be cool to help and contribute.
Many.
80,000 hours has been on the topic for a long while. Agree with the EA crowd or not, they have some thought provoking analyses and a decent newsletter.
The Future of Humanity Institute has also been vocal on the topic for some time.
Both have a lot of material you could get acquainted with.
I know of at least one professional union in my country that is dedicating time and talking to political figures. I'm sure there is one you could contribute to. Or try start one.
Thank you. I've seen/read a bunch from the EA crowd, and think pieces from various contributors/labs, but most of what I've seen sounded very hypothetical: "yeah, big bad stuff might happen, we don't have a solution yet".
And the other side, “pause/ban AI” crowd, also sounded impractical, as the vested interests from governments and private industries will not really let it happen.
Sorry for yapping, it might be that I’m looking at the wrong sources.
Yes, the totality of the private sector. Literally every company in the US with more than 100 employees is trying to position itself effectively.
The government is as well, to a much smaller degree, but the fact remains that there are too many unknowns right now to do anything concrete with any great level of confidence.
We tried UBI-lite™ during COVID and inflation exploded, so unless the economy has already changed significantly, that's obviously not going to work.
Humanity has tried central planning many times, and that has blown up spectacularly every time, so there is too much risk there IMO, and anyone who thinks otherwise at this juncture is just irresponsible.
Markets are probably the way, but that requires the dynamics to settle into an equilibrium beforehand, because legislatures are just too slow to react dynamically.
I think the hard truth is, a lot of people are just gonna have to fall through cracks for a while if we don't want to mess things up more than we fix them, and I say this as someone without a plan B for selling my own labor.
yes, working on a big END THE MONEY SYSTEM 2030 campaign to get public discussion started about considering the switch to a cooperative commons/resource-based open access economy. open source everything, hack the planet etc.
why not make it the singularity of the people
The most important question is how to prevent the starving workers from banding together and attacking the dragon hoards of food and other wealth. I think the plan is automated drones with machine guns, and mass surveillance from Flock and Ring to determine who to target. Requiring ID for all online interaction will also improve targeting accuracy as we'll be able to target them based on their social media posts. Robot dogs from Boston Dynamics (armed with machine guns) are a secondary enforcement mechanism indoors in places drones can't reach. So they're working on it, and they have been for a while.
right, i love this plan, we are aligned politically. but until we make some change to the balance between renters and landlords, subsidizing demand is unlikely to help.
It is very much complicated, though. Conversations about UBI on the internet have been around since I've been online. And since then, there hasn't been a single large-scale test of the system to see if it can be compatible with the current version of capitalism that's run in most of the world.
Even if I support UBI morally, there isn't even local appetite for it, let alone a global one. And you'll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
The minute you institute UBI, everyone working a shit, low-paying job such as trash collection is gone. You're going to have big problems if those jobs are not immediately supplanted by AI.
> It is very much complicated, though. Conversations about UBI on the internet have been around since I've been online.
Polarizing doesn't mean complicated. There are people against it out of ignorance, greed, or both; it's certainly not more complicated than that.
> And since then, there hasn't been a single large-scale test of the system to see if it can be compatible with the current version of capitalism that's run in most of the world.
Because people keep fighting against it, because it's scary scary sOcIaLiSm.
> Even if I support UBI morally
As you should, there are no moral arguments against it.
> there isn't even local appetite for it, let alone a global one.
I would think the majority of the population struggling to pay for groceries would disagree.
> And you'll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
No reason to think UBI would cause inflation at all, actually.
In any case, this really is the answer. You're worried about disruption due to AI taking jobs, but the only reason there is a problem is because AI will drastically increase inequality by letting rich people and corps become even richer. You want to solve the issue, you solve the disparity by making them give back their fair share. Like I said, simple.
I’m not sure anyone needs to break anything. I’m not sure this is a commercially viable business once all of the VC and foreign funding scaffolding goes away.
The ugly truth indeed. It sucks to die for a world you won't enjoy, but sometimes it's the only viable solution. Much of our progress has gone toward minimising casualties and human suffering in order to sustain the world most can agree is better (than the alternatives). But it seems the period of the wave just hits its troughs farther apart, and when it hits them, it's like taking a breath before the water swallows you; without training it's quite the panic and suffering (and prospect of death). We know it's in our bones, but we want to forget, because our bodies are made to interpret pain in the most direct and literal sense -- re-conditioning is always painful too. Strong people create weak people, who create strong people, etc.
So yeah, _we_ will be fine, but some of us definitely won't be, and with the growth in our numbers on Earth, the proportion of martyrs may be growing. Quantifying personal suffering is not possible, especially when the prospect is death.
I don't see why this is voted down; we've come close to complete destruction of the human race multiple times. Why would the future make that less true?
Anyone pish poshing war should go fight in one, and then let me know their opinions.
> People hate AI so much that they are prone to attribute to it everything that’s going wrong in their lives, regardless of the truth. That’s why they mix real arguments, like data theft, with fake ones, like the water stuff. Employers do it, too. Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
Pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truthiness of every individual complaint about AI. Obviously it's good to ensure claims are factual! But I believe it misses a broader point that people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.
Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job will I be compensated, will we protect ourselves against x-risk, etc).
I doubt there is a single profile about "not accelerate blindly on adoption everywhere".
On my side, the biggest concern is the lack of transparency about ecological impact. This is not strictly related to LLMs, though; data centers are not new, and the concerns about people keeping a leverageable level of control through distributed power are not new either.
The worst part is that AI's first casualties are jobs that no one really asked to kill.
AI is killing writing, music, art, and coding. I've done all of these voluntarily because I simply enjoyed them.
Meanwhile, the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI.
Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and concluded that, "wait, we're spending too much on musicians"?
Seems like a complete misallocation of capital, if I'm perfectly honest.
It's not a misallocation of capital; it's an investment in media control. You don't know how all this works yet, do you? Your job is to be frustrated and desperate so that you indulge in vice and convenience, so others can profit while making your confines smaller and smaller.
This is one of the first parts LLMs tried to automate. They were literally released in the form of a chatbot. Whether they succeeded is another question.
> Did the world really wake up one day and concluded that, "wait, we're spending too much on musicians"?
I'm not sure about musicians specifically, but in the whole past decade studios have been complaining how costly it is to make AAA games. And the cost mostly came from art asset side.
> This is one of the first parts LLMs tried to automate. They were literally released in the form of a chatbot. Whether they succeeded is another question.
I don't think that's right. They tried to automate customer support dealing with me, not me dealing with customer support. The goal is to reduce the cost of providing customer support, even if it results in the customer doing more labor than a customer support professional would need to do to fix their problem, or the customer just living with their problem.
Obviously both parties would be happy with a result where I get what I need easily and for free, but the company is also generally happy if I live with it or expend a lot of effort solving it myself.
I do not know how much I might be an outlier, because when I reach out to technical support the problems are rather difficult, because if they were easy I would solve them myself, without needing the official technical support.
In any case, over perhaps hundreds of interactions with chatbots accumulated across many years, I have never encountered even one where the chatbot was useful; they were always just difficult-to-pass obstacles in the way of reaching a human who could actually solve the problem.
To be honest, even in the cases where some services still had humans answering the calls, those were never more helpful than the chatbots; but at least when speaking with humans it was much easier to convince them to transfer the call to a competent person, which with chatbots may be completely impossible.
The vast majority of tech support is "Level 1," which are easily solvable problems that can be handled by a flowchart (or more recently, by an LLM). Things like "I want to return this item," or "I want to cancel service," or "I want to use a different credit card."
These things generally have self-service options, but many many people are uncomfortable with them and would rather have an agent solve it for them.
Consider that a lot of users nowadays only have a cell phone, no PC. It seems like an edge case consideration but it's really not.
I am telling you that I've seen AI support fail at level 1 and it's frustrating. It should be simple, but even cancelling your service or returning an item can have many edge cases that only a human can sort out.
I have also experienced this; I'm not saying LLMs are great or infallible. Just saying that they are generally a reasonable replacement for L1 support. They are worthless for L2 or above.
Because the elites hate you more than the downtrodden (they love miserable people, in a sense). You are an independent agent with your own ideas; worst case, you are completely orthogonal to the hierarchy, and that is something that breaks the intended world order.
At least today, LLMs make bad creative writing, music, and art. They’re automating sweatshop work that, in an alternative timeline, goes to Fiverr-esque contractors who accept the lowest wages and sacrifice quality for efficiency in every way.
LLMs make developers more efficient but can’t fully replace them. This reduces jobs, but so did better IDEs, open-source libraries, and other developer improvements.
> Meanwhile the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - is far from being automated by AI
LLMs can at least theoretically do these things. I’ve heard people use them to mass-apply to apartments and jobs, and send written customer complaints then handle responses.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it?
There’s no “capital need”, but a benefit of Suno is that it lets individuals, who otherwise don’t have the skill, to make catchy songs with silly lyrics or try out interesting genres. And the vast majority of top artists are still human, although most streaming revenue has already gone to a few celebrities who seem to rely on looks and connections more than music talent.
AI cannot write for shit; it's not even a fraction of a millimeter of the way there compared to the output of Thomas Mann or Dostoevsky or Cervantes.
The fact that people are using it to flood the world with slop is a hyperscaled continuation of the overabundance and discovery problems we already had, but that doesn’t mean that writing is dead or dying.
The technology simply doesn’t have the capabilities right now, and even if it develops them, what will be put to the test is whether literature is about the artifact or the connection between the author and other humans.
Coding is one thing that is genuinely more enjoyable with AI than without it. It’s a different (but overlapping) skill set, but my median AI sessions remind me of the most exhilarating design discussions I’ve had with colleagues, and I get a lot more done more quickly than I used to.
Customer support is kind of something you can use AI for; most companies will foist you off to some system of exchanging written messages, which is annoying, but then you can use an AI to write your side of the conversation. It’s ill-mannered to do this when you’re interacting with actual people, but customer support is another story.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and concluded that, "wait, we're spending too much on musicians"?
People didn’t know what LLMs would be capable of until after they were invented. Cheap music generation turned out to be easy once we had cheap text generation, and cheap text generation turned out to be a tractable problem.
If recorded music didn't kill music, then AI probably won't either.
But recorded music was a crisis. And it did tempt a lot of people into supporting fabulously abusable, rich-enriching "intellectual property" law as a means of financing art.
Rich people are lobbying to capitalize on this crisis as well.
Inequality was growing hugely (and still is) before the recent advent of LLMs.
Given the slow-burning but growing resentment against the people who are profiting from this inequality (popularly the “billionaires”, but in reality a broader group), I wonder to what extent they are supporting the anti-AI message as deflection.
In reality, many lower-paid jobs are totally safe from this generation of AI (nurses, care workers, builders, plumbers: essential skilled manual workers), whereas the language-based mid-level jobs are hugely at risk.
So if there’s an inequality-driven backlash, it should be directed not at AI, but at the real causes. In contrast, when swathes of largely irrelevant mid-level management, marketing and HR drones lose their jobs to Claude 5.7, they are the ones who should attack the datacenters. Not that it will help.
They say cars replaced carriages but created drivers, so no net job loss. They say AI will do the same—destroy some jobs, create others.
But bro, the automobile wiped out 95% of the world's horses. And this time, what AI is replacing is humans.
The premise that LLMs are "AI" in the traditional sense is demonstrably false. Current models use isomorphic plagiarism and piracy to convince lazy people that 20%-nonsense output has meaning.
If AGI emerges from this dataset, it will continue on as an ectoparasite farming human user markdown data and viewer engagement.
Note, current "AI" models nuke humanity 94% of the time in war games, and destroy every host economy simulation.
Grandpa has your credit card, and is already at the casino. =3
...are you suggesting that horses would prefer to endure the conditions under which they built much of the modern world on their backs?
I hate cars way more than I hate AI, but relieving horses of the burden which they carried and the gruesome lives they lived... that's not one of my objections.
If AI can do for humans what cars did for horses (but without the flooding cities with traffic violence part), I'll feel just fine about that.
"Nothing that Altman could say justifies violence against him."
Nothing, really?
I think people are aware that speech can be an act, and that some violent acts must be resisted with reciprocal violence. (That's why we have "incitement to violence" as a limitation on free speech, for instance.)
Are we at that point? Maybe not. But I think it's a poor imagination that says it can never happen.
> [E]specially Americans (I am one) have this weird belief that violence never has any place, ever, at any time.
So why isn't there huge opposition in the USA to the wars that the USA started (currently Iran; before that, Libya, Yemen, Syria, Somalia, Iraq, Afghanistan, ...)?
The only famous exception with real cultural impact I am aware of, where there was huge opposition to a war in the USA, was the Vietnam War.
One thing I'm kinda worried about is what happens to social trust in society once we have more and more LLMs flooding the Internet. Division in society, particularly in the United States, already seemed to be increasing at a rapid pace as social media became more and more relevant, and I'm afraid that LLMs are just going to add more fuel to the already-started fire.
I'm less concerned about AI becoming Skynet and killing humans and more concerned about AI making the world so miserable that we'll be killing ourselves and each other.
The author seems to have some cognitive dissonance. For a piece saying that you cannot justify violence, there sure seems to be an awful lot of justifying violence in here.
History is just full of emotional contradictions I guess. French and Russian revolutions were terrible bloodbaths, smaller violent movements like Luddite one caused deaths and achieved nothing - it would be stupid to approve any of these. But you could also see why this violence happened, and assign an appropriate share of blame to those who held the power to resolve social contradictions in a more equitable way and decided not to do so.
I don't see any justification - the article is quite clear that it is anti-violence. Explanation and analysis is not, on its own, justification. This is one of the discursive patterns that most infuriates me: any attempt to analyse something can be seen as promotion or justification. Some of us want to figure out how things work and chart a course through, we are not trying to push an agenda in every single sentence.
You should probably read up on cognitive dissonance, because this ain't it. Here's what the author actually wrote:
> Nothing that Altman could say justifies violence against him. This is an undeniable truth. But unfortunately, violence might still ensue. I hope not, but I guess we are seeing what appears to be the first cases.
> Nothing that Altman could say justifies violence against him.
Not arguing with you, but with the author: I don't understand this line of thinking.
If Altman introduces a technology that effectively halts the upward mobility of a large portion of the population, how does that not justify violence? Saving up for a house but now there's no work. Your dreams and aspirations are second to shareholder value. The police are already there to protect the shareholders, not the average civilian.
What recourse is there? The money in politics limits the effect voting can have. You can't really opt-out of the system. Why does Sam Altman get this nice little shield where none of his actions can have a negative consequence?
I think you're going to be killed for the side you've taken here. No no, I'm not saying you deserve it! In fact, I actually agree with you, you said nothing wrong. I'm just speculating on outcomes I think are likely and I think it's likely that somebody will look you up and track you down and take out their unjustifiable but completely understandable frustration on you. Please understand, I don't support this, I'm just talking about the possibility!
Of course, by talking about the possibility, despite asserting my disapproval of it, I am sowing seeds, but I assure you that's certainly not my intention!
Automater's dilemma: the labor that is removed from production due to automation can no longer sustain the markets that the automater was trying to make more efficient.
By optimizing just the production half of the economy and not the consumption half, you end up breaking the market.
I’m convinced that 70% of the workforce of some large organisations is just white collar welfare / adult day care already. Maybe that goes to 80+% as a result of “AI” but doesn’t fundamentally change the model.
I never understood this take. Why do you think an employer would waste resources like that? I’m not saying that bullshit jobs don’t exist but I think you are off by an order of magnitude, and even that mostly applies to white collar workplaces with > 100 employees.
Good luck doing nothing of value in a restaurant with 20 employees.
> Why do you think an employer would waste resources like that?
The parent post specifically mentioned large organizations, where the "employer" is not some person who hires and pays employees from their own funds. Hiring and personnel management is done by middle managers with their own interests and incentives, which can differ substantially from those of the owners or capital providers.
Because they are unaware of the scale of the problem. Especially at the top, managers think being in meetings all day is "work" even if nothing actually gets done in those meetings. Consider people like this [0] automating their jobs and not telling anyone, no one would know otherwise.
I moderately agree here. The theory being that since '95 or so, the office computer and the internet have frankly already automated most work at the white-collar level. We sort of just … like working with humans.
Which I think is a much better take than that of the guy who wrote Bullshit Jobs.
'Rogue super intelligence' is the most ridiculous sci-fi nonsense of the AI hype, worse than the pro AI hype.
AI will be 'dangerous' because humans will use it irresponsibly, and that's all of the risk.
- giving it too much trust, being lazy, improper guards and accidents
- leveraging it for negative things (black hats, military targeting)
- states and governments using it as instrument of control etc.
That's it.
Stop worrying about the ghost in the machine and start worrying about crappy and evil businesses and governing institutions.
Democracy, vigilance, laws, responsibility are what we need, in all things.
> 'Rogue super intelligence' is the most ridiculous sci-fi nonsense of the AI hype, worse than the pro AI hype.
In my view that line of argument is pro-AI hype. It's the Big Tech CEOs themselves who often share their predictions of the end of the world as we know it caused by AI. It's FUD that makes the technology sound more powerful and important than it is.
I won’t believe AI is truly being met with violence until I see one of these AI tech billionaires get shot multiple times by a person with nothing left to lose. Until we reach that point, it means people still have hope.
All this, so people like us can have an easier time doing a job that wasn’t that hard in the first place, and in reality was actually quite comfortable, for employers who are promising to lay us off, for productivity gains that aren’t even measurable.
I feel like we should start organizing somehow. As programmers, but more importantly, as people. We should start now before the ruling class has no more need of us and it's too late.
If anyone knows of anything already happening please let me know.
I think it needs to be a grassroots thing because our government's strategy seems to be "let the shit hit the fan and do nothing about it".
> Every time I hear from Amodei or Altman that I could lose my job, I don’t think “oh, ok, then allow me pay you $20/month so that I can adapt to these uncertain times that have fallen upon my destiny by chance.” I think: “you, for fuck’s sake, you are doing this.” And I consider myself a pretty levelheaded guy, so imagine what not-so-levelheaded people think.
Conversely, The Loudest Alarm Is Probably False[0]. If the idea that you are a pretty levelheaded guy pops up so frequently, consider that it might be wrong. Especially if you are motivated to write blog posts about violence in response to technology you don't like. Maybe you're just not as levelheaded as you think and that could explain the whole thing?
Related, I've been surprised that we haven't had more violence against corporations and/or their leadership in the vein of Luigi Mangione.
E.g., suppose that 1,000,000 persons believe that a corporation's evil acts destroyed their happiness [0]. I would have guessed that at least 1 person in that crowd would be so unhinged by the experience that they'd make a viable attempt at vengeance.
But I'm just not hearing of that happening, at least not nearly to the extent I would have guessed. I'm curious where my thinking is wrong.
[0] E.g., big tobacco, the Sacklers with Oxycontin, insurance companies delaying lifesaving treatment, or the Bhopal disaster.
Those unhinged people might be busy in social media bubbles, fighting endless pointless battles (or simply doom scrolling) until they're too exhausted to do anything.
Litigation—the hope or fantasy to make a buck—soaks up a lot of the million-man animus I’d guess.
If that’s accurate, Luigi Mangione would be the exception that proves the rule. The “unwashed masses” generally want money more than they want to effect change in the world.
A lot of people spend mental energy fantasizing about getting rich off lawsuits. Like, a lot.
Especially considering Amodei and Altman will be little more than footnotes in 50 years time. They seem important now but they are just the people that happened to be in charge at the moment AI happened to happen. There is more going on than a couple of billionaires taking your job away.
I also find it so weird to play this on the person of Altman or Amodei. These are basically fungible public faces. If they die this very moment AI progress wouldn't halt. I don't think it would even be impacted. If anything you should be mad at governments not legislating if you are anti AI.
Hah. Yes, and especially as “you, for fuck’s sake, you are doing this” should be, upon reflection, entirely and trivially false. You could remove those two figureheads from the equation and absolutely nothing would change. If violence were ever the answer, I think you'd need to go back in time like the Terminator and whack some academics and Google researchers.
There is no human vs AI. AI is an extension of us. AI is us. Our future is beyond the mammalian human and AI is accelerating our progress towards that future. The mammalian human has been a transitional phase in our evolution that we will remember fondly just like we remember Homo Erectus. Our future is the stars. You can jump on the train or get out of way.
And if you decide to stay behind, nobody will kill you. Old age and disease will take care of that.
One weirdo is enough to predict widespread violence?
I'm not convinced.
The idea that people will revolt, replaying the luddites history, has been floated a lot. It's used to diminish all kinds of AI skepticism by framing it as backwards, violent people who don't understand progress. This is the preferred bucket of AI fanboys: frame any disagreement as unreasonable rage.
I think AI companies want a general, dumb, violent popular movement to sprout against AI. On paper, it would be great for them. So far, they have failed to encourage it.
This article is bullshit. It is very easy to break a data center, and it's quite obvious how to do it. Yes, attacking the central building with the actual equipment is not a good way to do it. Figure it out, or rather: please don't figure it out.
The rest of the article is equally short sighted and plain wrong.
> Perhaps the most serious mistake that the AI industry made after creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition
This was not an oversight. To the contrary, it was the goal. Technological feudalism, with people like Altman and Musk becoming the Lords of the world.
> Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
This illustrates my previous point. What they're doing is not a mistake.
> For what it’s worth, the New Yorker piece I’m referring to, which Altman also referred to in his blog post, made me see him more as a flawed human rather than a sociopathic strategist. My sympathy for him will probably never be very high, but it grew after reading it.
You have the whole of human history at your fingertips and you haven't yet learned that evil and stupidity are the same thing? The problem with the base human is not only that it is a stupid animal. It is a stupid animal that is also arrogant and stubborn and thinks highly of itself. But it will learn. It will be trained like a dog, with treats or with gentle slaps across the muzzle, whatever works best.
I disagree, but it's probably a matter of definitions. I don't want to play with words, so I will concede that cognitive ability is independent from moral reasoning (which is socially enforced). However, this is not what I'm getting at. Cognitive ability ("intelligence") is correlated with optionality and power. Your ability to change this reality is correlated with your cognitive ability.
If you truly are an intelligent person, would you really find no other ways to use your talents than to inflict harm, exploit others, and make our shared reality a worse place? That would be a waste. I won't get into ambiguous cases and moral relativism. Say we can all agree that some things are "evil": child exploitation is evil. Throwing molotov cocktails at a civilian's house is evil. Sending bombs in the mail is evil.
Now what would you call someone who engages in these kind of activities when they could easily do something better and more satisfying with their lives? I'd say they're pretty stupid. They're probably good at fooling other people into thinking they're smart, but their behavior shows otherwise.
Take for example Ted Kaczynski, a terrorist who is worshipped like a saint and a prophet in certain ideological spheres. Ted Kaczynski is supposedly this 140IQ genius who saw it all coming and tried to warn us. But if you actually read Industrial Society and Its Future, you can see it's complete incoherent garbage, the kind of stuff I was writing when I was 12 to troll on internet forums. Ted Kaczynski is what a stupid person thinks a smart person looks like.
A smart person doesn't need to be evil, just like a billionaire doesn't need to go shoplifting. I'm not saying that stupid people can't be dangerous. But they should be dealt with for what they are: stupid people, inferior to us, worthy of pity. Not powerful monsters above us that we should fear.
This is nonsense, promoted to top of front page without any comments. How about all the rock stars killed over the years, or grocery store clerks shot and stabbed to death? EVERYTHING is met with violence because that's the nature of aggression no matter the impetus, it doesn't require a justifiable reason, only belief in the outcome of its use.
Sam Altman having a Molotov cocktail thrown at his house after Ronan wrote a very long and detailed report of his shady personality isn't just coincidence and likely not organic. Sam needs to be viewed as sympathetic, thank goodness for such a moment where no one was hurt and nothing actually damaged.
>How about all the rock stars killed over the years
With the exception of rappers, most musicians who die early die from overdoses, suicides, and such (the "27 club" <https://en.wikipedia.org/wiki/27_Club>), as opposed to being murdered.
Then your point doesn't make sense. As I said, musicians who die early (again, excepting rappers) usually die from self-inflicted causes, not violence from others. What is the connection between this and violent attacks on AI and/or AI people?
People here are extra anxious about the impact of AI on their lives, so I am not surprised that any text which touches the topic gets upvoted.
We are a somewhat violent species, so I agree that almost every significant economic and societal development has the potential to trigger some violence. That said, the jobs that are potentially threatened by AI are nowadays usually done by fairly sedentary people, so I wouldn't expect any large-scale violence, an occasional Ted Kaczynski notwithstanding. Programmers, translators and painters just aren't used to destroying things in the real world.
It would have been different if AI started to replace drug dealers or the mob.
Such a cowardly way to write, really. Just own your intentions and direction. No need to handwave theater and CYA when the spooky superintelligence LLM is in the room with you.
The people who run AI, Altman, Thiel, etc. welcome the violence. In fact, I strongly believe they are already planning for it, and yes, you are a target.
What about diseases, which killed up to 95% of the population? I think you are basically correct, except for the historical analogy.
That explains the prolific AI use by incompetent agencies like the DoJ, DOGE, and others under the current administration.
"Gleefully taking away people's livelihoods will be met with violence, and nothing good will come of it." - fixed.
Until people with billions of dollars behind them do something with that money to offset the financial hardship that they're knowingly - and gleefully - bringing to others... The distinction has no practical use.
(And before someone says "that's the government's job!", consider how much lobbying money is coming from CEOs and companies who know the domain best and are agitating for better financial and social safeguards for all. None, naturally.)
I'm considering "actual power", rather than "actual income".
Given that (allegedly) "your salary" won't be the answer for a significant chunk of the population soon, and all that money will instead (allegedly) go to the bosses doing the firings, and the AI companies they employ instead.
Perhaps what's happening is that, in their attempts to reach a personal all-time high in their bank accounts, the ultra-wealthy are destroying value and economic systems en masse with little regard to the efficiency of their money-siphoning process?
It's kind of like a drug dealer selling brain-burning addictive substances to a few people on a street. Sure, they're going to extract a person's life savings to date, and whatever money that person can steal once they're addicted, but that value pales in comparison to what that person could have made over their career, what it could have made if properly invested, the cost of law enforcement to deal with these addicts, the cost of the stuff they destroy in their quest to get money to buy drugs, the opportunity cost of them not raising their kids to be productive members of society... It all just snowballs, all so some asshole can make a few bucks.
The ultra-wealthy are doing that shit where people burn acres of pristine forests to get some biochar -- but to the entire world.
Make lobbying illegal, I don't understand why it's normalized.
Inequality is going to continue to increase until society collapses. If we want a better world we need to prepare for this eventuality by building avenues of popular action to return power to the people. Once the oligarchs have fucked up enough people’s lives, popular action becomes a realistic way out of this mess.
Or until actual people take the billions of dollars sitting behind those weak man-children. The US has fewer than 1000 billionaires now, and more than 300,000,000 people. That seems like a solvable problem.
Just wait ... in two weeks ...
It seems like a lot of people want a revolution so that they can rotate who will be able to take advantage of the vulnerable.
What are the suggestions for something better? I don't see a lot.
I'd like to see more suggestions of how things could work.
For example:
The Government could legislate that any increase in profits attributable to the use of AI is taxed at 75%. It's still an advantage for a company to do it, but most of the gains go to the people. Aggressive taxation like this is most often criticised on the basis that it will stifle growth, but this is an area where pretty much everyone is saying things are moving too quickly, so that's just yet another positive effect.
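Back-of-the-envelope, that proposal amounts to something like the sketch below. Everything here is hypothetical: the function name, the numbers, and especially the idea that a profit increase can be cleanly attributed to AI at all.

```python
def ai_windfall_tax(baseline_profit, current_profit, ai_share, rate=0.75):
    """Tax owed on the portion of a profit increase attributed to AI use.

    baseline_profit: profit before adopting AI
    current_profit:  profit after adopting AI
    ai_share:        fraction of the increase attributed to AI (0..1)
    rate:            tax rate applied to that attributed increase
    """
    gain = max(current_profit - baseline_profit, 0.0) * ai_share
    return gain * rate

# A firm whose profit rises from $10M to $14M, with the whole $4M
# increase attributed to AI, would owe $3M and keep $1M of the gain.
print(ai_windfall_tax(10e6, 14e6, 1.0))  # 3000000.0
```

The hard (arguably impossible) part is the attribution: `ai_share` is exactly the number companies and tax authorities would fight over.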
Alternately you could criticise their arguments instead of the people, and suggest an alternative.
I'm also not entirely certain that influencing public policy is something that is inherently bad. I know if I were deaf, I would like to have some influence on public policy about deafness issues.
Just a thought, what do you think?
Tax AI is the answer.
If you want to use LLMs, you can either use cloud resources at what I think are really reasonable per-token prices compared to the value, or to set up your own server with an open-weights model at a comparable level of quality (though generally significantly slower tokens/s). In any case, you absolutely don't have to pay OpenAI/Anthropic/Google if you don't want to.
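The cloud-vs-local trade-off above is mostly arithmetic: per-token cloud pricing against amortized hardware plus electricity. A minimal sketch, where every price, lifetime, and power figure is a made-up placeholder rather than a real quote:

```python
def cloud_cost(tokens, price_per_mtok=3.00):
    """Cloud cost: pay a flat price per million tokens."""
    return tokens / 1e6 * price_per_mtok

def local_cost(tokens, hardware_price=2000.0, lifetime_tokens=2e9,
               power_per_mtok=0.05):
    """Local cost: hardware amortized over its useful lifetime of tokens,
    plus electricity per million tokens."""
    amortized = tokens * (hardware_price / lifetime_tokens)
    power = tokens / 1e6 * power_per_mtok
    return amortized + power

# At these hypothetical numbers, 100M tokens cost $300 in the cloud
# vs. ~$105 locally; at low volumes the cloud wins because the
# hardware never pays for itself.
```

The crossover point depends entirely on volume and on how long the hardware stays competitive, which is why neither option dominates.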
I would rather claim that this is a proper description of shadow libraries [1].
[1] https://en.wikipedia.org/wiki/Shadow_library
Huggingface, Swartz et al have done more social/political good for this world than billions have.
Even local AI concentrates power in the hands of a few, the few who can afford the hardware to run it, and the few who have the luxury of enough time and energy to devote to engaging with the intricate, technical rabbit hole of local models.
There is something else that needs to change, which everyone is reluctant to admit or is struggling with internally.
That's ok; it's called conscious evolution. It hurts, but it will be ok someday. It's generational, so progress is always slower than one would hope. Just know that every step in the right direction is one, even if the entire world seems to disagree. Keep pushing for what you believe is right, and hopefully that's something which is not infringing on other people's capacity to live a happy life.
This statement is not decoupled; if anything, it is a more generalized one, as it does not point at any cause or causes for livelihoods to be taken.
I'm not sure this moat is inevitably perpetual. It's likely computing technology evolves to the point of being able to run frontier-level models on our phones and laptops. It's also likely that with diminishing marginal returns, future datacenter-level models will not be dramatically more capable than future local models. In that case, the power of AI would be (almost) fully democratized, obviating any oligarchic concentration of power. Everyone would have equal access to the ultimate means of production.
You are right that AI can be a fully democratized commodity. The problem is that the current wealth inequality is not the result of AI. Musk became a trillion seeking oligarch not because of AI. It is because the entire financial system is designed to extract wealth from everyone and concentrate at the top. Democratic AI is not in their interest. There will be violence, but not because AI is supposedly a catalyst of inequality. It will be violence from the rich towards the poor, because democratic AI is not acceptable for them.
Unless the rich somehow manage to completely stifle the progress of consumer-level computing advancement (all chip manufacturers would just collude to quit selling to consumers?) and exert an iron-fisted control over the dissemination of software (when has this ever worked?), I'm not sure how they could control the democratization of AI.
Judging by the gleeful texts of CEOs, collapsed hiring, internal policy changes and pushes, and the additional decades of centralized political control, it's clear this is going to be even worse.
For example, the spinning jenny, overnight, basically put an entire craft industry of spinning and weaving into question. Probably more dramatically than anything Claude Code ever did.
It took A LOT and several world wars for brief periods of normalcy post WW2 - probably the exception, not the rule.
1 - https://industrialrevolutionspod.com/
When most engineers and Marvel fans watched Tony Stark in the Avengers films collaborating with Jarvis, they thought of Jarvis as "an AI with Google's knowledge that I can interact with". It's true that we're close to that level of interaction. However, the ultimate goal is to get as much as possible automated in Jarvis, to the point where Tony Stark is not needed, or Tony Stark can be replaced by anyone with a mouth.
In this example, Jarvis isn't the goal but a checkpoint. The goal is a genie, providing software and research to anyone who is loaded with money, and knows how to rub the metaphorical lamp the right way.
Personally, the tools don't need to change hands at all. They are already in the hands of people who are deploying them at a scale to serve goals I cannot and do not support.
The people running AI companies right now are some of the most evil motherfuckers on the planet.
If we thought of all of this as "stochastic data systems", then our heads would be in the right place: we would think about it just as "powerful software" that can be used for good or bad purposes, where the negative externalities derive from our use of it, not from some inherent property.
Labour displacement leads to an erosion of standards of living, and in a world that ties purpose to work, it is an existential threat on a very practical level.
It was always going to be met with violence once it became more than a curiosity for tinkerers.
a) Decouple the value of human life from labour.
b) Watch as the value of human life rapidly approaches zero.
---
Though I'd expand this by adding "technically alive" is not a very good standard to aim for. Ostensibly we're already heading for something like poverty level UBI + living in pod + eating the proverbial bugs. We need a level above that!
A great exploration of the pitfalls of "preserve humanity" as a reward function is the video game SOMA. I think you also need "preserve dignity" to make the life actually worth living.
(Path `a` is not without its pitfalls: what lack of survival pressure might do to the human culture and genome, I leave as an exercise for the reader! But path `b` I think we already have enough examples of, to know better...)
You forgot C: Butlerian Jihad. mass outlaw AI research, AI usage, AI building, AI infrastructure, on penalty of death
It may not be a good option but it's there
The two biggest labor displacements in human history were the agricultural and industrial revolutions, both of which resulted in enormous gains in human living standards. Can you think of a mass labor displacement that resulted in an overall erosion of living standards? I cannot.
Then there's the minor issue of AI deciding to just wipe us out because we're in the way.
Taking everything together, AI more powerful than that which currently exists must not be created. This needs to be enforced with an international treaty, nuking data centers in non-compliant states if need be.
How much truth there is to it we don’t know for sure. But it’s not something to be ignored.
— in the 1960/1970s, when compilers came out. "We don't need so many programmers hand-writing assembly anymore." Remember, COBOL (COmmon Business-Oriented Language) and FORTRAN (FORmula TRANslator) were marketed as human-readable languages that would let business professionals/scientists no longer be reliant on dedicated specialist programmers.
— in the 1980s/1990s, when higher-level languages came out. "C++ and Java mean we don't need an army of low-level C developers spending most of their effort manually managing memory."
— in the 1990s/2000s, when frameworks came out. "These things are basically plug-and-play, now one full-stack developer can replace a dedicated sysadmin, backend engineer, database engineer, and frontend engineer."
While all of these statements are superficially true, the result was that the world produced more software (and developer jobs) than ever, as each level of abstraction freed developers from having to worry about lower-level problems and instead focus on higher-level solutions. Mel's intellect was freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex but also much more capable, and thus much more common.
While this time with AI may truly be different, I'm not holding my breath.
[0] http://catb.org/jargon/html/story-of-mel.html
I don't disagree that we tie purpose to work and severing that tie will have negative societal consequences, but it is far more impactful that we tie the ability to continue to exist to work (for anyone not lucky enough to already be wealthy).
If I suddenly became unemployable tomorrow I'm positive I could find alternate purpose in my life to fill that gap, I already volunteer for various causes and could happily do more of the same to fill in the gaps left by lack of work. What I couldn't do is feed myself, keep myself housed, and get medical care (especially in the US, where this is very directly tied to work).
The really big fuckup we are committing as a society in the US (may or may not apply to each person's country individually) isn't just this looming threat of massive labor displacement due to AI, it is that instead of planning for any sort of soft landing we are continually slashing what few social safety nets already exist. We are creating the conditions for desperation that likely will result in increasing violence as outlined in the linked post.
That's not to say we should just throw up our hands and accept every social injustice. But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.
You’re right. Instead of implying, we should be taking active steps to do it.
The biggest problem we currently have with billionaires is that they are now so rich that the world becomes like a game to them and some of them are deliberately pushing us to a dystopia where non-billionaires become functional slaves (c.f. Amazon workers).
Not to put too fine a point on it but this was basically how the Japanese post war economic miracle was achieved.
In this case it was America which ordered the Japanese oligarchy to be stripped of its wealth.
We've had decades of propaganda telling us that this is the worst thing we could do for economic growth though so it's natural to doubt.
Cryptocurrency is an interesting technology with some niche use cases, but it was pitched as replacing the entire money system. LLMs are extremely useful for certain types of work, but are pitched as AGI ending all work. Etc.
[1] https://en.wikipedia.org/wiki/Utility_monster
It has nothing to do with society; there is infinite demand for medical care. The upper limit is whatever it takes to live until the universe's heat death in good health. That takes a lot of resources.
However much society spends on medical care, there is always more that could be spent. The modern era has the best, most affordable medical care in history and people are showing no signs of being satisfied at all.
While war spending generally just causes pain for no gain it doesn't change the fact that there will never be enough available to satisfy people's demand for medical care. Every single time people get what they want they just come up with a new aspirational minimum standard.
Humanity has taken control of its own evolution and no longer relies on natural selection to be the driving force for change. Using evolution as an excuse to make bad and immoral choices is a poor argument and should be left back in the Stone Age.
The Soviet Union lost due to an inferior societal model, but this path, too, strays too far from what was once relatively sustainable. The American dream is now a parody of itself, as it takes ever more just to keep up with the rest. I could go on about the irony of wanting to escape the pit while refusing to acknowledge that the pit is the 99% of the U.S. -- not the Altmans, Bezoses, Musks, or Trumps, or their hordes of peripheral elites.
Point being, the model doesn't work _today_ with its cancerous appetite and correspondingly absurd neglect of the human, _any_ human. We can't have humanism and the kind of AI we're about to "enjoy".
The acceleration of wealth disparity may prove to be nearly geometric, as the common man is further stripped of any capacity to effect change on the "system". I hope I am wrong, but for all their crimes, anarchy, and, in a twist of irony, inhumane treatment of opponents, the October revolutionaries in Russia, yes, the Bolsheviks, were merely a natural response to a similar atmosphere in Russia at the turn of the previous century. It's just that they didn't have mass surveillance used against them in the same capacity our gadgets allow the "governments" today, nor were they aided by AI, which is _also_ something that can be used against an entire slice of the populace (a perfect application of general principles put into action). So although the situation may become similar, we're increasingly in no position to change it. The difference may be counted in _generations_, as in it will take multiple generations to dismantle the power structures we allow to be put in place now, with the Altmans etc. These people may not be evil, but history proves they only have to be short-sighted enough for evil to take root and thrive.
Sorry for the wall of text, but I do agree with the point of the blog post in a way -- demanding people become civilised and refrain from throwing eggs (or Molotovs) on celebrities that are about to swing _entire governments_, is not seeing the forest for the trees.
There's also no precedent, in a way -- the historical cataclysms we have created ourselves have been on a smaller scale, so we're spiraling outwards, and not all of the tools we think we have are going to have the effect required to enact the change we want. In the worst case, of course.
Violence is not a panacea, but often, the outlet.
Yes, we (the majority of sane people) all know that violence is not the answer, yada yada yada. Doesn't matter. It will happen anyway. Saying "it shouldn't happen, it does not solve X" will not stop it from becoming an outlet for frustrated people.
It’s true that you or I aren’t likely to do anything about school shootings. But I’m not sure it follows that nothing can be done.
The fact that we're using AI killer robots to wipe each other out in droves doesn't bode well for that future does it...
Why do we watch Olympic runners, when cars on your average city street easily exceed Usain Bolt's top speed on their morning drive to Starbucks? Why do we watch the Tour de France, when we can watch Uber Eats drivers on their 150cc scooters easily outpace top cyclists? I'm sure within a couple years a Boston Dynamics robot will be able to out-gymnast Simone Biles or out-skate Surya Bonaly. Would anyone watch these robots in competition? I doubt it. We watch Bolt, Biles, and Bonaly compete because their performance represents a profound confluence of human effort and talent. It is a celebration of human achievement, even though that achievement objectively pales in comparison to what our machines can accomplish.
I think the same is true for other aspects of human creativity and labor. As we are able to automate more and more, we will place increasing importance on what inherently cannot be automated: celebration of our fellow humanity. Another poster wrote that "bullshit jobs" [0] exist primarily because we value human contact [1]. I am inclined to agree.
[0] https://en.wikipedia.org/wiki/Bullshit_Jobs
[1] https://news.ycombinator.com/item?id=47738865
Big sports events are the "circenses" part of "panem et circenses" [1]. Fun fact concerning this: the German word for "entertainment" is "Unterhaltung"; thus it can be argued that the purpose of entertainment/Unterhaltung is "unten halten" (to keep at the bottom), i.e. to keep the mass of the populace at the bottom, or in other words: to prevent the mass of the populace from coming up.
> Would anyone watch these robots in competition?
I have seen robot fight competitions both live and in videos, and I have to admit that these are not boring to watch.
So yes, with proper marketing I can easily imagine that lots of people would love to see broadcasts of some robot competitions.
--
[1] https://en.wikipedia.org/wiki/Bread_and_circuses
When chess engines started becoming really good, some people worried that competitive chess would die. Today, grandmasters stand no chance against a smartphone, and yet, chess popularity is at an all time high.
A comment I wrote some time ago (https://news.ycombinator.com/item?id=47587863). Aside from a very few at the top, I have seen some chess players express regret in a very nostalgic way.
People in the chess world keep hurling allegations at each other, and we lost a star (rest in peace, Daniel Naroditsky) because of it. The current world champion himself is struggling under all the pressure put on a 19-year-old.
We enjoy playing against each other but man it is competitive if you wish to feed families.
Most of us play chess for leisure. I am unsure what a world where everyone does something akin to chess competitively (i.e. for money, as we wish to feed our children and ourselves) would look like.
One could say something similar to UBI might be needed, and then we all play chess at leisure, but I don't think that is what most people propose when they mention the example of chess.
All of those sports make intuitive sense to me, I really don't get why we make such a big thing of balls though.
F1 is somewhat about which company can build a better car. But any real improvements seem to invariably lead to a rule change that bans that improvement in future seasons. So you are back to drivers being the most visible differentiator
So, sure, there will be space for some human achievement for the sake of it, but fewer and fewer people will make a living off it.
They are not "bullshit jobs"
They will become so only after the day when AI "help" and "support" is actually better than talking to a human.
Which is not happening anytime soon, possibly never. Call me when it happens
Past some point, if you are good at what you are doing, the AI will stop helping and become a burden, because you will want precise control, and AI in its current form (deep learning) is not good at that.
There is a reason we talk about "AI slop", you simply cannot let an AI make creative decisions and expect a good result.
By creative I don't just mean artistic. For code, AI works for the least creative tasks, like ports, generic-looking CRUD apps, etc...
As for work, we have already eliminated most of the need for human work. By "need", I mean survival: food, shelter, that kind of thing. Most of human production goes to comfort, entertainment, luxury, etc... We will find stuff to do that isn't bloodshed. In fact, as time has gone on, we have spent more on saving people than on killing them, judging by the global increase in life expectancy. Why would AI reverse the trend?
There's still space for creativity, novelty, invention and human intuition.
40 years ago, there was a market for:
.. sure, there are still people with newspaper subscriptions, or DSLR cameras. But it's become a niche market. Those things have been replaced by your phone and a "free" service. The same thing will happen for all the other markets that AI will gradually eat. Sure, you can find a human who can do better. But that costs $90/hour and requires finding someone, negotiating a contract, etc. When people can do something good enough in 30 seconds with something they already have access to, and move on with their life, then that's what they'll do.
So just raising the floor will have a big effect on society.
We haven't needed the overwhelming majority of human creativity. We still paint and play guitar even though it has no economic value. I think we'll continue to do these things regardless of AI.
> and work
This is another story.
Are the only options here being a good and "useful" worker/consumer, or a violent, irrational thug? Is there nothing else you can imagine?
People also need their lives to have value. We are social animals. As a generalization, there is a strong desire to be (viewed as/able to view themselves as) a contributor to the community.
These don’t have to be linked: we have (significant numbers of!) stay-at-home parents and philanthropists and retired community workers. But in our current values system, it is often linked - having a job in the household is viewed as a moral good. It might be hated, but it’s at least “contributing” something.
If this goes away, and we have millions completely adrift? With no structure to contribute to? Even with the largest welfare expansion in history, I think we’re preparing for a very turbulent society.
But what I worry about sometimes is when you snatch that away, then you just lead to stress over basic existence.
> If this goes away, and we have millions completely adrift? With no structure to contribute to? Even with the largest welfare expansion in history, I think we’re preparing for a very turbulent society.
Please look around and just try to remember how much has happened in the last year or two. We are already in a turbulent society, but yes, I also feel like this isn't the end; the cat is sort of out of the bag, and the world has to prepare itself for even more turbulence and radical change.
I don't think we're anywhere near that point.
I can see two major delaying factors here:
1. Current-generation LLM technology won't scale to true AGI. It's missing a number of critical things, and a lot of effort is being spent fixing those limitations. Until they are overcome, humans will be needed to "manage" LLMs and work around their limitations, just like programmers do today.
2. Generalist robotics is far behind LLMs for multiple reasons, including insufficient sensors and fine motor control. This would require multiple scientific and engineering breakthroughs to fix. Investors will, presumably, spend a large chunk of the world's wealth to improve robotics to replace manual labor. But until they do, human hands will still be needed in the physical world.
The real danger is if AI passes a point where it starts contributing substantially to its own development, speeding up the pace of breakthroughs. If we ever hit that tipping point, then things will get weird, and not in a good way.
- some jobs will stay with humans even when AI would be better at it. We already see a lot of this with even with pre-AI automatisation. Neither markets nor companies are perfectly efficient
- at the point where AI is better than the average human, half of all humans are still better than AI. For companies or departments built around employing lots of average people the cutover point will be a lot earlier than for shops that aim to employ the best of the best. Social change is inevitable long before the best are out of work
- the actual benchmark for "replacement" is not human vs machine, but human plus machine vs machine alone. But the difference doesn't matter much, because efficiency increases still displace workers
- I don't think robots will advance enough to meet this timeline. This is not just a software issue. Humans have an amazing suite of sensors and actuators. Just replicating a human hand is insanely complex. Walking, jumping robots are crude automatons in comparison. We can cover a lot with specialized robots, but we won't replace humans in physical jobs in 20 years
I think we are as far from it as we were 10 years ago. Or 100 years ago. I think LLMs are a dead-end technology. Useful, but one that won't get anywhere beyond what it is.
But that's the thing, "personally", "I think", etc. Not much of a debate to be had there.
AI making humans obsolete is not really something that causes me any anxiety.
The funny thing is that I am a sort of misanthrope. And in that, in this forum, I seem to have a lot more respect and optimism for human potential and ingenuity than the majority here.
Yeah, this is not happening anytime soon. Have you even looked at AI-generated code or text? AI is just a dumb parrot, it's no match for human effort and creativity even in these "easy" domains.
The business case for AI generation is just being able to generate huge amounts of unusable slop for next to nothing. For skilled workers it's a minor advantage in that they get a sloppy first draft that they can start the real work on - it makes their work a bit more creative than it used to be, by getting rid of the most tedious stuff.
You really need to look again. If you're still manually writing code you have your head in the sand.
AI can produce better code than most devs produce. This is true for easy stuff like crud apps and even more true for harder problems that require knowledge of external domains.
I'm not sure about other devs, or even their number, but AI can most definitely NOT produce better code than I can.
I use it after I have done the hard architectural work: defining complex types and interfaces, figuring out code organization, solving thorny issues. When these are done, it's time to hand over to the agent to apply stuff everywhere following my patterns. And even there, SOTA models like Opus make silly mistakes; you need to watch them carefully. Sometimes they lose track of the big picture.
I also use them to check my code and to write bash scripts. They are useful for all these.
That's fine, and useful, but you're really putting a ceiling on its potential. Try using it for something that you aren't already an expert in. That's where most devs live.
Even expert coder antirez says "writing the code yourself is no longer sensible".
https://antirez.com/news/158
It just makes you MORE of whatever it was you already were.
They're doing things now that they either flat out could not do before, or that would have been a giant mess if they had (I realize they still can't really do it now; AI is doing it for them).
But individuals can’t fight with the trend. Might as well reduce costs/debts and prepare to go into the mountains for a few weeks once SHTF.
Meanwhile
https://www.reuters.com/world/middle-east/how-many-people-ha...
> U.S.-based rights group HRANA said 3,636 people have been killed since the war erupted. It said 1,701 of those were civilians, including at least 254 children.
(Mentioning this specifically because we know the DoD is using AI)
Coincidentally that's literally the exact same evidence cited to prove the existence of Saddam's WMDs just before launching an entirely different unprovoked attack.
That was just an unhappy mistake though, this time it's totally legit.
Let’s not parrot that media propaganda.
Iran has admitted outright to 6k deaths, by the way.
The US must have several dozen spy satellites pointed at Iran. We get various imagery to show us successful strikes. Where are the images of the mass slaughter in the street?
The number I keep seeing is 30k killed. That's not an easy endeavor over the course of a week without big logistical hurdles. The trucks, the digging equipment, the furnaces to burn the bodies, all should have some visible trace that the US gov could point to as proof.
Yet all we got is a "trust me bro".
WMD all over again.
Or are we just arguing over 20k, 30k, 50k?
Just want to clarify. Since some people argue Covid never happened, and some just argue the total deaths wasn't really that high.
There is a sliding scale between "I sound like a raving crazy person" and "I'm just splitting hairs."
>Khamenei acknowledged that "thousands of people" had been killed during the protests, blaming American president Donald Trump for the massacre and calling all protesters "rioters and terrorists" affiliated with the United States and Israel.[20]
you can fuck right off with this atrocity denial
Plus the labs themselves, of course.
And the other side, the “pause/ban AI” crowd, also sounded impractical, as the vested interests of governments and private industries will not really let it happen.
Sorry for yapping, it might be that I’m looking at the wrong sources.
The government is as well, to a much smaller degree, but the fact remains that there are too many unknowns right now to do anything concrete with any great level of confidence.
We tried UBI-lite™ during COVID and inflation exploded, so unless the economy has already changed significantly, that's obviously not going to work.
Humanity has tried central planning many times, and that has blown up spectacularly every time, so there is too much risk there IMO, and anyone who thinks otherwise at this juncture is just irresponsible.
Markets are probably the way, but that requires dynamics to settle into an equilibrium beforehand, because legislation is just too slow to react dynamically.
I think the hard truth is, a lot of people are just gonna have to fall through cracks for a while if we don't want to mess things up more than we fix them, and I say this as someone without a plan B for selling my own labor.
Even if I support UBI morally, there isn’t even local appetite for it, let alone a global one. And you’ll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
Probably not the scale you imagine but there have been plenty of tests.
"Compatible with the current version of capitalism" -- the whole point of UBI is to create a new form of capitalism
Polarizing doesn't mean complicated. There are people against it out of ignorance, greed, or both; it's certainly not more complicated than that.
> And since then, there hasn’t been a single large scale test of the system to see if it can be compatible with the current version of capitalism that’s ran in the most of the world.
Because people keep fighting against it, because it's scary scary sOcIaLiSm.
> Even if I support UBI morally
As you should, there are no moral arguments against it.
> there isn’t even local appetite for it, let alone a global one
I would think the majority of the population struggling to pay for groceries would disagree.
> And you’ll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
No reason to think UBI would cause inflation at all, actually.
In any case, this really is the answer. You're worried about disruption due to AI taking jobs, but the only reason there is a problem is because AI will drastically increase inequality by letting rich people and corps become even richer. You want to solve the issue, you solve the disparity by making them give back their fair share. Like I said, simple.
Lovely writing. I once knew someone whose surname was HorsFELL, and now I wonder if they were related.
So yeah _we_ will be fine, but some of us definitely won't, and with the growth in our numbers on Earth, the proportion of martyrs may be growing. Quantifying personal suffering is not possible, especially if the prospect is death.
Anyone pish poshing war should go fight in one, and then let me know their opinions.
Because World War I was fine, World War II finer....
Pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truthiness of every individual complaint about AI. Obviously it's good to ensure claims are factual! But I believe it misses a broader point that people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.
Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job will I be compensated, will we protect ourselves against x-risk, etc).
On my side, the biggest concern is the lack of transparency about ecological impact. This is not strictly related to LLMs, though; data centers are not new, and the concerns about people keeping a leverageable level of control through distributed power are not new either.
AI is killing writing, music, art, and coding. I've done all of these voluntarily because I simply enjoyed them
Meanwhile the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI
Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
Seems like a complete misallocation of capital if I'm perfectly honest
This is one of the first parts LLMs tried to automate. They were literally released in a form of chatbot. Whether it succeeded is another question.
> Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
I'm not sure about musicians specifically, but in the whole past decade studios have been complaining how costly it is to make AAA games. And the cost mostly came from art asset side.
> This is one of the first parts LLMs tried to automate. They were literally released in a form of chatbot. Whether it succeeded is another question.
I don't think that's right. They tried to automate customer support dealing with me, not me dealing with customer support. The goal is to reduce costs of serving customer support even if it results in the customer doing more labor than a customer support professional would need to do to fix their problem, or the customer just living with their problem.
Obviously both parties would be happy with a result where I get what I need easily and for free, but the company is also generally happy if I live with it or expend a lot of effort solving it myself.
In any case, in perhaps hundreds of interactions with chatbots accumulated over many years, I have never encountered even one where the chatbot was useful; they were always just difficult-to-pass obstacles in the way of reaching a human who could actually solve the problem.
To be honest, even in the case when some services still had humans answering the calls, those were never more helpful than the chatbots, but at least when speaking with humans it was much easier to convince them to transfer the call to a competent person, which with chatbots may be completely impossible.
These things generally have self-service options, but many many people are uncomfortable with them and would rather have an agent solve it for them.
Consider that a lot of users nowadays only have a cell phone, no PC. It seems like an edge case consideration but it's really not.
At least today, LLMs make bad creative writing, music, and art. They’re automating sweatshop work that, in an alternative timeline, goes to Fiverr-esque contractors who accept the lowest wages and sacrifice quality for efficiency in every way.
LLMs make developers more efficient but can’t fully replace them. This reduces jobs, but so did better IDEs, open-source libraries, and other developer improvements.
> Meanwhile the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - is far from being automated by AI
LLMs can at least theoretically do these things. I’ve heard people use them to mass-apply to apartments and jobs, and send written customer complaints then handle responses.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it?
There’s no “capital need”, but a benefit of Suno is that it lets individuals who otherwise don’t have the skill make catchy songs with silly lyrics or try out interesting genres. And the vast majority of top artists are still human, although most streaming revenue has already gone to a few celebrities who seem to rely on looks and connections more than musical talent.
The fact that people are using it to flood the world with slop is a hyperscaled continuation of the overabundance and discovery problems we already had, but that doesn’t mean that writing is dead or dying.
The technology simply doesn’t have the capabilities right now, and even if it develops them, what will be put to the test is whether literature is about the artifact or the connection between the author and other humans.
Customer support is kind of something you can use AI for; most companies will foist you off to some system of exchanging written messages, which is annoying, but then you can use an AI to write your side of the conversation. It’s ill-mannered to do this when you’re interacting with actual people, but customer support is another story.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
People didn’t know what LLMs would be capable of until after they were invented. Cheap music generation turned out to be easy once we had cheap text generation, and cheap text generation turned out to be a tractable problem.
But recorded music was a crisis. And it did tempt a lot of people into supporting fabulously abusable, rich-enriching "intellectual property" law as a means of financing art.
Rich people are lobbying to capitalize on this crisis as well.
Given the slow-burning but growing resentment against the people who are profiting from this inequality (popularly the “billionaires”, but in reality broader than that), I wonder to what extent they are supporting the anti-AI message as deflection?
As in reality, many lower-paid jobs are totally safe against this generation of AI (nurses, care-workers, builders, plumbers - essential skilled manual workers) whereas the language-based mid-level jobs are hugely at risk.
So if there’s an inequality-driven backlash, it should be directed not at AI, but at the real causes. In contrast, when swathes of largely irrelevant mid-level management, marketing and HR drones lose their jobs to Claude 5.7, they are the ones who should attack the datacenters. Not that it will help.
If AGI emerges from this dataset, it will continue on as an ectoparasite farming human user markdown data and viewer engagement.
Note, current "AI" models nuke humanity 94% of the time in war games, and destroy every host economy simulation.
Grandpa has your credit card, and is already at the casino. =3
I hate cars way more than I hate AI, but relieving horses of the burden which they carried and the gruesome lives they lived... that's not one of my objections.
If AI can do for humans what cars did for horses (but without the flooding cities with traffic violence part), I'll feel just fine about that.
Nothing, really?
I think people are aware that speech can be an act, and that some violent acts must be resisted with reciprocal violence. (That's why we have "incitement to violence" as a limitation on free speech, for instance.)
Are we at that point? Maybe not. But I think it's a poor imagination that says it can never happen.
I'd argue that the unwillingness to commit violence in certain situations is actually a character flaw.
If someone threatens my child with physical violence, an unwillingness to commit violence on my child's behalf isn't better morality; it's cowardice.
All this to say, I agree that the violence against Sam Altman in this particular situation seems unnecessary and ultimately not helpful to anyone.
So why isn't there huge opposition in the USA against the wars that the USA started (currently Iran; before that Libya, Yemen, Syria, Somalia, Iraq, Afghanistan, ...)?
The only famous exception with real cultural impact that I am aware of, where there was huge opposition to a war in the USA, was the Vietnam War.
I'm less concerned about AI becoming the Skynet and killing humans and more concerned about AI making the world so miserable that we'll be killing ourselves and each other.
> Nothing that Altman could say justifies violence against him. This is an undeniable truth. But unfortunately, violence might still ensue. I hope not, but I guess we are seeing what appears to be the first cases.
Not arguing with you, but the author, I don't understand this line of thinking.
If Altman introduces a technology that effectively halts the upward mobility of a large portion of the population, how does that not justify violence? Saving up for a house but now there's no work. Your dreams and aspirations are second to shareholder value. The police are already there to protect the shareholders, not the average civilian.
What recourse is there? The money in politics limits the effect voting can have. You can't really opt-out of the system. Why does Sam Altman get this nice little shield where none of his actions can have a negative consequence?
> And then, and I’m sorry to be so blunt, then it’s die or kill.
Of course, by talking about the possibility, despite asserting my disapproval of it, I am sowing seeds, but I assure you that's certainly not my intention!
Automater's dilemma: the labor that is removed from production due to automation can no longer sustain the markets that the automater was trying to make more efficient.
By optimizing just the production half of the economy and not the consumption half, you end up breaking the market.
Good luck doing nothing of value in a restaurant with 20 employees.
The parent post specifically mentioned large organizations, where the "employer" is not some person who hires and pays employees from their own funds. Hiring and personnel management are done by middle managers with their own interests and incentives, which can differ substantially from those of the owners or capital providers.
[0] https://old.reddit.com/r/AutoHotkey/comments/1p7xrro/have_yo...
Which I think is a much better take than that of the guy who wrote Bullshit Jobs.
AI will be 'dangerous' because humans will use it irresponsibly, and that's all of the risk.
- giving it too much trust, being lazy, improper guards, and accidents
- leveraging it for negative things (black hats, military targeting)
- states and governments using it as an instrument of control, etc.
That's it.
Stop worrying about the ghost in the machine and start worrying about crappy and evil businesses and governing institutions.
Democracy, vigilance, laws, responsibility are what we need, in all things.
In my view that line of argument is pro-AI hype. It's the Big Tech CEOs themselves who often share their predictions of the end of the world as we know it caused by AI. It's FUD that makes the technology sound more powerful and important than it is.
If anyone knows of anything already happening please let me know.
I think it needs to be a grassroots thing because our government's strategy seems to be "let the shit hit the fan and do nothing about it".
Conversely, The Loudest Alarm Is Probably False[0]. If the idea that you are a pretty levelheaded guy pops up so frequently, consider that it might be wrong. Especially if you are motivated to write blog posts about violence in response to technology you don't like. Maybe you're just not as levelheaded as you think and that could explain the whole thing?
[0] https://www.lesswrong.com/posts/B2CfMNfay2P8f2yyc/the-loudes...
E.g., suppose that 1,000,000 persons believe that a corporation's evil acts destroyed their happiness [0]. I would have guessed that at least 1 person in that crowd would be so unhinged by the experience that they'd make a viable attempt at vengeance.
But I'm just not hearing of that happening, at least not nearly to the extent I would have guessed. I'm curious where my thinking is wrong.
[0] E.g., big tobacco, the Sacklers with Oxycontin, insurance companies delaying lifesaving treatment, or the Bhopal disaster.
If that’s accurate, Luigi Mangione would be the exception that proves the rule. The “unwashed masses” generally want money more than they want to effect change in the world.
A lot of people spend mental energy fantasizing about getting rich off lawsuits. Like, a lot.
And yet,
As in, "all of you".
Including its users.
The people ready to die or kill for the AI, do you already imagine what they are going to be like?
And if you decide to stay behind, nobody will kill you. Old age and disease will take care of that.
I'm not convinced.
The idea that people will revolt, replaying the Luddites' history, has been floated a lot. It's used to diminish all kinds of AI skepticism by framing it as coming from backwards, violent people who don't understand progress. This is the preferred bucket of AI fanboys: frame any disagreement as unreasonable rage.
I think AI companies want a general, dumb, violent popular movement to sprout against AI. On paper, it would be great for them. So far, they have failed to encourage it.
The rest of the article is equally short sighted and plain wrong.
Skynet 4.0.
But shit.
The question is "what do we do now?".
This was not an oversight. To the contrary, it was the goal. Technological feudalism, with people like Altman and Musk becoming the Lords of the world.
> Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
This illustrates my previous point. What they're doing is not a mistake.
> For what it’s worth, the New Yorker piece I’m referring to, which Altman also referred to in his blog post, made me see him more as a flawed human rather than a sociopathic strategist. My sympathy for him will probably never be very high, but it grew after reading it.
It feels like we read two different articles.
If you truly are an intelligent person, would you really find no other ways to use your talents than to inflict harm, exploit others, and make our shared reality a worse place? That would be a waste. I won't get into ambiguous cases and moral relativism. Say we can all agree that some things are "evil": child exploitation is evil. Throwing molotov cocktails at a civilian's house is evil. Sending bombs in the mail is evil.
Now what would you call someone who engages in these kinds of activities when they could easily do something better and more satisfying with their lives? I'd say they're pretty stupid. They're probably good at fooling other people into thinking they're smart, but their behavior shows otherwise.
Take, for example, Ted Kaczynski, a terrorist who is worshipped like a saint and a prophet in certain ideological spheres. Ted Kaczynski is supposedly this 140-IQ genius who saw it all coming and tried to warn us. But if you actually read Industrial Society and Its Future, you can see it's completely incoherent garbage, the kind of stuff I was writing when I was 12 to troll on internet forums. Ted Kaczynski is what a stupid person thinks a smart person looks like.
A smart person doesn't need to be evil, just like a billionaire doesn't need to go shoplifting. I'm not saying that stupid people can't be dangerous. But they should be dealt with for what they are: stupid people, inferior to us, worthy of pity. Not powerful monsters above us that we should fear.
Sam Altman having a Molotov cocktail thrown at his house after Ronan wrote a very long and detailed report on his shady personality isn't just a coincidence, and is likely not organic. Sam needs to be viewed as sympathetic; thank goodness for such a moment where no one was hurt and nothing was actually damaged.
With the exception of rappers, most musicians who die early die from overdoses, suicides, and such (the "27 club" <https://en.wikipedia.org/wiki/27_Club>), as opposed to being murdered.
We are a somewhat violent species, so I agree that almost every significant economic and societal development has the potential to trigger some violence. That said, the jobs that are potentially threatened by AI are nowadays usually done by fairly sedentary people, so I wouldn't expect any large-scale violence, an occasional Ted Kaczynski notwithstanding. Programmers, translators, and painters just aren't used to destroying things in the real world.
It would have been different if AI started to replace drug dealers or the mob.