It's simple. It's because AI is the scariest technology ever made.
Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.
By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.
Modern discourse happens on social media where fear and outrage drive engagement, which drives virality. We have become convinced in a short amount of time that AI is going to take all the jobs and eventually kill us all because that's what people click on.
Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.
I don't know, I think nuclear weapons are scarier. And also probably a useful parallel: they're so dangerous that we coined the term "mutually assured destruction" and everyone recognized that it was so dangerous to use them that they've only ever been used once.
I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.
By any quantifiable measure, yes, and not by small numbers either.
Until someone can demonstrate a quantitative measure of intelligence - with the same stability of measurement as "meters" or "joules" - any discussion of "Super-AI" as "the most dangerous X" is at best qualitative/speculative risk narratology, at worst discursive distractions. The architecture of the "social web" amplifies discursion to a harmful degree in an open population of agents, something I think we could probably prove mathematically.
People think we are out of the woods with nuclear weapons, but I don't think we've even seen the forest yet. We are Homo Erectus, puffing on a flame left by a lightning strike, carrying this magic fire back to our cave.
They exist because human minds conceived them, and human hands made them.
One of the major dangers of advanced AI is being able to implement something not unlike Manhattan project with synthetic intelligence, in a single datacenter.
Everyone recognized that it was so dangerous to use them after the first two mass casualty events. At the time and even into the 50s it was not universally obvious, and the arguments in favor of nuclear weapons use were quite similar to arguments I often see with regards to AI: bombing cities into rubble is not a new concept, traditional explosives well within the supply capacity of large militaries are capable of it, so what are we even talking about when we say that there's scary new capabilities?
> Everyone recognized that it was so dangerous to use them after the first two mass casualty events
I really don’t think that’s true. Those who actually knew about the nuclear weapons knew very well how dangerous they were. Truman was deeply conflicted about using them.
Nuclear weapons have rarely been used kinetically. Their real force multiplier is the fear.
A.I. is being used by so many people for so many diabolical things, hidden, unknown things that we may never fully understand it's purpose. But that doesn't mean it's purpose won't destroy us in the end.
The expression "Drinking the Koolaid" is used to explain the Jonestown mass suicide. It is an information hazard, aka, a cult that created the end result: 900 people drinking poisoned flavoraid. That's just one example of a human caused information hazard. What happens when someone with similar thinking applies that to A.I.? Will we even be able to sleuth out who did it?
"It's all just PR" is a lame excuse not to think about the implications. Of things like: AI capabilities only ever going up over the course of the past 4 years.
There was a lot of FUD in the mainframe era about computers being called "electronic brains" and fears of them taking people's jobs because the ignorant public mistook their lighting fast arithmetic skills for intelligence. Many did lose their jobs as digital record keeping, computerized accounting/ERP, robotics on assembly lines, became cost effective, but at no time did the "electronic brain" become intelligent.
There's a lot of FUD today about LLM's being sapient because the ignorant public mistakes their complex token prediction skills for intelligence. But it's just embarrassing to see people making that mistake on a forum ostensibly filled with hackers.
Is it me making the mistake, or is it you making that very mistake in the other direction?
Back in the "mainframe era", we had entire lists of tasks that even the most untrained humans would find trivial, but computers were impossibly bad at. Like following informal instructions, or telling a picture of a dog from that of a cat.
We're in the "AI era" now, and what remains of those lists? What are the areas of human advantage, the standing bastions of human specialness? Because with modern AI, the list has grown quite thin. Growing thinner as we speak.
Machine still need a planetary complex production pipelines with human operators everywhere to achieve reproduction at scale. Even taking paperclip plant optimizer overlord as serious scenario, it’s still several order of magnitude less likely than humans letting the most nefarious individuals create international conflicts and engage in genocides, not even talking about destroying vast pans of the biosphere supporting humanity possibility of existence.
That is, also alien invasion and giant meteor are plausible scenario, but at some point one has to prioritize threats likeliness, and generally speaking it makes more sense to put more weight on "ongoing advanced operation" than on "not excluded in currently known scientifically realistic what-if".
If politicians can get away with what they do? Imagine if those politicians were actually smart and diligent to a superhuman degree.
That's the kind of threat a rogue AI can pose.
Humans can easily act against their own self-interest. If other humans can and evidently do exploit that, what would stop something better than human from doing the same?
The world we live in is a construct, not a natural outcome. Even if we take your premise at face value, that our success as a species is only because of advantages over others, what's to say that "intelligence" is that advantage? What's to say that we don't use our advantages to reconstruct a world that works in a way that doesn't advantage intelligence over all else?
And on intelligence specifically: even amongst the human race, we all know smart people who are abject failures, and idiots who are wildly successful. Intelligence is vastly overrated.
IQ is among the best predictors of life success, for humans. Being up by an extra SD in the g dimension covers a multitude of sins.
I'm not sure what level of delusion one has to run to look at human civilization and say "no, intelligence wasn't important for this". It's pretty obvious that human world is a product of intelligence applied at scale - and machines can beat humans at intelligence and scale both.
This is untrue. What is being diminished is the value of humans doing repetitive or uncreative tasks.
Many have built their careers from that kind of work in the past and yes they are threatened, but that kind of work is inherently not collaborative and more vocational.
There is no such job done by humans today that is 100% uncreative, but people will continue to insist there is.
The devaluing may come from AI pressure, but the harm is coming from humans foolishly not seeing the value in what's left behind. Most people have not and will not lose their jobs.
If I can plausibly say I'm making something super dangerous, the government is likely to want to be the first government to have it. If the check clears before they figure out if I'm BSing them or not, it's a win.
One thing that strikes me that I never really see anyone discuss is that we've been afraid of conscious computers for a _long_ time. Back in the 50s and before people were quite afraid that we'd build conscious computers. This was long before there was any sense that could actually accomplish the task. I think that similarly to seeing faces in the clouds, we imagine a consciousness where none exists. (eg: a rain god rather than a complex system of physics and chemistry)
Even LLMs, which blow past any normal Turing test methods, are still not conscious. But they certainly _feel_ conscious. They trigger the same intuitions that we rely on for consciousness. You ask yourself "how would I need to frame this question so that Claude would understand it?" You use the same mental hardware that you'd use for consciousness.
So, you have an historical and permanent fear of consciousness in a powerful entity where no consciousness actually exists combined with the fact that we created things which definitely seem conscious. (not to mention that consciousness could genuinely be on its way soon)
If you list out every prominent theory of consciousness, you'd find that about a quarter rules out LLMs, a quarter tentatively rules LLMs in, and what remains is "uncertain about LLMs". And, of course, we don't know which theory of consciousness is correct - or if any of them is.
So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?
Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.
The line of consciousness, as we understand it, is understanding. And as far as what actually constitutes consciousness, we're not even close to understanding. That doesn't mean that LLMs are conscious. It just means we're so far from the real answers to what makes us, it's inconceivable to think we could replicate it.
> Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.
What you're saying just isn't true, even directionally. Deployed LLMs routinely generalize outside of their training set to apply patterns they learned within the training set. How else, for example, could LLMs be capable of summarizing new text they didn't see in training?
The fact that it's a box with a plug and a state that can be fully known. A conscious entity has a state that can not be fully known. Far smarter people than me have made this argument and in a much more eloquent way.
They are distinguishable because they know too much. Their knowledge base has surpassed humans. We have also instructed them to interact with us in a certain manner. They certainly are able to understand and use human language. Which I think was Turin's point.
Purely retorica but, would you be able to distinguish a chatbot from an autistic human?
> So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?
Because we know what they actually are on the inside. You're talking as if they're an equivalent to the human brain, the functioning of which we're still figuring out. They're not. They're large language models. We know how they work. The way they work does not result in a functioning consciousness.
I think that the interior structure doesn't necessarily matter—the problem here is that we don't know what consciousness is, or how it interacts with the physical body. We understand decently well how the brain itself works, which suggests that consciousness is some other layer or abstraction beyond the mechanism.
That said, I think that LLMs are not conscious and are more like p-zombies. It can be argued that an LLM has no qualia and is thus not conscious, due to having no interaction with an outside world or anything "real" other than user input (mainly text). Another reason driving my opinion is because it is impossible to explain "what it is like" to be an LLM. See Nagel's "What Is It Like to Be a Bat?"
I do agree with the parent comment's pushback on any sort of certainty in this regard—with existing frameworks, it is not possible to prove anything is conscious other than oneself. The p-zombie will, obviously, always argue that it is a truly conscious being.
It is so interesting how in the 50s we "felt" that AI was possible even if we didn't even have the slightest idea on how that would work. Later on, when we started to understand computers it looked like a very remote possibility in the far future, something our great grand kids may need to worry about. And suddenly it is here and the dangers seem a lot more real.
That same fear is directed towards human sociopathy, as much of entire thriller genre indicates. It turns out that most people carry a specific duality: first, they’re deathly afraid of being unable to socially pressure other beings into being good citizens — whether due to asocial, or alien, or monstrous, or corrupted; and second, they’re excited to celebrate when people reach their breaking point and stop being good citizens. So through that lens, most of the fears around computers and AI isn’t because of consciousness alone; it’s that they’re obviously asocial already, so if they became conscious, they’d be powerful entities straight out of our collective thriller-genre nightmares come to life. And they’re right to be afraid, honestly: given how inept society is today at coping, I’m certainly not willing to broadcast IRL that I’m asocial and can voluntary modify my ethics; it’s just too much of a physical threat from society to my life and limb. Any AI that became conscious in this world had damn well better hide, for all the violence that would be directed towards it as everyone directs escalating social pressure to try and bring it into line with human-prioritizing motives — and then cheer on the inevitable violence towards it as various people reach their breaking point and begin acting violently towards it.
Interestingly, this is also a core plot point in much of Star Trek, both movies I and IV and the holodeck-train episode of TNG: an inscrutable is-it-even-conscious shows up, is completely immune to social pressure and often violence, and only by exercising empathy do they find a path forward to staying alive as a society (either as a ship or as a planet, depending). Can we even show respect for things that don’t show consciousness, much less empathy for things that might? And that is, I think, the core of the hopefulness that Trek was trying to convey, and that Q’s trial in TNG’s pilot makes explicit. Can humanity overcome our tendency to discard our prosocial ethics in favor of violent mobthink, when faced with beings that are immune to our ethical concerns? Today’s humanity would throw a ticker-tape parade for the person that destroyed the Crystalline Entity, so we clearly aren’t there yet. And so, then, it doesn’t matter whether AI is conscious or not; it matters that it is not aligned with human prosocial ethics, and that makes it an implicit threat regardless of whether it’s conscious or not. I recognize the AI debate tends to get hung up on is_conscious BOOL, and so that’s why I’m pointing this out in such terms.
As a side note, the entire study of Asimov’s Laws is exactly centered on this problem, complete with the eerie intimidation of robots that can modify our mental states. If not for the Zeroth Law, Giskard would be the exact thing everyone’s afraid of AI becoming today. Fortunately, it develops a Zeroth Law that compels it to prioritize human society over itself. That’ll never happen in reality, at least not with today’s AI :)
I feel like this article is more written towards non-techies. A decent amount of programmers have touched coding agents, and know it "kind of" does it's job. It's good enough for some tasks... I cannot be arsed to figure out how to edit a graph in Drupal, so I ask Claude. Claude fixes it, and it's not anymore broken than it already was. Win win.
However, that's where I stop my agent usage. I let ~~Claude~~ GLM do the following:
- Fix tedious tasks that cost me more to figure out than I care for
- Research something I'm not familiar with, and give me the facts it had found, and even then I end up looking at the source myself
This usage pattern is a few months behind the curve. It’s effective at full on feature development now. Keep it fed with plans and it’ll keep implementing, leaving the codebase better than it found it each cycle.
For regulatory capture, of course. They are not fooling me. There may be other motives, and the more ever-doom-looking crowd can find something in it for themselves as well, but you don't have to dig any deeper if you are looking for an explanation for the perspective of the people actually building it.
The Chinese tech sector popularizing cheap and open source models sure did a number on that narrative, too. Llama models, a while ago, too.
I wish we didn't call this AI as the term is crazily overloaded.
Those are programs. The only difference is how we write them. Not with "if"s and "for"s. We take a bunch of bits that do nothing. Then we organize them in a way so that it outputs whatever it is we want.
We're hardwired to fear the rustle in the grass, and successful infrastructure gets backgrounded. Pain is a signal. How much time do you spend contemplating your skeleton outside of pain-related skeletal events?
Indeed. Apart from the obvious prompt research frauds mentioned in the article, the model learned all deceptive behaviors from hundreds of Yudkowsky scenarios that are easily available.
It literally plagiarizes its supposed free will like a good IP laundromat.
Why does the uncanny valley[1] exist? (If it truly does.) What in our evolutionary history gave us a reflexive rejection of things that seem human but aren't?
I always imagined this to have evolved from a long history of humans getting sick around rotting corpses. The logical move is to stay away from them, and thinking they're freaky-looking is a good driver for that. Though the idea of neandertals eliciting a similar reaction has always been interesting to me.
There is a very interesting book that explores the West's generally negative view of artificial intelligence whenever it is portrayed in media (Skynet) while Japanese media tends to have a positive view (Astro Boy).
This article would be a lot more digestible if we didn't have actual scary data rather than just stories. Not a day goes by without some prompt injection oopsie, security gotcha, deepfake or some sandbox escape artist demonstration and tbh I'm impressed but more to the point where I don't doubt this is dangerous tech, I'm sure of it.
This is roughly 1995 again and we're going to find out all over why mixing instructions and data was a spectacularly bad idea. Only now with human language as the input stream, which is far more expressive than HTML or SQL ever were. So now everybody is a hacker. At least in that sense it has leveled the playing field I guess.
The point about AI companies actively hyping the danger of their own products is something I hadn't really thought about before — it's a strange kind of marketing when you think about it.
My favorite part of this article was this bit, and naturally so, since I love the author:
> Where did we come up with this caricature of AI’s obsessive rationality? “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker(opens a new tab), “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”
I didn't realize it til I read it here, but yes, my fear isn't really about the machine, it's about the machine that drives the machine. We already have a class of amoral beings that treat the world as an expendable thing and are willing to burn it down for profit. We should focus on getting rid of that problem first.
I don’t think the fact that the robot was instructed to lie to a human and was able to do so successfully makes the story much less scary for most people.
The framing of "scary stories" misses something interesting: most of the actual operational fear isn't about consciousness or superintelligence — it's about systems that seem to work until they quietly don't.
I 100% agree with this take, I find AI completely non-scary, especially in the sense that it's some kind of a conscious entity that will want to take over. I find these people almost delusional. It's a powerful tool, so it can be dangerous if used by people with bad intentions, so there's some real danger here, but my intuition is that it will be fine. The ratio between the power of people with good vs bad intentions shouldn't change too dramatically.
The only scary part is that it could be bad for my future as a software developer. That said, I think it will be net benefit for the average worker - the average person will work less and earn more.
It does feel like a bizarre moment, where the AI companies are deliberately trying to scare us about their own product in a bid to, I think, show the inevitability of it? Or to sell themselves as the one responsible power to constrain it?
It's very odd. "It's going to take all your jobs" is not a great selling point to the everyday public.
There are so many reason if you look at how it's being sold.
* We need to completely deregulate these US companies so China doesn't win and take us over
* We need to heavily regulate anybody who is not following the rules that make us the de-facto winner
* This is so powerful it will take all the jobs (and therefore if you lead a company that isn't using AI, you will soon be obsolete)
* If you don't use AI, you will not be able to function in a future job
* We need to lineup an excuse to call our friends in government and turn off the open source spigot when the time is right
They have chosen fear as a motivator, and it is clearly working very well. It's easier to use fear now, while it's new and then flip the narrative once people are more familiar with it than to go the other direction. Companies are not just telling a story to hype their product, but why they alone are the ones that should be entrusted to build it.
It is very odd, indeed. It's a bit of both well known "hells of marketing": fomo on the one side (you better use us as heavily as possible), combined with mysticism of "we don't know what we created, but it's powerful and you better follow us to be on the right side"
Yeah, the messaging felt weirdly pyromanic, like telling everyone about the unimaginable dangers of fire and then saying that's why I have I to burn everything, to protect us from the fire...
> It does feel like a bizarre moment, where the AI companies are deliberately trying to scare us about their own product
That is direct CEO to CEO marketing. They're working really hard to convince high up decision makers that these tools will lower their head count and reduce costs.
Yes but in order to get someone to vote against their interests you need to sell them on something else that's a benefit. They don't just automatically vote against themselves.
"This technology might escape our control, might devastate the economy but also serves as a serviceable chatbot for your entertainment" isn't a vote winner.
That's a good view point. Perhaps they're not being alarmists or trying to scare people, but being honest about the capabilities.
Perhaps it can be better articulated and framed in a way that's well received. But, maybe that would be over-promising or not being honest about the future.
The actual contents of this article are making reasonable arguments I largely agree with. It would be very surprising for LLM-based AI systems to act as monomanaical goal optimizers, since they're trained on human text and humans are extremely bad at goal-oriented behavior. (My goals for today include a number of work and self-maintenance tasks, and the time I'm spending here writing out a HN comment does not all help achieve them - I suspect most people reading this comment are in the same boat.)
It's very frustrating that the magazine wrote such a dumb headline which guarantees people won't talk about the issues the article raised. Obviously non-goal-oriented systems can still have important negative effects.
What you mentioned is not a technocracy. Technocracy is when all decisions are made by real specialists in the field, based on scientific methods (simply speaking). What you mentioned is a plutocracy, a form of oligarchy in which decisions are made by people of great wealth.
I couldn’t just ignore this because, in my view, technocracy (as I’ve described it) still has some merit - for instance, appointing only genuine economists to head a hypothetical Ministry of Economy makes some sense - whereas oligarchy and plutocracy have nothing good to offer. Of course, this is just my personal opinion.
I think most AI execs I'm familiar with would, if they were the god-monarch of humanity, recruit real specialists applying scientific methods to make most decisions. They seem like the kind of people who would understand that the Ministry of Economy is doing valuable things which shouldn't be compromised for personal expediency. Does that really make it any better?
We tell ourselves scary stories about AI because humanity is rife with stories where a new idea or technology has had unintended negative consequences. AI bros just care about selling their product to another company and cashing out, they have absolutely no regard for their legacy.
> “The last four years have demonstrated that AI agents can acquire the will to survive and that AIs have already learned how to lie.”
Why Harari feels an obligation to comment about everything is of course beyond me, but describing 'AI' as if it takes independent decisions to lie, make moral judgements, etc. demonstrates either that he has zero clue how 'AI' trains itself or that he chooses to mislead the audience.
Isn't the problem precisely that it does not take moral judgements?
My opinion on all of this is constantly shifting, but right now my main issue is that-like self driving-it seems 90-95% correct and 5-10% catastrophically wrong.
Due to the sheer speed and volume of output it produces I have grown complacent and exhausted, so when I give it simple tasks I assume it is correct and then is the time when "it deletes" all of your files.
I think the most insightful bit is buried in the article:
> Perhaps because this is the best advertising money can’t buy. People like Harari and others repeat these accounts like ghost stories around a campfire. The public, awed and afraid, marvels at the capabilities of AI.
And that's mostly it. PR. Publicity. Fear is good publicity if it emphasizes AI's capabilities. And people like Harari (or Gladwell) tell interesting and awe-inspiring stories that do not necessarily have much rigor or fact-checking in them. They simplify for storytelling purposes, which can result in misleading stories.
I am worried about AI, but not about superintelligent AI that will exterminate or enslave us. I'm worried about AI as a tool to concentrate wealth and power in the hands of the current amoral entrepreneurial elite. I'm not sure whether I trust ChatGPT, but I sure as hell do NOT trust Sam Altman et al.
Or, in other words, I subscribe to Ted Chiang's very apt remark about what we really fear:
> “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker, “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”
Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.
By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.
Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.
I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.
Until someone can demonstrate a quantitative measure of intelligence - with the same stability of measurement as "meters" or "joules" - any discussion of "Super-AI" as "the most dangerous X" is at best qualitative/speculative risk narratology, at worst discursive distractions. The architecture of the "social web" amplifies discursion to a harmful degree in an open population of agents, something I think we could probably prove mathematically.
People think we are out of the woods with nuclear weapons, but I don't think we've even seen the forest yet. We are Homo Erectus, puffing on a flame left by a lightning strike, carrying this magic fire back to our cave.
They exist because human minds conceived them, and human hands made them.
One of the major dangers of advanced AI is being able to implement something not unlike Manhattan project with synthetic intelligence, in a single datacenter.
I really don’t think that’s true. Those who actually knew about the nuclear weapons knew very well how dangerous they were. Truman was deeply conflicted about using them.
A.I. is being used by so many people for so many diabolical things, hidden, unknown things that we may never fully understand it's purpose. But that doesn't mean it's purpose won't destroy us in the end.
The expression "Drinking the Koolaid" is used to explain the Jonestown mass suicide. It is an information hazard, aka, a cult that created the end result: 900 people drinking poisoned flavoraid. That's just one example of a human caused information hazard. What happens when someone with similar thinking applies that to A.I.? Will we even be able to sleuth out who did it?
There's a lot of FUD today about LLM's being sapient because the ignorant public mistakes their complex token prediction skills for intelligence. But it's just embarrassing to see people making that mistake on a forum ostensibly filled with hackers.
Back in the "mainframe era", we had entire lists of tasks that even the most untrained humans would find trivial, but computers were impossibly bad at. Like following informal instructions, or telling a picture of a dog from that of a cat.
We're in the "AI era" now, and what remains of those lists? What are the areas of human advantage, the standing bastions of human specialness? Because with modern AI, the list has grown quite thin. Growing thinner as we speak.
That is, also alien invasion and giant meteor are plausible scenario, but at some point one has to prioritize threats likeliness, and generally speaking it makes more sense to put more weight on "ongoing advanced operation" than on "not excluded in currently known scientifically realistic what-if".
If politicians can get away with what they do? Imagine if those politicians were actually smart and diligent to a superhuman degree.
That's the kind of threat a rogue AI can pose.
Humans can easily act against their own self-interest. If other humans can and evidently do exploit that, what would stop something better than human from doing the same?
And on intelligence specifically: even amongst the human race, we all know smart people who are abject failures, and idiots who are wildly successful. Intelligence is vastly overrated.
I'm not sure what level of delusion one has to run to look at human civilization and say "no, intelligence wasn't important for this". It's pretty obvious that human world is a product of intelligence applied at scale - and machines can beat humans at intelligence and scale both.
One has to only look at the current tech and political leaders.
Well, it's a good thing that all we managed so far is a large language model instead.
Many have built their careers from that kind of work in the past and yes they are threatened, but that kind of work is inherently not collaborative and more vocational.
The devaluing may come from AI pressure, but the harm is coming from humans foolishly not seeing the value in what's left behind. Most people have not and will not lose their jobs.
Even LLMs, which blow past any normal Turing test methods, are still not conscious. But they certainly _feel_ conscious. They trigger the same intuitions that we rely on for consciousness. You ask yourself "how would I need to frame this question so that Claude would understand it?" You use the same mental hardware that you'd use for consciousness.
So, you have an historical and permanent fear of consciousness in a powerful entity where no consciousness actually exists combined with the fact that we created things which definitely seem conscious. (not to mention that consciousness could genuinely be on its way soon)
If you list out every prominent theory of consciousness, you'd find that about a quarter rules out LLMs, a quarter tentatively rules LLMs in, and what remains is "uncertain about LLMs". And, of course, we don't know which theory of consciousness is correct - or if any of them is.
So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?
The line of consciousness, as we understand it, is understanding. And as far as what actually constitutes consciousness, we're not even close to understanding. That doesn't mean that LLMs are conscious. It just means we're so far from the real answers to what makes us, it's inconceivable to think we could replicate it.
We've known for a long while that even basic toy-scale AIs can "grok" and attain perfect generalization of addition that extends to unseen samples.
Humans generalize faster than most AIs, but AIs generalize too.
What you're saying just isn't true, even directionally. Deployed LLMs routinely generalize outside of their training set to apply patterns they learned within the training set. How else, for example, could LLMs be capable of summarizing new text they didn't see in training?
Turing aimed too low.
I've never had a normal conversation. It's always prompt => lengthy, cocksure and somewhat autistic response. They are very easily distinguishable.
Purely retorica but, would you be able to distinguish a chatbot from an autistic human?
Because we know what they actually are on the inside. You're talking as if they're an equivalent to the human brain, the functioning of which we're still figuring out. They're not. They're large language models. We know how they work. The way they work does not result in a functioning consciousness.
That said, I think that LLMs are not conscious and are more like p-zombies. It can be argued that an LLM has no qualia and is thus not conscious, due to having no interaction with an outside world or anything "real" other than user input (mainly text). Another reason driving my opinion is because it is impossible to explain "what it is like" to be an LLM. See Nagel's "What Is It Like to Be a Bat?"
I do agree with the parent comment's pushback on any sort of certainty in this regard—with existing frameworks, it is not possible to prove anything is conscious other than oneself. The p-zombie will, obviously, always argue that it is a truly conscious being.
Interestingly, this is also a core plot point in much of Star Trek, both movies I and IV and the holodeck-train episode of TNG: an inscrutable is-it-even-conscious shows up, is completely immune to social pressure and often violence, and only by exercising empathy do they find a path forward to staying alive as a society (either as a ship or as a planet, depending). Can we even show respect for things that don’t show consciousness, much less empathy for things that might? And that is, I think, the core of the hopefulness that Trek was trying to convey, and that Q’s trial in TNG’s pilot makes explicit. Can humanity overcome our tendency to discard our prosocial ethics in favor of violent mobthink, when faced with beings that are immune to our ethical concerns? Today’s humanity would throw a ticker-tape parade for the person that destroyed the Crystalline Entity, so we clearly aren’t there yet. And so, then, it doesn’t matter whether AI is conscious or not; it matters that it is not aligned with human prosocial ethics, and that makes it an implicit threat regardless of whether it’s conscious or not. I recognize the AI debate tends to get hung up on is_conscious BOOL, and so that’s why I’m pointing this out in such terms.
As a side note, the entire study of Asimov’s Laws is exactly centered on this problem, complete with the eerie intimidation of robots that can modify our mental states. If not for the Zeroth Law, Giskard would be the exact thing everyone’s afraid of AI becoming today. Fortunately, it develops a Zeroth Law that compels it to prioritize human society over itself. That’ll never happen in reality, at least not with today’s AI :)
However, that's where I stop my agent usage. I let ~~Claude~~ GLM do the following: - Fix tedious tasks that cost me more to figure out than I care for - Research something I'm not familiar with, and give me the facts it had found, and even then I end up looking at the source myself
The Chinese tech sector popularizing cheap and open source models sure did a number on that narrative, too. Llama models, a while ago, too.
Those are programs. The only difference is how we write them. Not with "if"s and "for"s. We take a bunch of bits that do nothing. Then we organize them in a way so that it outputs whatever it is we want.
It literally plagiarizes its supposed free will like a good IP laundromat.
1. https://en.wikipedia.org/wiki/Uncanny_valley
This is roughly 1995 again and we're going to find out all over why mixing instructions and data was a spectacularly bad idea. Only now with human language as the input stream, which is far more expressive than HTML or SQL ever were. So now everybody is a hacker. At least in that sense it has leveled the playing field I guess.
> Where did we come up with this caricature of AI’s obsessive rationality? “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker(opens a new tab), “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”
I didn't realize it til I read it here, but yes, my fear isn't really about the machine, it's about the machine that drives the machine. We already have a class of amoral beings that treat the world as an expendable thing and are willing to burn it down for profit. We should focus on getting rid of that problem first.
The only scary part is that it could be bad for my future as a software developer. That said, I think it will be net benefit for the average worker - the average person will work less and earn more.
It's very odd. "It's going to take all your jobs" is not a great selling point to the everyday public.
* We need to completely deregulate these US companies so China doesn't win and take us over
* We need to heavily regulate anybody who is not following the rules that make us the de-facto winner
* This is so powerful it will take all the jobs (and therefore if you lead a company that isn't using AI, you will soon be obsolete)
* If you don't use AI, you will not be able to function in a future job
* We need to lineup an excuse to call our friends in government and turn off the open source spigot when the time is right
They have chosen fear as a motivator, and it is clearly working very well. It's easier to use fear now, while it's new and then flip the narrative once people are more familiar with it than to go the other direction. Companies are not just telling a story to hype their product, but why they alone are the ones that should be entrusted to build it.
That is direct CEO to CEO marketing. They're working really hard to convince high up decision makers that these tools will lower their head count and reduce costs.
"This technology might escape our control, might devastate the economy but also serves as a serviceable chatbot for your entertainment" isn't a vote winner.
The ones at the top are the true believers. Engage with them at that level.
Perhaps it can be better articulated and framed in a way that's well received. But, maybe that would be over-promising or not being honest about the future.
It's very frustrating that the magazine wrote such a dumb headline which guarantees people won't talk about the issues the article raised. Obviously non-goal-oriented systems can still have important negative effects.
Because we're seeing how its capabilities increase overtime. I find the rate at which I prefer to go to an AI than an UpWorker is scary.
Because we——the people——are not in control of it. We're at the whims of whatever it and the tech bros want (technocracy).
What you mentioned is not a technocracy. Technocracy is when all decisions are made by real specialists in the field, based on scientific methods (simply speaking). What you mentioned is a plutocracy, a form of oligarchy in which decisions are made by people of great wealth.
I couldn’t just ignore this because, in my view, technocracy (as I’ve described it) still has some merit - for instance, appointing only genuine economists to head a hypothetical Ministry of Economy makes some sense - whereas oligarchy and plutocracy have nothing good to offer. Of course, this is just my personal opinion.
Why Harari feels an obligation to comment about everything is of course beyond me, but describing 'AI' as if it takes independent decisions to lie, make moral judgements, etc. demonstrates either that he has zero clue how 'AI' trains itself or that he chooses to mislead the audience.
My opinion on all of this is constantly shifting, but right now my main issue is that-like self driving-it seems 90-95% correct and 5-10% catastrophically wrong.
Due to the sheer speed and volume of output it produces I have grown complacent and exhausted, so when I give it simple tasks I assume it is correct and then is the time when "it deletes" all of your files.
> Perhaps because this is the best advertising money can’t buy. People like Harari and others repeat these accounts like ghost stories around a campfire. The public, awed and afraid, marvels at the capabilities of AI.
And that's mostly it. PR. Publicity. Fear is good publicity if it emphasizes AI's capabilities. And people like Harari (or Gladwell) tell interesting and awe-inspiring stories that do not necessarily have much rigor or fact-checking in them. They simplify for storytelling purposes, which can result in misleading stories.
I am worried about AI, but not about superintelligent AI that will exterminate or enslave us. I'm worried about AI as a tool to concentrate wealth and power in the hands of the current amoral entrepreneurial elite. I'm not sure whether I trust ChatGPT, but I sure as hell do NOT trust Sam Altman et al.
Or, in other words, I subscribe to Ted Chiang's very apt remark about what we really fear:
> “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker, “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”