Actually it shouldn't be too hard, just a cardboard cutout with a pullstring which, when pulled, intones "we're really sorry about this and it will never happen again, I promise".
I work at one of the big 3 hyperscalers, and of all the things my company is trying to use AI to replace, meetings, calendar management, and emails are NOT among them.
There have been too many high level conversations that can be summarized as:
"If I send you an email, and an AI responds, I do not want to work with you."
Communication at work requires a reasonable boundary about what it means to have a professional relationship with another human being. You choose to work with a person because you trust their judgment, their word, their ability to commit to something in conversation and follow through. An AI clone can't commit to anything. It can't be held to what it said. It can't own a decision.
When you email someone, you are asking to talk to that person. When you sit in a meeting with someone, you are expecting that person's attention and judgment. Especially with business to business, the relationship is the product. There is no relationship with AI.
Respect and social contracts aside, right now AI does not have the general intelligence to perform the executive functions needed to drive productivity and connect the dots: soft skills, stakeholder alignment, etc.
You can thank Marc Andreessen whose world view is apparently limited solely to "Snowden is a traitor". Thank god someone like him has fuck you money and can shape the world to his desires
(And I've heard he stole Mosaic code, which I don't know if it's true but would be consistent)
It is odd when people try to put Notch on the level of someone like Carmack. Like because the guy made a billion dollars that means his opinion should be highly valued in perpetuity. He just seems like a fairly average game dev that lucked his way into making Lego 2.
The game being successful wasn't luck, but it being as successful as it was definitely was. Block-based games had existed for years before Minecraft; I don't think there was any reason to believe that this one in particular was going to explode in 2010.
I don’t know much about Notch so I have no comment on them specifically, but “building the company and brand for years” can still count as “luck once”. Once you strike gold and make it big, you have a lot of leeway for mistakes, it becomes harder to fail. Case in point, Zuckerberg.
Instagram and WhatsApp were far from bets. Both were growing exponentially. Both were serious threats to Facebook's money machine.
Facebook used their lucky cash cow money to eliminate the threat, by offering way more than any reasonable market value. Absolutely crazy money. An offer they couldn't resist, especially the investors.
No bet, just an ice cold rational calculation. Anyone wealthy and crazy enough could have done the same. No skills needed, no risks involved. It's like betting tomorrow will be another day.
They don’t need to be “sure things”, only threats. Once you buy the threat, whether it continues to thrive or withers is largely immaterial, the threat is gone.
It does take tremendous talent and knowledge to build what he built, saying anything else is not just dishonest but weirdly cynical if not hateful. I think you grossly underestimate how difficult it is to build anything good. Not an easy thing to follow up all the expectations set after a massive global hit like that, a classic dilemma of a successful artist.
There's a level of talent necessary to be a one-hit-wonder, yes. But I think what makes someone a one-hit-wonder and not a career star is that their success was driven by luck more than by talent. In this case "luck" is being in the right place at the right time: that nexus where cultural and technological timing is perfect for a phenomenon to appear and take hold.
It really doesn't. We teach kids how to program basic games like Minecraft. The real appeal was and always has been the fully destructible Lego-like world bundled with good-enough online multiplayer. I was there when it started, and he posted his questions/progress on the forums I frequented back then. Those posts were the starting ramp and testing ground that got him his first few users, which then snowballed.
There is neither knowledge nor talent here, he stumbled into it, and that's fine.
Considering he reportedly paid top FB people "billions", you would hope they would be doing more than building a Mini Me. Is this the modern Stalin statue?
Hilarious that at one end of the AI world Anthropic is doing exit interviews with their models, while at the other end Zuck is trying to create a digital twin and trap it in an eternal work prison, Severance-style. Two groups starting with the thought that today's models are getting more person-like, and going in completely opposite directions with it.
I wonder how it works in his mind's eye. Does it make decisions? Does it dispense little Zuckian wisdom proverbs? Does it try to become your friend?
I can understand the appeal; being able to be "present" without the time cost can mean (possibly significantly more) presence at the same cost. This could be very attractive especially to those managing personal relations, like sales representatives.
But I'm surprised that the risks seem to be so underestimated.
Once this clone exists, what happens if it gets out into the wild? Imagine everyone having full access to what is effectively a digital model of your personality. Imagine your competition putting your own model to use against you.
And the better the approximation of this model, the worse the damage to yourself.
Reading it I mean. The commenter putting into words why exactly someone would think that this would be a good idea.
Of course, you're 110% right that it isn't, but it's still nice that HN provides some subtitles for those that are out of the loop and out of substances in their bloodstream.
Very ironic for the billionaire to be openly replacing himself with AI. I suppose he believes his job is easy enough that an LLM can do it, so we definitely don't need him.
Yes, exactly. Anyone training a model to replace themselves is replacing themselves -- with something that can run 24/7 and can easily scale. And the better the model, the easier the replacement.
Hence why I'm so surprised that MZ, of all people, is arguing in this direction.
I would think that the potential for malicious abuse alone should have scared him off of this.
> Imagine your competition putting your own model to use against you.
I imagine that this is part of the original plan. “Okay, we wasted 80 billion dollars on VR, and that hurts. But if we can somehow convince all of our competitors to also waste 80 billion dollars each, then it’ll even out. How can we trick our competitors into thinking more like Zuckerberg?”
But you still get a lot of "shareholder responsibility" comments. Imagine a company that dumps sewage into a river (be that literal or metaphorical). Internet people come around to tell you this is the nature of capitalism: shareholder structure means (increasing?) return on investment is critical, and so CEOs have to spend all their waking hours juggling this.
Am I arguing against this? I don't know - I'm not an economist. But I would like to point out there is such a thing as shareholder fraud, and the Venn diagram between "sacrifice quality to please shareholders" and "deceiving shareholders" has to be one big intersecting circle, you know? Especially when the guy (Zuckerberg with dual-class shares) can't ever be fired.
These people are certifiable and have too much money to misallocate on nonsense. This is like Gavin Belson's holographic avatar (which of course did not work).
Seeing Zuck's "swag" makeover, down to the gold chain and Justin Timberlake curly coiff, I'd say the analogy should be Russ Hanneman's 100-foot Coachella hologram.
There was an old Soviet cartoon about a child who found a box containing two magical servants and immediately asked them for ice cream and sweets. Well, since the servants "do everything for you", the first servant fetched the sweets for him, and the second one ate them for him. I've often thought about this cartoon since the AI thing started.
Back in the 1980s, some Japanese companies had rooms in which you could whack at an effigy of the boss with a shinai. Just to let off a bit of steam. Will Meta's workers be able to do something like this with Zuckerberg's AI clone?
The FT piece says "They added that the character was being trained on the billionaire’s mannerisms, tone and publicly available statements, as well as his own recent thinking on company strategies, so that employees might feel more connected to the founder through interactions with it."
Surely the more likely outcome is that employees feel less connected to "the founder" because they know that there's a high chance they are simply talking to an AI clone?
Yes. People don’t always frame it as “ooh, if only I could meet Mark Zuckerberg”, but most people IME are at least a little wistful about the kind of company where you’re on friendly terms with your CEO.
Is this a meaningful replacement for that? Probably not, but I’m not prepared to rule it out. Give 1 in 1000 Claudes a Zuckerberg persona and you’d get some chuckles out of it I bet.
What happens when Zuck is EOL? Does he transfer his Meta shares to a trust owned by the AI clone? Does that mean that we will have to deal with Zuck for literally forever??
> Meta CEO Mark Zuckerberg could soon have an AI clone of himself to interact with and provide feedback to employees, according to a report from the Financial Times.
If you're the type of person who checks the comments on a post with this kind of headline, then you probably also want to (re-)watch the 2 minute highlight reel of Mark's backyard meat-smoking party. https://www.youtube.com/watch?v=eBxTEoseZak
For artificial intelligence to replace oneself, it would need a digital copy of one's way of thinking. I believe this is impossible to implement with current AI.
Zuckerberg has unique power among CEOs in public companies. He controls the board and he owns a majority of voting shares.
Sure, they can theoretically sue him for some kind of gross mismanagement of the company or disloyalty, but why would the owner class do that? Investors are all in on AI replacing human workers. If they said Zuckerberg doing this was wrong, they would be implying that AI should not work in place of humans.
> they can theoretically sue him for some kind of gross mismanagement of the company or disloyalty
They can really only sue for breach of fiduciary duty. Zuckerberg controls the majority, but there are still limits on abusing the minority. I’m not sure making an AI clone falls afoul of any rules.
The role of a CEO is basically to make tough decisions, not really to be some sort of friendly face in meetings.
If the AI clone is not empowered to make decisions on Zuck's behalf, what's the point? If it is empowered to make decisions on his behalf, who is accountable for bad decisions it makes?
“Besides”? Being rich isn’t a redeeming quality, I’d say. It’s not even a quality in general.
The shape of his head might be interesting to study, though. Not in a phrenology sense, just anthropological, I’d be curious to see his skull.
https://web.archive.org/web/20191221082346/http://ludumdare....
https://web.archive.org/web/20210722173354/https://www.youtu...
https://www.youtube.com/playlist?list=PLgAujBKarXXoMxJDyi1Am...
He’s had multiple hits as a savvy acquirer.
Edit: RE Anthropic haha: https://news.ycombinator.com/item?id=47750086
It might be the elimination of whole tiers of management. It will be AIs all the way down.
This is magical thinking. "Presence" and "time cost" are inextricably linked. You can't have one without the other.
When you use AI to decouple them, you're telling your audience/colleagues/attendees that you want them to listen to you, but not the other way around.
But it was helpful to me!
Also... is that a thing most people want?
https://www.ft.com/content/02107c23-6c7a-4c19-b8e2-b45f4bb9c...
https://archive.is/mtVXJ
That way AI-AI can chat and save humans’ time.