The point they seem to be making is that AI can "orchestrate" the real world even if it can't interact physically. I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.
However, even by that metric I don't see how Claude is doing that. Seth is the one researching the suppliers "with the help of" Claude. Seth is presumably the one deciding when to prompt Claude for decisions about whether to plant in Iowa and in how many days. I think I could also grow corn if someone came and asked me well-defined questions and then acted on what I said. I might even be better at it because, unlike a Claude output, I will still be conscious in 30 seconds.
That is a far cry from sitting down at a command line and saying "Do everything necessary to grow 500 bushels of corn by October".
These experiments always seem to end up requiring the hand-holding of a human at the top, seemingly undermining the idea behind the experiment in the first place. It seems better to spend the time and energy finding better ways for AI to work hand-in-hand with the user, empowering them, rather than trying to find the areas where we could replace humans with as little quality degradation as possible. That whole part feels like a race to the bottom, instead of making it easier for the ones involved to do what they do.
>rather than trying to find the areas where we could replace humans with as little quality degradation as possible
The particular problem here is that it is very likely the easiest people to replace with AI are the ones making the most money and doing the least work. Needless to say, those people are going to fight a lot harder to remain employed than the average lower-level person has the political capital to manage.
>seems to end up requiring the hand-holding of a human at top,
I was born on a farm and know quite a bit about the process, but if I were trying to get corn grown from seed to harvest, I would still contact/contract a set of skilled individuals to do it for me.
One thing I've come to realize in the race to achieve AGI is that the humans involved don't want AGI, they want ASI. A single model that can do what an expert can, in every field, in a short period of time is not what I would consider a general intelligence at all.
> I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.
I think this is the new Turing test. Once it's been passed we will have AGI and all the Sam Altmans of the world will be proven correct. (This isn't a perfect test obviously, but neither was the Turing test.)
If it fails to pass we will still have what jdthedisciple pointed out
> a non-farmer, is doing professional farmer's work all on his own without prior experience
I am actually curious how many people really believe AGI will happen. There's a lot of talk about it, but when can I ask Claude Code to build me a browser from scratch and get a browser from scratch? Or when can I ask Claude Code to grow corn and Claude Code grows corn? Never? In 2027? In 2035? In the year 3000?
HN seems rife with strong opinions on this, but does anybody really know?
I think it'll happen once we get off LLMs and find something that more closely maps to how humans think, which is still not known afaik. So either never, or once the brain is figured out.
Researchers love to reduce everything into formulae, and believe that when they have the right set of formulae, they can simulate something as-is.
Hint: It doesn't work that way.
Another hint: I'm a researcher.
Yes, we have found a great way to compress and remix the information we scrape from the internet, and even with some randomness it looks like we can emit the right set of tokens that makes sense, or search the internet the right way and emit those search results, but AGI is more than that.
There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs and our own internal noise. AI models don't work on those. LLMs consume language and emit language. The information embedded in these languages is available to them, but most of the tacit knowledge is just an empty shell of the thing we try to define with a limited set of words.
It's the same with anything where we're trying to replace humans in the real world, in daily tasks (self-driving, compliance checks, analysis, etc.).
AI is missing the magic grains we can't put out as words or numbers or anything else. The magic smoke, if you pardon the term. This is why no amount of documentation can replace a knowledgeable human.
...or this is why McLaren Technology Center's aim of "being successful without depending on any specific human by documenting everything everyone knows" is an impossible goal.
Because like it or not, intuition is real, and AI lacks it, regardless of how we derive or build that intuition.
> There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs and from our own internal noise.
The premise of the article is stupid, though...yes, they aren't us.
A human might grow corn, or decide it should be grown. But the AI doesn't need corn, it won't grow corn, and it doesn't need any of the other things.
This is why they are not useful to us.
Put it in science fiction terms. You can create a monster, and it can have super powers, _but that does not make it useful to us_. The extremely hungry monster will eat everything it sees, but it won't make anyone's life better.
Using the example from the article, I guess restaurant managers need handholding by the chefs and servers, seemingly breaking down the idea behind restaurants, yet restaurants still exist.
The point, I think, is that even if LLMs can't directly perform physical operations, they can still make decisions about what operations are to be performed, and through that achieve a result.
And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.
> And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.
Yes, what I'm trying to get at, it's much more vital we nail down the "person prompting and interpreting the LLM" part instead of focusing so much on the "autonomous robots doing everything".
I feel you're still missing the point of the experiment... The entire thing was based on how empowering Claude felt -- "I felt like I could do anything with software from my terminal"... It's not at all about autonomous robots... It's about what someone can achieve with the assistance of LLMs, in this case Claude
Right. This whole process still appears to have a human as the ultimate outer loop.
Still an interesting experiment to see how much of the tasks involved can be handled by an agent.
But unless they've made a commitment not to prompt the agent again until the corn is grown, it's really a human doing it with agentic help, not Claude working autonomously.
> But unless they've made a commitment not to prompt the agent again
Model UIs like Gemini have "scheduled actions", so in the initial prompt you could have it do things daily and send updates or reports, etc., and it will start the conversation with you. I don't think it's powerful enough to, say, spawn sub-agents, but there is some ability for them to "start chats".
Why wouldn't they be able to eventually set it up to work autonomously? A simple GitHub Action could run a check every $t hours to check on the status, and an orchestrator is only really needed once, initially, to set up the if>then decision tree.
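To make that concrete, here's a minimal sketch of what such a scheduled check might look like (everything here is hypothetical: the status source, the thresholds, and the action strings; a real version would presumably hand the decision step to the model rather than hard-code the tree):

```python
# Hypothetical polling check an orchestrator could schedule (e.g. via a
# GitHub Actions cron job). The status source and thresholds are made up.

def check_status():
    # Placeholder: a real version might read a field-sensor feed or a
    # weather API here.
    return {"soil_moisture": 0.18, "growth_stage": "V4"}

def decide(status):
    # The if>then decision tree, set up once by the orchestrator.
    if status["soil_moisture"] < 0.20:
        return "email the custom operator: schedule irrigation"
    if status["growth_stage"] == "R6":
        return "email the custom operator: schedule harvest"
    return "no action needed"

def run_once():
    return decide(check_status())
```

The loop itself is trivial; the open question upthread is who writes and maintains `decide` when conditions fall outside the tree.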
The question is whether the system can be responsible for the process. Big picture, AI doing 90% of the task isn't much better than it doing 50%, because a person still needs to take responsibility for it actually getting done.
If Claude only works when the task is perfectly planned and there are no exceptions, that's still operating at the "junior" level, where it's not reliable or composable.
That still doesn't seem autonomous in any real way though.
There are people that I could hire in the real world, give $10k (I dunno if that's enough, but you understand what I mean) and say "Do everything necessary to grow 500 bushels of corn by October", and I would have corn in October. There are no AI agents where that's even close to true. When will that be possible?
Nobody is denying that this is AI-enabled but that's entirely different from "AI can grow corn".
Also, Seth, a non-farmer, was already capable of using Google, online forums, and Sci-Hub/Libgen to access farming-related literature before LLMs came on the scene. In this case the LLM is just acting as a super-charged search engine. A great and useful technology, sure. But we're not utilizing any entirely novel capabilities here.
And tbh until we take a good crack at World Models I doubt we can
I think the point is that a lot of professional work is not about entirely novel capabilities either; most professionals get the major revenue from bread-and-butter cases that apply already-known solutions to custom problems. For instance, a surgeon taking out an appendix is not taking a novel approach to the problem every time.
1) You are right, and it's impressive if he can use AI to bootstrap becoming a farmer
2) Regardless, I think it proves a vastly understated feature of AI: It makes people confident.
The AI may be truly informative, or it may hallucinate, or it may simply give mundane, basic advice. Probably all 3 at times. But the fact that it's there ready to assert things without hesitation gives people so much more confidence to act.
You even see it with basic emails. Myself included. I'm just writing a simple email at work. But I can feed it into AI and make some minor edits to make it feel like my own words, and I can just dispense with worries like "am I giving too much info, not enough, using the right tone, being unnecessarily short or overly grating, etc." And it's not that the LLMs are necessarily even an authority on these factors - it simply bypasses the process (writing) which triggers these thoughts.
I started to write a logical rebuttal, but forget it. This is just so dumb. A guy is paying farmers to farm for him, and using a chatbot to Google everything he doesn't know about farming along the way. You're all brainwashed.
What specifically are you disagreeing with? I don't think it's trivial for someone with no farming experience to successfully farm something within a year.
>A guy is paying farmers to farm for him
Read up on farming. The labor is not the complicated part. Managing resources, including telling the labor what to do, when, and how is the complicated part. There is a lot of decision making to manage uncertainty which will make or break you.
My family raises hundreds of thousands of chickens a year. They feed, water, and manage the healthcare and building maintenance for the birds. That is it. Baby birds show up in boxes at the start of a season, and trucks show up and take the grown birds once they reach weight.
There is a large faceless company that sends out contracts for a particular value and farmers can decide to take or leave it. There is zero need for human contact on the management side of the process.
At the end of the day there is little difference between a company assigning the work and having a bank account versus an AI following all the correct steps.
That's not the point of the original commenter. The point of the original commenter is that he expects Claude can inform him well enough to be a farm manager, and it's not impressive since Seth is the primary agent.
I think it is impressive if it works. Like I mentioned in a sibling comment I think it already definitely proves something LLMs have accomplished though, and that is giving people tremendous confidence to try things.
It only works if you tell Claude..."grow me some fucking corn profitably and have it ready in 9 months" and it does it.
If it's being used as a manager to simply flesh out the daily commands that someone is telling it, well then that isn't "working", that's just a new level of what we already have with APIs and crap.
So if I grow biomass for fuel or feedstock for plastics that's not farming? I'm sure there are a number of people that would argue with you on that.
I'm from the part of the country where there are large chunks of land dedicated to experimental grain growing, which is research, and other than labels at the end of crop rows you'd have a difficult time telling it from any other farm.
I think that's the point though. If they succeed in the experiment, they won't need to give the same instructions again; the AI will handle everything based on what happened and probably learn from mistakes for the next round(s).
Then what you asked “do everything to grow …” would be a matter of “when?”, not “can?”
Isn't this boiled down to a combination of Zeno's paradox and the halting problem? Every step seems to halve the problem state, but each new state requires a question: should I halt? (Is the problem solved?)
I'd say the only acceptable proof is one prompt context. But that's Gödel-numbering Zeno's paradox of a halting problem.
Do people think prompting is not adding significant intelligence?
Polk County, Iowa is where Des Moines is - the largest city in Iowa. (I live the next county over, but I bike to Polk County all the time.) This is not a good location to run this because the farmland is owned by farmer/investors or farmer/developers - either way, everybody knows the farm will become a suburb in the next 20 years and it has been priced accordingly (and if the timeline is less than 5 years they have switched to mining mode - strip out the last fertility before the development destroys the land anyway). Which is to say you can get much better land deals elsewhere (and by making your search wider) - sometimes the price might be higher, but that is because the land/soil is better.
Overall I don't think this is useful. They might or might not get good results. However it is really hard to beat the farmer/laborer who lives close to the farm and thus sees things happen and can react quickly. There is also great value in knowing your land, though they should get records of what has happened in the past (this is all in a computer, but you won't always get access to it when you buy/lease land). Farmers are already using computers to guide decisions.
My prediction: they lose money. Not because the AI does stupid things (though that might happen), but because last year harvests were really good and so supply and demand means many farms will lose money no matter what you do. But if the weather is just right he could make a lot of money when other farmers have a really bad harvest (that is he has a large harvest but everyone else has a terrible harvest).
Iowa has strong farm ownership laws. There is real risk he will get shut down somehow because what he is doing is somehow illegal. I'm not sure what the laws are; check with a real lawyer. (This is why Bill Gates doesn't own Iowa farmland - he legally can't do what he wants with Iowa farmland.)
If you spend time on the website you can see the plan is to rent (only!) 5 acres of land for this project. Since it's a lease only and such a small plot it seems unlikely to get him into trouble. Given the small size though I'm dubious he'll find it easy to get any custom operators interested in doing a job that small!
You can find such custom operators - but those are not deals made over the internet; they are made in person with a handshake. Generally the cost to get all the equipment there is - in a good year - all of your possible profit for something that small. Tractors are slow on the road. Once the tractor is there, the implement needs to unfold (best case - worst case, your combine header is pulled in via a separate truck and needs to be attached). You need to clean the machine out after every field and put new seed in... It isn't worth planting 5 acres of corn. You need volume - and in turn a lot of land - to make corn work.
Agreed. I grew up on a small farm (~1120 acres), and our garden alone was probably at least 5 acres in size. It's laughably small; the only way he'll succeed is for a neighbouring farmer to take pity on him.
It reminds me of when I worked at an ag tech startup for a few years. We visited farms up and down the central valley of California, and the general tone toward Silicon Valley is an intense dislike of overconfident 20-somethings with a prototype who think they're going to revolutionize agriculture in some way, but are far, far away from having enough context to see the constraints they're operating under and the tradeoffs being made.
Replacing the farm manager with an AI multiplies that problem by a hundred. A thousand? A million? A lot. AI may get some sensor data but it's not going to stick its hand in the dirt and say "this feels too dry". It won't hear the weird pinging noise that the tractor's been making and describe it to the mechanic. It may try to hire underlings but, how will it know which employees are working hard and which ones are stealing from it? (Compare Anthropic's experiments with having AI run a little retail store, and get tricked into selling tungsten cubes at a steep discount.)
I got excited when I opened the website and at first had the impression that they'd actually gotten AI to grow something. Instead it's built a website and sent some emails. Not worth our attention, yet.
Bill Gates is one of the largest farmland owners in the world (or at least was - I last checked about 10 years ago). He hires people to work on his farm, and managers to manage it. Food is the most important thing for modern society, and the reports I've seen suggest he is trying to raise food in the most sustainable fashion possible (organic is often not sustainable).
That's all rich people do. The premise of capitalism is that the people best at collecting rent should also be in total control of resource allocation.
I'm not a huge fan of these experiments that subject the public to your random AI spam. So far it's bothered 10 companies directly with no legal authority to actually follow up with what is requested?
I think they're saying that businesses getting unsolicited offers from the LLM is similar to regular people getting unsolicited offers from businesses.
You can actually act on the advertisements and coupons, though. And the companies who sent those offers to you are obligated to abide by them. This potentially would be like if you got a BOGO coupon in the mail and when you tried to redeem it, they just pretended like it didn't exist.
>So far it's bothered 10 companies directly with no legal authority to actually follow up with what is requested?
Aren't these companies in the business of leasing land? I don't see how contacting them about leasing land would be spam or bothering them. And I don't really know what you mean by "with no legal authority to actually follow up with what is requested."
I mean, it's probably worse to pretend to be an actual customer rather than sending some random message. The AI's obviously never going to actually lease any land, so all it's doing is convincingly wasting their time. At least landlords are often quite unsympathetic, so it's probably fine to waste their time a bit.
As a full-on farmer, the idea of Claude making the decisions on our farm of several thousand acres gives me the willies. I program with Claude and I don't trust it to write a test script without vetting it thoroughly and fixing a couple things before running it.
Betting millions of dollars in capital on its decision-making process, for something it wasn't even designed for and that is way more complicated than even I believed coming from a software background into farming, is patently ludicrous.
And 5 acres is a garden. I doubt he'll even find a plot to rent at that size, especially this close to seeding in that area.
"there's a gap between digital and physical that AI can't cross"
Can intelligence of ANY kind, artificial or natural, grow corn? Do physical things?
Your brain is trapped in its skull. How does it do anything physical?
With nerves, of course. Connected to muscle. It's sending and receiving signals, that's all its doing! The brain isn't actually doing anything!
The history of humanity's last 300k years tells you that intelligence makes a difference, even though it isn't doing anything but receiving and sending signals.
I can't tell which side you're arguing here. But if the AI was strapped onto a roomba that rolled around and planted, watered and harvested the corn, I would count that.
It's extremely funny to me but this is basically the literal premise of season two of Person of Interest. Yeah d'uh it's just a computer how would it actually do anything? Well it just goes ahead and tells people to do stuff and wires them money. Easy.
Though a computer could also just control robots that actually plant, weed, water, and harvest the corn. That's a pretty big difference from just 'coordinating' the work.
An AI that can also plant corn itself (via robots it controls) is much more impressive to me than an AI just send emails.
I can't be the only person seriously questioning the "Budget" page the AI created?[1]
The estimate seems to leave out a lot of factors, including irrigation, machinery, the literal seeds, and more. $800 for a "custom operator" for 7 months - I don't believe it. Leasing 5 acres of farmable land (for presumably a year) for less than $1400... I don't believe it.
The humans behind this experiment are going to get very tired of reading "Oh, you're right..." over and over - and likely end up deeply underwater.
It's cute but it seems like it's mostly going to come down to hiring a person to grow corn. Pretty cool that an AI can (sort of) do that autonomously but it's not quite the spirit of the challenge.
Right. If this level of indirection is allowed, the most efficient way to "grow corn" by the light of the original post would simply be to buy and hold Farmland Partners Inc (NYSE: FPI).
I'd like to see Fred follow right along and allocate the same amount of funds for deployment starting at the same time as each of Seth's expenditures or solid commitments.
The timing might need to be different but it would be good to see what the same amounts invested would yield from corn on the commodity market as well as from securities in farming partnerships.
Would it be fair if AI was used to play these markets too, or in parallel?
It would be interesting to see how different "varieties" of corn perform under the same calendar season.
Corn, nothing but corn as the actual standard of value :)
You don't get much any way you look at it for your $12.99 but it's a start.
Making a batch of popcorn now, I can already smell the demand on the rise :)
Yeah, this feels right on the cusp of being interesting. I think that, being charitable, it could be interesting if it turns out to be successful in hiring and coordinating several people and physical assets over a long time horizon. For example, it'd be pretty cool if it could:
1. Do some research (as it's already done)
2. Rent the land and hire someone to grow the corn
3. Hire someone to harvest it, transport it, and store it
4. Manage to sell it
Doing #1 isn't terribly exciting - it's well established that AIs are pretty good at replacing an hour of googling - but if it could run a whole business process like this, that'd be neat.
Is that actually growing corn with AI though? Seems to me that a human planted the corn, thinned it, weeded it, harvested it, and stored it. What did AI do in that process? Send an email?
It is trying to take over the job of the farmer. Planting, harvesting, etc. is the job of a farmhand (or custom operator). Everyone is working to try to automate the farmhand out of a job, but the novelty here is the thinking that it is actually the farmer who is easiest to automate away.
But,
"I will buy fucking land with an API via my terminal"
Who has multiple millions of dollars to drop on an experiment like that?
> [Seth is using AI to try] to take over the job of the farmer. Planting, harvesting, etc. is the job of a farmhand (or custom operator).
Ok then Seth is missing the point of the challenge: Take over the role of the farmhand.
> Everyone is working to try to automate the farmhand out of a job, but the novelty here is the thinking that it is actually the farmer who is easiest to automate away.
Everyone knows this. There is nothing novel here. Desk jockeys who just drive computers all day (the Farmer in this example) are _far_ easier to automate away than the hands-on workers (the farmhand). That’s why it would be truly revolutionary to replace the farmhand.
Or, said another way: Anything about growing corn that is “hands on” is hard to automate, all the easy to automate stuff has already been done. And no, driving a mouse or a web browser doesn’t count as “hands on”.
> all the easy to automate stuff has already been done.
To be fair, all the stuff that hasn't been automated away is the same in all cases, farmer and farmhand alike: Monitoring to make sure the computer systems don't screw up.
The bet here is that LLMs are past the "needs monitoring" stage and can buy a multi-million dollar farm, along with everything else, without oversight, and Seth won't be upset about its choices in the end. Which, in fairness, is a more practical (at least less risky from a liability point of view) bet than betting that a multi-million dollar X9 without an operator won't end up running over a person and later upside-down in a ditch.
He may have many millions to spend on an experiment, but to truly put things to the test would require way more than that. Everyone has a limit. An MVP is a reasonable start. v2 can try to take the concept further.
There is more than that. He needs to decide which corn seed to plant (he is behind here - seed companies run sales if you order in October for delivery in mid-March). He needs to decide what fertilizer to apply, and when. He needs to monitor the crop - he might or might not need to buy and apply a fungicide. He needs to decide when to harvest - too early and he pays a lot of money to dry the corn (and likely pays someone he hired who has nothing to do), but too late and a storm can blow the corn off the cob... Those are just a few of the things a farmer needs to figure out that the AI would need to do (but will it?)
There are plenty of CCAs out there that will happily do all those things for you. If hiring someone to come work the field is fair game, surely that is too?
Also what's the delta b/w Claude Code doing it and you doing it?
I would have to look up farm services. Look up farmhand hiring services. Write a couple emails. Make a few payments. Collect my corn after the growing season. That's not an insurmountable amount of effort. And if we don't care about optimizing cost, it's very easy.
Also, how will Claude monitor the corn growing? I'm curious. It can't receive and respond to emails autonomously, so you still have to be in the loop.
Tell that to all the car accidents caused by people distracted by Siri, the people who've done horrible things because of AI-induced psychosis, or the lives ruined by AI stock-trading algorithms.
Why do I need to help? Is this an experiment to see if it can do it on its own, or just another "project" where they give AI credit for human's work for marketing purposes?
(And if you read the linked post, … like this value function is established on a whim, with far less thought than some of the value-functions-run-amok in scifi…)
This isn't really an impressive test; growing corn is an extremely well-documented solved problem, the sort of thing that we already know LLMs excel at. An LLM that couldn't reliably tell you what to do at each step of the corn-farming process would be a very poor LLM.
This seems like something along the lines of "We know we can use Excel to calculate profit/loss for a Mexican restaurant, but will it work for a Tibetan-Indonesian fusion restaurant? Nobody's ever done that before!"
I'll be following along, and I'm curious what kind of harness you'll put on TOP of Claude Code to avoid it stalling out on "We have planted 16/20 fields so far, and irrigated 9/16. Would you like me to continue?"
I'd also like to know what your own "constitution" is regarding human oversight and intervention. Presumably you wouldn't want your investment to go down the drain if Claude gets stuck in a loop, or succumbs to a prompt-injection attack to pay a contractor 100% of its funds, or decides to water the fields with Brawndo.
How much are you allowing yourself to step in, and how will you document those interventions?
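For what it's worth, a bare-bones version of such a harness is easy to sketch (all names here are hypothetical; `ask_agent` is a stub standing in for whatever API call actually drives the agent): detect the stall phrasing, auto-answer "continue", and keep a log that doubles as a record of any interventions.

```python
# Hypothetical auto-continue harness; `ask_agent` is a stub standing in
# for a real model/API call.

STALL_MARKERS = ("would you like me to continue", "shall i proceed")

def ask_agent(prompt):
    # Stub reply illustrating the stall behavior described above.
    return "We have planted 16/20 fields so far. Would you like me to continue?"

def run_with_autocontinue(goal, max_turns=5):
    log = []
    prompt = goal
    for turn in range(max_turns):
        reply = ask_agent(prompt)
        log.append({"turn": turn, "prompt": prompt, "reply": reply})
        if any(m in reply.lower() for m in STALL_MARKERS):
            # Auto-answer instead of stalling out; any human intervention
            # would be appended to this same log for documentation.
            prompt = "Yes, continue."
            continue
        break
    return log
```

Of course, this only papers over stalls; it does nothing for the prompt-injection and stuck-in-a-loop failure modes, which is why the intervention log matters.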
I'd find it interesting if the AI could produce real corn on another planet. If this works, then I think once AI can extract resources from another planet's soil, it's going to be a great help for humanity.
> AI doesn't need to drive a tractor. It needs to orchestrate the systems and people who do.
I've been rather expecting AI to start acting as a manager with people as its arms in the real world. It reminds me of the Manna short story[1], where it acts as a people manager with perfect intelligence at all times, interconnected not only with every system but also with other instances in other companies (e.g. for competitive wage data to minimize opex / pay).
Yeah I came here to post this. This is the other thing we're going to see. And it doesn't have to be perfect to orchestrate people, it just has to be mediocre or better and it will be better than 50% of humans.
Contra-pessimism: my parents run a small organic farm on the east coast (greenhouses, not row crops), and they extensively use ChatGPT for decision making. They obviously haven't built out agentic data gathering, but can easily prompt it with the required information. They're quite happy with everything.
I'm guessing this will screw up by assuming infinite labor & equipment liquidity.
The people already doing this work today already do exactly that.
There's no goalpost shifting here - it's l'art pour l'art at its finest.
It'd be introducing an agent where no additional agent is required in the first place, i.e. telling a farmer how to do their job, when they already know how to do it and do it in the first place.
No one needs an LLM if you can just lease some land and then tell some person to tend to it, (i.e. doing the actual work). It's baffling to me how out of touch with reality some people are.
Want to grow corn? Take some corn, put it in the ground in your backyard and harvest when it's ready. Been there, done that, not a challenge at all. Want to do it at scale? Lease some land, buy some corn, contract a farmer to till the land, sow the corn, and eventually harvest it. Done. No LLM required. No further knowledge required. Want to know when the best time for each step is? Just look at when other farmers in the area are doing it. Done.
If this is a joke, it's a bad one. If it's not, it's even dumber.
The point could be made by having it design and print implements for an indoor container grow and then run lights and water over a microcontroller. Like Anthropic's vending machine this would also be an already addressed, if not solved, space for both home manufacturing and ag/garden automation.
It'd still be novel to see an LLM figure it out from scratch, step by step, and a hell of a lot more interesting than whatever the fuck this is. Googling farmland in Iowa or Texas and then writing instructions for people to do the actual work isn't novel or interesting; of course an LLM can write and fill out forms. But the end result still primarily relies on people to execute those forms and affect the world, invalidating the point. Growing corn would be interesting; project-managing corn isn't.
Really cool - even cooler if some farming related hardware on a designated plot of land can be setup so it's more than an ai agent finding someone to hire via apis.
I'm waiting for the "Can it do Management?" experiment.
I do not have a positive impression of, or experience with, most middle/low-level management in the corporate world. Over 30 years in the workforce, I've watched it evolve into a "secretary/clerk, usually male, who agrees to be responsible for something they know little about or aren't very good at doing, and pretends at orchestrating".
Like growing corn, lots of literature has been written about it. So models have lots to work with and synthesize. Why not automate the meetings and metric gatherings and mindless hallucinations and short sighted decisions that drone-ish be-like-the-other-manager people do?
HN-type "about the website itself, not its content" comment, but ... it would be great if we could somehow get the major browser vendors to agree on some monospaced fonts. I'm on an M1 Mac, and in the small ASCII diagram nothing lines up (Safari/Firefox/Chrome). I see this on many ASCII diagrams. (Maybe that's the site's fault. Not sure.)
This is lopsided. Technology promised to remove drudgery from our lives, and now we're seeing experiments that automate all the easy, air-conditioned decision making while still delegating the toil to humans.
Given how the front page's ASCII diagram is misaligned on my browser, I think I have a few concerns about factors that might lead to, well, oversights...
Ha! I was going to disagree, but then I realized I force most monospace html entities to use a specific font in my browser (using a wildcard stylus stylesheet). This is quite nice to normalize things (and sidestep atrocious font choices) actually.
See also: Clarkson's Farm [0], for some of the messy reality of running an actual modern farm in England (though edited for entertainment value). I suspect the current AIs are not quite up to doing this - but I firmly believe it's only a matter of time.
I think the most intriguing part of this effort: Farmers traditionally employ machines to achieve their harvest. Unless I'm mistaken, this is the first time that machines are employing humans to achieve their harvest.
I mean, more or less, but you see what I'm getting at.
It depends on the crop.
Corn (Maize): Harvested using combine harvesters that pick, husk, and shell the grain. Sweet Corn might be the exception.
Soybeans: Harvested using combines to cut and thresh the plants.
Wheat, Barley, and Oats: Harvested using combines to cut, thresh, and clean the grain.
Cotton: Harvested mechanically using cotton pickers or strippers.
Rice: Mechanically harvested with combines when the stalks are dry.
Potatoes and Root Vegetables: Lifted from the ground using mechanical harvesters that separate soil from the produce.
Lettuce, Spinach, and Celery: Mostly hand-harvested by crews, though automation is increasing.
Berries (Strawberries, Blueberries): Primarily hand-picked for fresh market quality, though some are machine-harvested for processing.
Tree Fruits (Apples, Cherries): Mostly hand-picked to prevent bruising, though some processing cherries use tree shakers.
Wine Grapes: Frequently harvested by hand to ensure quality, especially for high-end wines.
Peppers and Tomatoes: Processed tomatoes are machine-harvested, while fresh peppers are largely hand-picked.
I can make corn too. I go to the supermarket and hand them these little green pieces of paper, and then I have corn.
Seriously, what does this prove? The AI isn't actually doing anything, it's just online shopping basically. You're just going to end up paying grocery store prices for agricultural quantities of corn.
This actually is a good summary of my theory of AI. The best use case for AI is replacing management. That's the real reason AI is floundering right now at making money: the people in charge would literally need to admit that they are basically no longer needed and act accordingly.
This of course will never happen, so instead those in power will continue trying to shoehorn AI into making slaves, which is what they want but not the ideal usage for AI.
Several things about LLMs make this a hard or complex experiment and maybe too much for the current tech.
1) context: lack of sensors and sensor processing, maybe solvable with web cams in the field but manual labor required for soil testing etc
2) Time bias: orchestration still has a massive recency bias in LLMs and a huge underweighting of established ground truth, causing it to weave and pivot on recent actions in a wobbly, overcorrecting style.
3) Vagueness: by and large, most models still rely on non-committal vagueness to hide a lack of detailed or granular expertise. Granular expertise tends to hallucinate more, or just miss context and get things wrong.
I’m curious how they plan to overcome this. It’s the right type of experiment, but I think too ambitious of a scale.
AI middle managers are coming. The highest-level corporate authority can and will continue to exist as a person that makes sure the AI systems are running correctly and skim profits off the top of the AI substructure, with the lowest stratum being an underclass precariat doing the hands-on tickets from an AI agent at a continuously adjusted market price for the task.
Eventually robots will do this, but as long as humans do the actual IRL actions it makes me think of a dystopian future where all leadership decisions are made by harsh, micromanaging AI bosses and low-paying physical labor is the only job around for humans.
> AI doesn't need to drive a tractor. It needs to orchestrate the systems and people who do.
If people are involved then it's not an autonomous system. You could replace the orchestrator with the average logic defined expert system. Like come on, farming AGVs have come a long way, at least do it properly.
It's...interesting but I feel like people keep forgetting that LLMs like Claude don't really...think(?). Or learn. Or know what 'corn' or a 'tractor' is. They don't really have any memory of past experiences or a working memory of some current state.
They're (very impressive) next word predictors. If you ask it 'is it time to order more seeds?' and the internet is full of someone answering 'no' - that's the answer it will provide. It can't actually understand how many there currently are, the season, how much land, etc, and do the math itself to determine whether it's actually needed or not.
You can babysit it and engineer the prompts to be as leading as possible to the answer you want it to give - but that's about it.
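For what it's worth, the seed-order arithmetic this comment says the model can't do is trivially deterministic once the quantities are pinned down. A sketch with hypothetical numbers (plot size and inventory are made up; 80,000 kernels per bag and a seeding rate in the low 30-thousands per acre are common for corn, but verify locally):

```python
import math

# Hypothetical inputs for illustration only.
ACRES = 5                 # plot size, roughly the scale of the experiment
SEEDS_PER_ACRE = 34_000   # a typical corn seeding rate
KERNELS_PER_BAG = 80_000  # standard seed-corn unit size
bags_on_hand = 1

# Deterministic reorder decision: compare requirement to inventory.
seeds_needed = ACRES * SEEDS_PER_ACRE
bags_needed = math.ceil(seeds_needed / KERNELS_PER_BAG)
reorder = max(0, bags_needed - bags_on_hand)
print(f"need {bags_needed} bags, reorder {reorder}")
```

With these inputs: 170,000 seeds → 3 bags → reorder 2. The point of contention is whether the model reliably grounds itself in numbers like these or pattern-matches past them.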
I think you could have credibly said this for a while during 2024 and earlier, but there is a lot of research that indicates LLMs are more than stochastic parrots, as some researchers claimed earlier on. Souped up versions of LLMs have performed at the gold medal level in the IMO, which should give you pause in dismissing them. "It can't actually understand how many there currently are, the season, how much land, etc, and do the math itself to determine whether it's actually needed or not" --- modern agents actually can do this.
The world's most impressive stochastic parrot, resulting from billions of dollars of research by some of the world's most advanced mathematicians and computer scientists.
And capable of some very impressive things. But pretending their limitations don't exist doesn't serve anyone.
One thing I've come to realize in the race to achieve AGI: the humans involved don't want AGI, they want ASI. A single model that can do what an expert can, in every field, in a short period of time is not what I would consider a general intelligence at all.
I think this is the new Turing test. Once it's been passed we will have AGI, and all the Sam Altmans of the world will be proven correct. (This isn't a perfect test, obviously, but neither was the Turing test.)
If it fails to pass we will still have what jdthedisciple pointed out
> a non-farmer, is doing professional farmer's work all on his own without prior experience
I am actually curious how many people really believe AGI will happen. There's a lot of talk about it, but when can I ask Claude Code to build me a browser from scratch and actually get a browser from scratch? Or when can I ask Claude Code to grow corn and Claude Code grows corn? Never? In 2027? In 2035? In the year 3000?
HN seems rife with strong opinions on this, but does anybody really know?
Hint: It doesn't work that way.
Another hint: I'm a researcher.
Yes, we have found a great way to compress and remix the information we scrape from the internet, and even with some randomness it looks like we can emit a set of tokens that makes sense, or search the internet the right way and emit those search results, but AGI is more than that.
There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs, and from our own internal noise. AI models don't work on those. LLMs consume language and emit language. The information embedded in that language is available to them, but most of the tacit knowledge is just an empty shell of the thing we try to define with a limited set of words.
It's the same with anything we're trying to replace humans in real world, in daily tasks (self-driving, compliance check, analysis, etc.).
AI is missing the magic grains we can't put out as words or numbers or anything else. The magic smoke, if you pardon the term. This is why no amount of documentation can replace a knowledgeable human.
...or this is why McLaren Technology Center's aim of "being successful without depending on any specific human by documenting everything everyone knows" is an impossible goal.
Because like it or not, intuition is real, and AI lacks it. Irrelevant of how we derive or build that intuition.
The premise of the article is stupid, though...yes, they aren't us.
A human might grow corn, or decide it should be grown. But the AI doesn't need corn, it won't grow corn, and it doesn't need any of the other things.
This is why they are not useful to us.
Put it in science fiction terms. You can create a monster, and it can have super powers, _but that does not make it useful to us_. The extremely hungry monster will eat everything it sees, but it won't make anyone's life better.
The point, I think, is that even if LLMs can't directly perform physical operations, they can still make decisions about what operations are to be performed, and through that achieve a result.
And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.
Yes, what I'm trying to get at, it's much more vital we nail down the "person prompting and interpreting the LLM" part instead of focusing so much on the "autonomous robots doing everything".
Still an interesting experiment to see how much of the tasks involved can be handled by an agent.
But unless they've made a commitment not to prompt the agent again until the corn is grown, it's really a human doing it with agentic help, not Claude working autonomously.
Model UIs like Gemini have "scheduled actions", so in the initial prompt you could have it do things daily and send updates or reports, etc., and it will start the conversation with you. I don't think it's powerful enough to, say, spawn sub-agents, but there is some ability for them to "start chats".
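For reference, a minimal sketch of the "next run" computation such a scheduler performs before firing a daily report prompt (the 8 AM default is purely illustrative, and a real scheduler would then sleep until this time):

```python
import datetime as dt

# Compute the next daily 8:00 AM trigger relative to "now".
def next_run(now: dt.datetime, hour: int = 8) -> dt.datetime:
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        # Today's slot has passed; schedule for tomorrow.
        candidate += dt.timedelta(days=1)
    return candidate
```

Usage: `next_run(dt.datetime(2026, 3, 1, 9, 0))` yields March 2 at 8:00, since March 1's slot has already passed.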
If Claude only works when the task is perfectly planned and there are no exceptions, that's still operating at the "junior" level, where it's not reliable or composable.
There are people that I could hire in the real world, give $10k (I dunno if that's enough, but you understand what I mean) and say "Do everything necessary to grow 500 bushels of corn by October", and I would have corn in October. There are no AI agents where that's even close to true. When will that be possible?
Also, Seth, a non-farmer, was already capable of using Google, online forums, and Sci-Hub/Libgen to access farming-related literature before LLMs came on the scene. In this case the LLM is just acting as a super-charged search engine. A great and useful technology, sure. But we're not utilizing any entirely novel capabilities here.
And tbh, until we take a good crack at World Models, I doubt we can.
2) Regardless, I think it proves a vastly understated feature of AI: It makes people confident.
The AI may be truly informative, or it may hallucinate, or it may simply give mundane, basic advice. Probably all 3 at times. But the fact that it's there ready to assert things without hesitation gives people so much more confidence to act.
You even see it with basic emails. Myself included. I'm just writing a simple email at work, but I can feed it into AI, make some minor edits so it feels like my own words, and just dispense with worries about "am I giving too much info, not enough, using the right tone, being unnecessarily short or overly effusive, etc." And it's not that the LLMs are necessarily even an authority on these factors - it simply bypasses the process (writing) which triggers these thoughts.
>A guy is paying farmers to farm for him
Read up on farming. The labor is not the complicated part. Managing resources, including telling the labor what to do, when, and how is the complicated part. There is a lot of decision making to manage uncertainty which will make or break you.
Family of farmers here.
My family raises hundreds of thousands of chickens a year. They feed, water, and manage the healthcare and building maintenance for the birds. That is it. Baby birds show up in boxes at the start of a season, and trucks show up and take the grown birds once they reach weight.
There is a large faceless company that sends out contracts for a particular value and farmers can decide to take or leave it. There is zero need for human contact on the management side of the process.
At the end of the day there is little difference between a company assigning the work and having a bank account versus an AI following all the correct steps.
Pedantically, that's what a farmer does. The workers are known as farmhands.
I think it is impressive if it works. Like I mentioned in a sibling comment I think it already definitely proves something LLMs have accomplished though, and that is giving people tremendous confidence to try things.
It only works if you tell Claude..."grow me some fucking corn profitably and have it ready in 9 months" and it does it.
If it's being used as a manager to simply flesh out the daily commands that someone is giving it, well then that isn't "working", that's just a new level of what we already have with APIs and crap.
So if I grow biomass for fuel or feedstock for plastics that's not farming? I'm sure there are a number of people that would argue with you on that.
I'm from a part of the country where there are large chunks of land dedicated to experimental grain growing, which is research, and other than labels at the end of crop rows you'd have a difficult time telling it from any other farm.
TL;DR: why are you gatekeeping this so hard?
I'll see if my 6 year old can grow corn this year.
Then what you asked “do everything to grow …” would be a matter of “when?”, not “can?”
I'd say the only acceptable proof is one prompt context. But that's Gödel numbering Zeno's paradox of a halting problem.
Do people think prompting is not adding significant intelligence?
Overall I don't think this is useful. They might or might not get good results. However it is really hard to beat the farmer/laborer who lives close to the farm and thus sees things happen and can react quickly. There is also great value in knowing your land, though they should get records of what has happened in the past (this is all in a computer, but you won't always get access to it when you buy/lease land). Farmers are already using computers to guide decisions.
My prediction: they lose money. Not because the AI does stupid things (though that might happen), but because last year harvests were really good and so supply and demand means many farms will lose money no matter what you do. But if the weather is just right he could make a lot of money when other farmers have a really bad harvest (that is he has a large harvest but everyone else has a terrible harvest).
Iowa has strong farm ownership laws. There is real risk he will get shut down somehow because what he is doing is somehow illegal. I'm not sure what the laws are; check with a real lawyer. (This is why Bill Gates doesn't own Iowa farmland - he legally can't do what he wants with it.)
Replacing the farm manager with an AI multiplies that problem by a hundred. A thousand? A million? A lot. AI may get some sensor data but it's not going to stick its hand in the dirt and say "this feels too dry". It won't hear the weird pinging noise that the tractor's been making and describe it to the mechanic. It may try to hire underlings but, how will it know which employees are working hard and which ones are stealing from it? (Compare Anthropic's experiments with having AI run a little retail store, and get tricked into selling tungsten cubes at a steep discount.)
I got excited when I opened the website and at first had the impression that they'd actually gotten AI to grow something. Instead it's built a website and sent some emails. Not worth our attention, yet.
That's all rich people do. The premise of capitalism is that the people best at collecting rent should also be in total control of resource allocation.
Aren't these companies in the business of leasing land? I don't see how contacting them about leasing land would be spam or bothering them. And I don't really know what you mean by "with no legal authority to actually follow up with what is requested."
Betting millions of dollars in capital on its decision-making process, for something it wasn't even designed for and that is way more complicated than even I believed coming from a software background into farming, is patently ludicrous.
And 5 acres is a garden. I doubt he'll even find a plot to rent at that size, especially this close to seeding in that area.
Let's step back.
"there's a gap between digital and physical that AI can't cross"
Can intelligence of ANY kind, artificial or natural, grow corn? Do physical things?
Your brain is trapped in its skull. How does it do anything physical?
With nerves, of course. Connected to muscle. It's sending and receiving signals, that's all its doing! The brain isn't actually doing anything!
The history of humanity's last 300k years tells you that intelligence makes a difference, even though it isn't doing anything but receiving and sending signals.
An AI that can also plant corn itself (via robots it controls) is much more impressive to me than an AI just send emails.
The estimate seems to leave out a lot of factors, including irrigation, machinery, the literal seeds, and more. $800 for a "custom operator" for 7 months - I don't believe it. Leasing 5 acres of farmable land (for presumably a year) for less than $1400... I don't believe it.
The humans behind this experiment are going to get very tired of reading "Oh, you're right..." over and over - and likely end up deeply underwater.
[1] https://proofofcorn.com/budget
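Sanity-checking the disputed lease figure per acre is straightforward; the numbers below are taken from the comment above, not from the budget page itself:

```python
# Back-of-envelope check on the disputed lease figure (hypothetical numbers).
lease_total = 1_400   # quoted lease cost for 5 acres, USD
acres = 5
per_acre = lease_total / acres
print(f"implied cash rent: ${per_acre:.0f}/acre")
```

An implied $280/acre is roughly in line with recent published Iowa cash-rent averages for good row-crop ground, though whether anyone would actually lease a 5-acre sliver at the average per-acre rate is a fair question.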
Claude: Go to the owner of the building and say "if you tell me the height of your building I will give you this fine barometer."
The timing might need to be different but it would be good to see what the same amounts invested would yield from corn on the commodity market as well as from securities in farming partnerships.
Would it be fair if AI was used to play these markets too, or in parallel?
It would be interesting to see how different "varieties" of corn perform under the same calendar season.
Corn, nothing but corn as the actual standard of value :)
You don't get much any way you look at it for your $12.99 but it's a start.
Making a batch of popcorn now, I can already smell the demand on the rise :)
1. Do some research (as it's already done)
2. Rent the land and hire someone to grow the corn
3. Hire someone to harvest it, transport it, and store it
4. Manage to sell it
Doing #1 isn't terribly exciting - it's well established that AIs are pretty good at replacing an hour of googling - but if it could run a whole business process like this, that'd be neat.
But,
"I will buy fucking land with an API via my terminal"
Who has multiple millions of dollars to drop on an experiment like that?
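For what it's worth, the four-step process above is simple enough to express as a toy state machine; the catch, of course, is that every step name below hides real-world work no placeholder can do:

```python
from dataclasses import dataclass, field

# The four steps from the comment above, in order. Each is a placeholder
# that a human (or an agent with real tool access) would have to perform.
STEPS = ["research", "rent_and_plant", "harvest_and_store", "sell"]

@dataclass
class CornProject:
    done: list = field(default_factory=list)

    def complete(self, step: str) -> None:
        # Enforce the ordering: you can't sell corn you haven't grown.
        expected = STEPS[len(self.done)]
        if step != expected:
            raise ValueError(f"out of order: expected {expected!r}, got {step!r}")
        self.done.append(step)

    @property
    def finished(self) -> bool:
        return self.done == STEPS
```

The interesting question isn't modeling the pipeline - it's whether an agent can drive every transition without a human unblocking it.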
Ok then Seth is missing the point of the challenge: Take over the role of the farmhand.
> Everyone is working to try to automate the farmhand out of a job, but the novelty here is the thinking that it is actually the farmer who is easiest to automate away.
Everyone knows this. There is nothing novel here. Desk jockeys who just drive computers all day (the Farmer in this example) are _far_ easier to automate away than the hands-on workers (the farmhand). That’s why it would be truly revolutionary to replace the farmhand.
Or, said another way: Anything about growing corn that is “hands on” is hard to automate, all the easy to automate stuff has already been done. And no, driving a mouse or a web browser doesn’t count as “hands on”.
To be fair, all the stuff that hasn't been automated away is the same in all cases, farmer and farmhand alike: Monitoring to make sure the computer systems don't screw up.
The bet here is that LLMs are past the "needs monitoring" stage and can buy a multi-million dollar farm, along with everything else, without oversight, and Seth won't be upset about its choices in the end. Which, in fairness, is a more practical (at least less risky from a liability point of view) bet than betting that a multi-million dollar X9 without an operator won't end up running over a person and later upside-down in a ditch.
He may have many millions to spend on an experiment, but to truly put things to the test would require way more than that. Everyone has a limit. An MVP is a reasonable start. v2 can try to take the concept further.
I would have to look up farm services. Look up farmhand hiring services. Write a couple emails. Make a few payments. Collect my corn after the growing season. That's not an insurmountable amount of effort. And if we don't care about optimizing cost, it's very easy.
Also, how will Claude monitor the corn growing? I'm curious. It can't receive and respond to the emails autonomously, so you still have to be in the loop.
To make this a full AI experiment, emails to this inbox should be fielded by Claude as well.
(And if you read the linked post, … like this value function is established on a whim, with far less thought than some of the value-functions-run-amok in scifi…)
(and if you've never played it: https://www.decisionproblem.com/paperclips/index2.html )
"Thinking quickly, Dave constructs a homemade megaphone, using only some string, a squirrel, and a megaphone."
This seems like something along the lines of "We know we can use Excel to calculate profit/loss for a Mexican restaurant, but will it work for a Tibetan-Indonesian fusion restaurant? Nobody's ever done that before!"
I'll be following along, and I'm curious what kind of harness you'll put on TOP of Claude code to avoid it stalling out on "We have planted 16/20 fields so far, and irrigated 9/16. Would you like me to continue?"
I'd also like to know what your own "constitution" is regarding human oversight and intervention. Presumably you wouldn't want your investment to go down the drain if Claude gets stuck in a loop, or succumbs to a prompt injection attack to pay a contractor 100% of it's funds, or decides to water the fields with Brawndo.
How much are you allowing yourself to step in, and how will you document those interventions?
So this is a very legitimate test. We may learn some interesting ways that planting, growing, harvesting, storing, and selling corn can go wrong.
I certainly wouldn't expect to make money on my first or second try!
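A harness of the kind asked about above might start as little as this; `run_agent_step` is a hypothetical stand-in for whatever actually invokes the model, and the stall phrase is just the example from the comment:

```python
# Sketch of a supervision harness: auto-continue on stalls, cap total
# steps, and log every intervention so it can be documented later.
STALL_PHRASES = ("would you like me to continue",)
MAX_STEPS = 50

def supervise(run_agent_step, interventions):
    for step in range(MAX_STEPS):
        reply = run_agent_step("continue" if step else "start")
        if any(p in reply.lower() for p in STALL_PHRASES):
            # The model stalled asking for permission; nudge it and record that.
            interventions.append({"step": step, "intervention": "auto-continue"})
            continue
        if reply.strip().lower() == "done":
            return True
    return False  # step budget exhausted: time for a human to look
```

This only addresses stalling; guarding against prompt injection or a runaway payment would need separate checks (e.g. spending limits enforced outside the model).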
https://www.youtube.com/watch?v=IflNUap2HME
I've been rather expecting AI to start acting as a manager with people as its arms in the real world. It reminds me of the Manna short story[1], where it acts as a people manager with perfect intelligence at all times, interconnected not only with every system but also with other instances in other companies (e.g. for competitive wage data to minimize opex / pay).
1. https://marshallbrain.com/manna1
Pure dystopia.
The endless complaining and goalpost shifting is exhausting.
So, where are the exact logs of the prompts and responses to Claude? Under "/log" I do not see this.
https://pluralistic.net/2025/12/05/pop-that-bubble/
Unequivocally awful
"Stop staring at screens"
"Stop sitting at your desk all day"
"Stop loafing around contributing nothing just sending orders from behind a computer"
"Touch grass"
but now that the humans are finally gonna get out and DO something you're outraged
We feed it information as context to help us make a plan or strategy to achieve or get something.
They are doing the same. They will be feeding in the sensor, weather, and other info, so Claude can give them a plan to execute.
Ultimately, they need to execute everything.
I have zero doubt Claude is going to do what AI does and plough forward. Emails will get sent, recommendations made, stuff done.
And it will be slop. Worse than what it does with code, the outcomes of which are highly correlated with the expertise of the user past a certain point.
Seth wins his point. AI can, via humans giving it permission to do things, affect the world. So can my chaos monkey random script.
Fred should have qualified: _usefully_ affect the world. Deliver a margin of Utility.
We’re miles off that high bar.
Disclosure: all in on AI
Look up precision ag.
Anyway, turned it off; sure enough, misaligned.
https://autonomousforest.org/map
But where is the prompt or api calls to Claude? I can't see that in the repo
Or did Claude generate the code and repo too? And there is a separate project to run it
[0] https://www.imdb.com/title/tt1112115/
[0] https://www.imdb.com/title/tt10541088/
Most food is picked by migrant laborers, not machines.
Claude: Oh. My. God.