Not OP, but in my experience, Jest and Playwright are so much faster that it's not worth doing much with the MCP. It's a neat toy, but it's just too slow for an LLM to try to control a browser using MCP calls.
Yeah I think it would be better to just have the model write out playwright scripts than the way it's doing it right now (or at least first navigate manually and then based on that, write a playwright typescript script for future tests).
Cuz right now it's way too slow... perform an action, then read the results, then wait for the next tool call, etc.
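For what it's worth, the end state of that idea is mundane in a good way: the slow, MCP-driven exploration happens once, and what gets checked in is an ordinary Playwright test with no LLM in the loop. A rough sketch of what the generated artifact might look like (the URL, labels and selectors here are made up):

```typescript
// Hypothetical output of the "explore once, then codify" approach: a plain
// Playwright test that replays the flow the model figured out interactively.
import { test, expect } from '@playwright/test';

test('login flow still works', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('qa@example.com');
  await page.getByLabel('Password').fill('not-a-real-password');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Subsequent runs are fast and deterministic; the model is only needed again
  // when the UI changes enough to break the script.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```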
This is basically our approach with Herd[0]. We operate agents that develop, test and heal trails[1, 2], which are packaged browser automations that don't require browser-use LLMs to run and are therefore much cheaper and more reliable. Trail automations are then abstracted as a REST API and MCP[3], which can be used either as simple functions called from your code, or by your own agent, or any combination of the two.
You can build your own trails, publish them on our registry, compose them ... You can also run them in a distributed fashion over several Herd clients where we take care of the signaling and communication but you simply call functions. The CLI and npm & python packages [4, 5] might be interesting as well.
Note: The automation stack is entirely home-grown to enable distributed orchestration, and doesn't rely on Puppeteer or Playwright, but the browser automation API[6] is kept similar to ease adoption. We also don't use the Chrome DevTools Protocol and therefore have a different tradeoff footprint.

0: https://herd.garden

1: https://herd.garden/trails

2: https://herd.garden/docs/trails-automations

3: https://herd.garden/docs/reference-mcp-server

4: https://www.npmjs.com/package/@monitoro/herd

5: https://pypi.org/project/monitoro-herd/

6: https://herd.garden/docs/reference-page
> or at least first navigate manually and then based on that, write a playwright typescript script for future tests
This has always felt like a natural best use for LLMs - let them "figure something out" then write/configure a tool to do the same thing. Throwing the full might of an LLM every time you're trying to do something that could be scriptable is a massive waste of compute, not to mention the inconsistent LLM output.
Exactly this. I spent some time last week at a ~50-person web agency helping them set up a QA process where agents explore the paths and, based on those passes, write automated scripts that humans verify and put into the testing flow.
This has absolutely nothing in common with a model for computer use... This uses pre-defined tools provided in the MCP server by Google, nothing to do with a general model supposed to work for any software.
Gets stuck with:

> ...the task is just to "solve today's Wordle", and as a web browsing robot, I cannot actually see the colors of the letters after a guess to make subsequent guesses. I can enter a word, but I cannot interpret the feedback (green, yellow, gray letters) to solve the puzzle.
I was concerned there might be sensitive info leaked in the browserbase video at 0:58 as it shows a string of characters in the browser history:
nricy.jd t.fxrape oruy,ap. majro
3 groups of 8 characters, space separated followed by 5 for a total of 32 characters. Seemed like text from a password generator or maybe an API key? Maybe accidentally pasted into the URL bar at one point and preserved in browser history?
I asked ChatGPT about it and it revealed:
Not a password or key — it’s a garbled search query typed with the wrong keyboard layout.
If you map the text from Dvorak → QWERTY,
nricy.jd t.fxrape oruy,ap. majro → “logitech keyboard software macos”.

Very nice solve, ChatGPT.
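The mapping is easy to sanity-check without ChatGPT: pair up the character each physical key produces under Dvorak with the one it produces under QWERTY, then translate character by character. A small sketch (lowercase letters and basic punctuation only):

```typescript
// Decode text produced when someone typed QWERTY key positions while the OS
// layout was set to Dvorak: map each Dvorak output character back to the
// QWERTY character on the same physical key.
const DVORAK = `',.pyfgcrl/=aoeuidhtns-;qjkxbmwvz`;
const QWERTY = `qwertyuiop[]asdfghjkl;'zxcvbnm,./`;

const dvorakToQwerty = new Map<string, string>();
for (let i = 0; i < DVORAK.length; i++) {
  dvorakToQwerty.set(DVORAK[i], QWERTY[i]);
}

const decode = (s: string) =>
  [...s].map((ch) => dvorakToQwerty.get(ch) ?? ch).join('');

console.log(decode('nricy.jd t.fxrape oruy,ap. majro'));
// -> "logitech keyboard software macos"
```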
Is this as impressive as it initially seems, though? A Bing search for the text turns up some web results for Dvorak-to-QWERTY conversion, I think because the word ‘t.fxrape’ (keyboard) hits. So there’s a lot of good luck happening there.
Here's the chat session - you can expand the thought process and see that it tried a few things (hands misaligned with the keyboard, for example) before testing the Dvorak keyboard layout idea: https://chatgpt.com/share/68e5e68e-00c4-8011-b806-c936ac657a...
I also found it interesting that despite me suggesting it might be a password generator or API key, ChatGPT doesn't appear to have given that much consideration.
Interesting that they're allowing Gemini to solve CAPTCHAs, because OpenAI's agent detects CAPTCHAs and forces user input for them despite being fully able to solve them.
Just a matter of time until they lose their customer base to other AI tools. Why would I waste my time when the AI is capable of doing it, yet forces me to do unnecessary work? Same with Claude: it can’t even draft an email in Gmail, too afraid to type…
Knowing it's technically possible is one thing, but giving it a short command and seeing it go log in to a site, scroll around, reply to posts, etc. is eerie.
Also it tied me at Wordle today, making the same mistake I did on the second-to-last guess. Too bad you can't talk to it while it's working.
I believe it will need very capable but small VLMs that understand common User Interfaces very well -- small enough to run locally -- paired with any other higher level models on the cloud, to achieve human-speed interactions and beyond with reliability.
Many years ago I was sitting at a red light on a secondary road, where the primary cross road was idle. It seemed like you could solve this using a computer vision camera system that watched the primary road and when it was idle, would expedite the secondary road's green light.
This was long before computer vision was mature enough to do anything like that and I found out that instead, there are magnetic systems that can detect cars passing over - trivial hardware and software - and I concluded that my approach was just far too complicated and expensive.
Similarly, when I look at computers, I typically want the ML/AI system to operate on structured data that is codified for computer use. But I guess the world is complicated enough and computers got fast enough that having an AI look at a computer screen and move/click a mouse makes sense.
Ironically now that computer vision is commonplace, the cameras you talk about have become increasingly popular over the years because the magnetic systems do not do a very good job of detecting cyclists and the cameras double as a congestion monitoring tool for city staff.
Cameras are being used to detect traffic and change lights? I don't think that's happening in the USA. Which country are you referring to here?

It's been happening in the USA for quite a long time.
Anecdotally, the small city I grew up in, in Ohio (USA), started using cameras and some kind of computer vision to operate traffic signals 15 or 20 years ago, replacing inductive loops.
I used to hang out sometimes with one of the old-timers who dealt with it as part of his long-time street department job. I asked him about that system once (over a decade ago now) over some drinks.
"It doesn't fuckin' work," I remember him flatly telling me before he quite visibly wanted to talk about anything other than his day job.
The situation eventually improved -- presumably, as bandwidth and/or local processing capabilities have also improved. It does pretty well these days when I drive through there, and the once-common inductive loops (with their tell-tale saw kerfs in the asphalt) seem to have disappeared completely.
(And as a point of disambiguation: They are just for controlling traffic lights. There have never been any speed or red light cameras in that city. And they're distinctly separate from traffic preemption devices, like the Opticom system that this city has used for an even longer time.)
---
As a non-anecdotal point of reference, I'd like to present an article from ~20 years ago about a system in a different city in the US that was serving a similar function at that time: https://www.toacorn.com/articles/traffic-cameras-are-not-spy...
Yes. I can't speak to the USA, as I'm from Canada, but I've had conversations with traffic engineers from another city about it and increasingly seen them in my own city. Here's an example of one of the systems: https://www.iteris.com/oursolutions/pedestrian-cyclist-safet...
They're obviously more common in higher density areas with better cycling infrastructure. The inductive loops are effectively useless with carbon fibre bicycles especially, so these have been a welcome change. But from what I was told these also are more effective for vehicle traffic than the induction loops as drivers often come to a stop too far back to be detected, plus these also allow conditional behaviour based on the number of vehicles waiting and their lanes (which can all be changed without ripping up the road).
Has been for the better part of a decade. Google `Iteris Vantage` and you will see some of the detection systems.

What US cities have these?

> The system applies artificial intelligence to traffic signals equipped with cameras or radars adapting in realtime to dynamic traffic patterns of complex urban grids, experienced in neighborhoods like East Liberty in the City of Pittsburgh
Now, that said, I have serious issues with that system: It seemed heavily biased to vehicle throughput over pedestrians, and it's not at all clear that it was making the right long-term choice as far as the incentives it created. But it _was_ cameras watching traffic to influence signaling.

https://www.transportation.gov/utc/surtrac-people-upgrading-...

https://en.wikipedia.org/wiki/Scalable_Urban_Traffic_Control
Those cameras aren’t usually easily or cheaply adapted to surveillance. Most are really simple and don’t have things like reliable time sync. Also, road jurisdictions are really complex and surveillance requires too much coordination. State, county, town, city all have different bureaucratic processes and funding models.
Surveillance is all about Flock. The feds are handing out grants to everyone, and the police drop the things everywhere. They can locate cars, track routine trips, and all sorts of creepy stuff.

https://deflock.me
In my city, cameras for traffic light control are on almost every signalized intersection, and the video is public record and frequently used to review collisions. These cameras are extremely cheaply and easily adapted to surveillance. Public records are public records statewide.
With all due respect, you are kidding yourself if you think those cameras aren’t used for surveillance/ logging
They don’t have to be “adapted” to surveillance - they are made with that in mind
Obviously older generations of equipment aren’t included here - so technically you may be correct for old/outdated equipment installed areas that aren’t of interest
Until the politicians want to come after you for posts you make on HN, or any other infraction they decide is now an issue.
History is littered with the literal bones of people who thought they had nothing to fear from the state. The state is not your friend, and is not looking out for you.
go watch any movie about a panopticon for the (overdiscussed) side-effects of a surveillance state.
Fiction works, but if you want to spend the evening depressed then go for any East/West Germany (true) stories.
Being for or against surveillance I can understand, but just not understanding the issue? No excuses -- personal surveillance for the sake of the state is one of the most discussed social concepts in the world.

Might be because the pattern on your face or T-shirt matches something bad.

And this kind of stuff already happened in the UK even before the "AI craze". Hundreds of people were imprisoned because of a faulty accounting system:

https://en.m.wikipedia.org/wiki/British_Post_Office_scandal

"Computer says you go to prison"!

What if that changes?

Great! Then you don't mind telling us your email password!

https://www.amazon.com/Three-Felonies-Day-Target-Innocent/dp...
It was my first engineering job, calibrating those inductive loops and circuit boards on I-93, just north of Boston's downtown area. Here is the photo from 2006. https://postimg.cc/zbz5JQC0
PEEK controller, 56K modem, Verizon telco lines, rodents - all included in one cabinet
I cycle a lot. Outdoors I listen to podcasts and the fact that I can say "Hey Google, go back 30sec" to relisten to something (or forward to skip ads) is very valuable to me.
Indoors I tend to cast some show or youtube video. Often enough I want to change the Youtube video or show using voice commands - I can do this for Youtube, but results are horrible unless I know exactly which video I want to watch. For other services it's largely not possible at all
In a perfect world Google would provide superb APIs for these integrations and all app providers would integrate it and keep it up to date. But if we can bypass that and get good results across the board - I would find it very valuable
I understand this is a very specific scenario. But one I would be excited about nonetheless
Do you have a lot of dedicated cycle ways? I'm not sure I'd want to have headphones impeding my hearing anywhere I'd have to interact with cars or pedestrians while on my bike.
Lots of noise cancelling headphones have a pass-through mode that lets you hear the outside world. Alternatively, I use bone conducting headphones that leave my ears uncovered.
Yes, I bike on the Chicago lakefront; up and down is like 40 miles for me.

Also, biking on roads you should never count on sound to guide you; you should always use vision. For example, when making a left you have to visually establish that the driver coming straight has made eye contact with you, or at least looked at you.

Can you share an example of how you are using sound to help you ride with other vehicles on the road? Are you maybe talking about honking? That you will hear over podcasts.
The sound of a revving engine is often the first warning you have that someone is about to pass you, and how they handle it is a good sign of how likely they are to attempt a close pass rather than overtake in the legal manner with the minimum distance.

Audio cues are less and less useful as electric vehicles become more popular. (I am a city biker and there are plenty already.)
I recently spent some time in a country house far enough from civilization that electric lines don’t reach. The owners could have installed some solar panels, but they opted to keep it electricity-free to disconnect from technology, or at least from electronics. They have multiple decades old ingenious utensils that work without electricity, like a fridge that uses propane, oil lamps, non-electric coffee percolator, etc. and that made me wonder, how many analogous devices stopped getting invented because an electric device is the most obvious way of solving things to our current view.
My town solved this at night by putting simple light sensors on the traffic lights, so as you approach you can flash your brights at it and it triggers a cycle.
Otherwise the higher traffic road got a permanent green light at nighttime until it saw high beams or magnetic flux from a car reaching the intersection.
Computer use is the most important AI benchmark to watch if you're trying to forecast labor-market impact. You're right, there are much more effective ways for ML/AI systems to accomplish tasks on the computer. But they all have to be hand-crafted for each task. Solving the general case is more scalable.
Not the current benchmarks, no. The demos in this post are so slow. Between writing the prompt, waiting a long time and checking the work I’d just rather do it myself.
It's about working independently while you do other things.

For instance: I do periodic database-level backups of a very closed-source system at work. It doesn't take much of my time, but it's annoying in its simplicity: Run this GUI Windows program, click these things, select this folder, and push the go button. The backup takes as long as it takes, and then I look for obvious signs of either completion or error on the screen sometime later.
With something like this "Computer Use" model, I can automate that process.
It doesn't matter to anyone at all whether it takes 30 seconds or 30 minutes to walk through the steps: It can be done while I'm asleep or on vacation or whatever.
I can keep tabs on it with some combination of manual and automatic review, just like I would be doing if I hired a real human to do this job on my behalf.
(Yeah, yeah. There's tons of other ways to back up and restore computer data. But this is the One, True Way that is recoverable on a blank slate in a fashion that is supported by the manufacturer. I don't get to go off-script and invent a new method here.
But a screen-reading button-clicker? Sure. I can jive with that and keep an eye on it from time to time, just as I would be doing if I hired a person to do it for me.)
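The control loop for that kind of unattended job is conceptually tiny; all the difficulty lives in the model's judgment inside it. A hypothetical sketch, with the model call and the input-injection layer stubbed out (these helpers are stand-ins, not a real SDK surface):

```typescript
// Sketch of a screenshot -> decide -> act loop for an unattended GUI task.
// `takeScreenshot`, `requestNextAction` and `performAction` are hypothetical
// stand-ins for a capture layer, a computer-use model call, and an input driver.
type Action =
  | { kind: 'click'; x: number; y: number }
  | { kind: 'type'; text: string }
  | { kind: 'done'; summary: string };

declare function takeScreenshot(): Promise<Buffer>;
declare function requestNextAction(goal: string, screenshot: Buffer): Promise<Action>;
declare function performAction(action: Action): Promise<void>;

export async function runUnattendedTask(goal: string, maxSteps = 50): Promise<string> {
  for (let step = 0; step < maxSteps; step++) {
    const screenshot = await takeScreenshot();                 // observe
    const action = await requestNextAction(goal, screenshot);  // decide
    if (action.kind === 'done') return action.summary;         // e.g. "backup completed" or an error report
    await performAction(action);                               // act, then loop
  }
  throw new Error('Gave up: too many steps without completion');
}
```

Whether each step takes seconds or minutes hardly matters for a job like this, which is exactly the point.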
The camera systems are also superior from an infrastructure maintenance perspective. You can update them with new capabilities or do re-striping without tearing up the pavement.
If I read the web page correctly, they don't actually use that as a solution to shortening a red - IMHO that has a very high safety bar compared to the more common uses. But I'd be happy to hear this is something that Just Works in the Real World with a reasonable false positive and false negative rate.

Motorcyclists would conclude that your approach would actually work.
> But I guess the world is complicated enough and computers got fast enough that having an AI look at a computer screen and move/click a mouse makes sense.
It's not that the world is particularly complicated here - it's just that computing is a dynamic and adversarial environment. End-user automation consuming structured data is a rare occurrence not because it's hard, but because it defeats pretty much every way people make money on the Internet. AI is succeeding now because it is able to navigate the purposefully unstructured and obtuse interfaces like a person would.
There is a lot of pretraining data available around screen recordings and mouse movements (Loom, YouTube, etc). There is much less pretraining data available around navigating accessibility trees or DOM structures. Many use cases may also need to be image-aware (document scan parsing, looking at images), and keyboard/video/mouse-based models generalize to more applications.
It's funny I'll sometimes scoot forward/rock my car but I'm not sure if it's just coincidence. Also a lot of stop lights now have that tall white camera on top.
There are several mechanisms. The most common is (or at least was) a loop detector under the road that triggers when a vehicle is over it. Sometimes you're not quite over it, or it's somewhat faulty, and moving a bit will trigger it.
I just have to say that I consider this an absolutely hilarious outcome. For many years, I focused on tech solutions that eliminated the need for a human to be in front of a computer doing tedious manual operations. For a wide range of activities, I proposed we focus on "turning everything in the world into database objects" so that computers could operate on them with minimal human effort. I spent significant effort in machine learning to achieve this.
It didn't really occur to me that you could just train a computer to work directly on the semi-structured human world data (display screen buffer) through a human interface (mouse + keyboard).
However, I fully support it (like all the other crazy ideas on the web that beat out the "theoretically better" approaches). I do not think it is unrealistic to expect that within a decade, we could have computer systems that can open chrome, start a video chat with somebody, go back and forth for a while to achieve a task, then hang up... without the person on the other end ever knowing they were dealing with a computer instead of a human.
AI is succeeding where "theoretically better" approaches failed, because it addresses the underlying social problem. The computing ecosystem is an adversarial place, not a cooperative one. The reason we can't automate most of the tedium is by design - it's critical to how almost all money is made on the Internet. Can't monetize users when they automate your upsell channels and ad exposure away.
I saw similar discussions around robotics, people saying "why are they making the robots humanoid? couldn't they be a more efficient shape" and it comes back to the same thing where if you want the tool to be adopted then it has to fit in a human-centric world no matter how inefficient that is.
high performance applications are still always custom designed and streamlined, but mass adoption requires it to fit us not us to fit it.
I was thinking about that last point in the context of dating this morning: if my "chatgpt" knew enough about me to represent me well, a dating app could facilitate a pre-screening with someone else's "chatgpt", and that would be interesting. I heard someone in an enterprise keynote recently talking about "digital twins" - I believe this is that. Not sure what I think about it yet generally, or where it leads.

https://en.wikipedia.org/wiki/Hang_the_DJ
> we could have computer systems that can open chrome, start a video chat with somebody, go back and forth for a while to achieve a task, then hang up... without the person on the other end ever knowing they were dealing with a computer instead of a human.
Doesn't that...seem bad?
I mean, it would certainly be a monumental and impressive technical accomplishment.

But it still seems...quite bad to me.
The main reason you might not know if it is a human or not is that the human interactions are so bad (eg help desk call, internet provider, any utility, even the doctor’s office front line non-medical staff).
Really feels like computer use models may be vertical agent killers once they get good enough. Many knowledge work domains boil down to: use a web app, send an email. (e.g. recruiting, sales outreach)
>This will never hit a production enterprise system without some form of hooks/callbacks in place to instill governance. Obviously much harder with UI vs agent events similar to the below.
>https://docs.claude.com/en/docs/claude-code/hooks
>https://google.github.io/adk-docs/callbacks/
Knowing how many times Claude Code has breezed through a hook call and thrown the result away (actually computing the hook for an answer and then proceeding to not integrate the hook results), I think the concept of 'governance' is laughable.
LLMs are so much further from determinism/governance than people seem to realize.
I've even seen earlier CC breeze through a hook that ends with a halting test failure and "DO NOT PROCEED" verbiage. The only hook that is guaranteed to work on call is a big theoretical dangerous claude-killing hook.

Do you think callbacks are how this gets done?
Disclaimer: I'm a cofounder; we focus on critical spaces with AI. Also, I was the one behind the feature request for Claude Code hooks.
But my bet - we will not deploy a single agent into any real environment without deterministic guarantees. Hooks are a means...
Browserbase with hooks would be really powerful, governance beyond RBAC (but of course enabling relevant guardrailing as well - "does agent have permission to access this sharepoint right now, within this context, to conduct action x?").
I would love to meet with you actually, my shop cares intimately about agent verification and governance. Soon to release the tool I originally designed for claude code hooks.
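One concrete shape for those "deterministic guarantees" is a policy gate that sits between the model's proposed action and whatever executes it, so the check cannot be skipped no matter what the model emits. A hypothetical sketch (the types and policy shape are invented for illustration):

```typescript
// A deterministic guard: every proposed action passes through checkPolicy()
// before it can reach the executor. The model never calls the executor directly.
type ProposedAction = { tool: string; target: string; context: string };
type PolicyDecision = { allow: boolean; reason: string };

function checkPolicy(action: ProposedAction, allowedTargets: Set<string>): PolicyDecision {
  if (!allowedTargets.has(action.target)) {
    return { allow: false, reason: `target ${action.target} not in allow-list` };
  }
  return { allow: true, reason: 'ok' };
}

async function execute(action: ProposedAction): Promise<void> {
  // Hand off to the browser / RPA layer here.
}

export async function guardedExecute(action: ProposedAction, allowedTargets: Set<string>) {
  const decision = checkPolicy(action, allowedTargets);
  if (!decision.allow) {
    // Unlike an LLM "hook" the model can talk itself past, this is a hard stop.
    throw new Error(`Blocked by policy: ${decision.reason}`);
  }
  await execute(action);
}
```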
> It is not yet optimized for desktop OS-level control
Alas, AGI is not yet here. But I feel like if this OS-level of control was good enough, and the cost of the LLM in the loop wasn't bad, maybe that would be enough to kick start something akin to AGI.
It’s the same old superiority complex that birthed the “IT Guy” stereotypes of the 90s/aughts. It stems from a) not understanding what problems non-developers need computers to solve for them, and b) ego-driven overestimation of the complexity of their field compared to others.
We should be very specific and careful with our words. pseidemann said "most humans cannot properly control a computer", which isn't the same as "most people are incapable of using a computer".
I would agree with pseidemann. There's a level of understanding and care and focus that most people lack. That doesn't make those people less worthy of love and care and support, and computers are easier to use than ever. Most people don't know what EFI is, nor should they have to. If all someone needs from the computer to be able to update their facebook, the finer details of controlling a computer aren't, and shouldn't be important to them, and that's okay!
Humanity's goal should have been to make the smartest human possible, but no one got the memo, so we're raising the bar by augmenting everyone with technology instead of implementing eugenics programs.
One thought is once it can touch the underlying system, it can provision resources, spawn processes, and persist itself, crossing the line from tool to autonomous entity. I admit you could do that in a browser shell nowadays, just maybe with more restrictions and guardrails.
I don’t have any strong opinions here, but I do think a lower cost to escape the walled gardens agi starts in will be a factor
Not just the os, but browsing control is enough to do 99% of the things he would want autonomously.
Bank account + id + browser: has all the tools it needs to do many things:
- earn money
- allocate money
- create accounts
- delegate physical jobs to humans
Create his own self loop in a server. Create a server account, use credit card + id provided, self host his own code… can now focus on getting more resources.
The rendered visual layout is designed in a way to be spatially organized perceptually to make sense. It's a bit like PDFs. I imagine that the underlying hierarchy tree can be quite messy and spaghetti, so your best bet is to use it in the form that the devs intended and tested it for.
I think screenshots are a really good and robust idea. It bothers the more structured-minded people, but apps are often not built so well. They are built until the point that it looks fine and people are able to use it. I'm pretty sure people who rely on accessibility systems have lots of complaints about this.
The progressives were pretty good at pushing accessibility in applications, it's not perfect but every company I've worked with since the mid 2010s has made a big todo about accessibility. For stuff on linux you can instrument observability in a lot of different ways that are more efficient than screenshots, so I don't think it's generally the right way to move forward, but screenshots are universal and we already have capable vision models so it's sort of a local optimization move.
I think I'll make that my equivalent of Simon Willison's "pelican riding a bicycle" test. It is fairly simple to explain but seems to trip up different LLMs in different ways.
My general experience has been that Gemini is pretty bad at tool calling. The recent Gemini 2.5 Flash release actually fixed some of those issues but this one is Gemini 2.5 Pro with no indication about tool calling improvements.
Interesting, seems to use 'pure' vision and x/y coords for clicking stuff. Most other browser automation with LLMs I've seen uses the dom/accessibility tree which absolutely churns through context, but is much more 'accurate' at clicking stuff because it can use the exact text/elements in a selector.
Unfortunately it really struggled in the demos for me. It took nearly 18 attempts to click the comment link on the HN demo, each a few pixels off.
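The tradeoff is easy to see in Playwright terms: a raw coordinate click has no idea whether it hit the right element, while a role/text locator either resolves the element or fails loudly, at the price of feeding much more page structure into context. Illustrative sketch (the coordinates and selector are arbitrary):

```typescript
import { chromium } from 'playwright';

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto('https://news.ycombinator.com/');

// Vision/coordinate style: clicks whatever happens to be at (612, 344);
// being a few pixels off silently hits the wrong thing, hence the retries.
await page.mouse.click(612, 344);

// DOM/accessibility style: resolves the element by role and accessible name,
// auto-waits, and throws if it can't find it.
await page.getByRole('link', { name: 'comments' }).first().click();

await browser.close();
```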
How am I supposed to use this? I really can't think of a use case, but I don't want to be blindsided, as obviously a lot of money is going into this. I also appreciate the tech behind it and the functionality, but I still wonder about use cases.

At current latency, there's a bunch of async automation use cases that one could use this for. For example:
* Tedious to complete, easy to verify
BPO: Filling out doctor licensing applications based on context from a web profile
HR: Move candidate information from an ATS to HRIS system
Logistics: Fill out a shipping order form based on a PDF of the packing label
* Interact with a diverse set of sites for a single workflow
Real estate: Diligence on properties that involves interacting with one of many county records websites
Freight forwarding: Check the status of shipping containers across 1 of 50 port terminal sites
Shipping: Post truck load requests across multiple job board sites
BPO: Fetch the status of a Medicare coverage application from 1 of 50 state sites
BPO: Fill out medical license forms across multiple state websites
* Periodic syncs between various systems of record
Clinical: Copy patient insurance info from Zocdoc into an internal system
HR: Move candidate information from an ATS to HRIS system
Customer onboarding: Create Salesforce tickets based on planned product installations that are logged in an internal system
Logistics: Update the status of various shipments using tracking numbers on the USPS site
* PDF extraction to system interaction
Insurance: A broker processes a detailed project overview and creates a certificate of insurance with the specific details from the multi-page document by filling out an internal form
Logistics: Fill out a shipping order form based on a PDF of the packing label
Clinical: Enter patient appointment information into an EHR system based on a referral PDF
Accounting: Extract invoice information from up to 50+ vendor formats and enter the details into a Google sheet without laborious OCR setup for specific formats
Mortgage: Extract realtor names and addresses from a lease document and look up the license status on various state portals

* Self-healing broken RPA workflows
I would love to use this for E2E testing. It would be great to make all my assertions with high level descriptions so tests are resilient to UI changes.
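If tooling like this matures, the appeal for E2E tests is assertions phrased as intent rather than selectors. A purely hypothetical sketch: `assertWithModel` is not a real API, just a stand-in for shipping a screenshot and a natural-language claim to a vision model:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical helper: sends a screenshot plus a natural-language claim to a
// vision model and returns whether the claim appears to hold. Not a real API.
declare function assertWithModel(screenshot: Buffer, claim: string): Promise<boolean>;

test('checkout summary looks right', async ({ page }) => {
  await page.goto('https://example.com/checkout');

  const screenshot = await page.screenshot();
  expect(
    await assertWithModel(
      screenshot,
      'An order summary is visible showing one item and a total in USD'
    )
  ).toBe(true);

  // The intent survives a redesign; a CSS-selector assertion usually would not.
});
```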
Seems similar to the Amazon Nova Act API which is still in research preview.
No easy answers on this one unfortunately, lots of conversations ongoing on these - but our default stance has been to hand back control to the user in cases of captcha and have them solve these when they arise.
The irony is that most of tech companies make their money by forcing users to wade through garbage. For example, if you could browse the internet and avoid ads, why wouldn't you? If you could choose what twitter content to see outside of their useless algorithms, why wouldn't you?
Why do you think we have fully self driving cars instead of just more simplistic beacon systems? Why doesn't McDonald's have a fully automated kitchen?
New technology is slow due to risk aversion, it's very rare for people to just tear up what they already have to re-implement new technology from the ground up. We always have to shoe-horn new technology into old systems to prove it first.
There are just so many factors that get solved by working with what already exists.
About your self-driving car point, I feel like the approach I'm seeing is akin to designing a humanoid robot that uses its robotic feet to control the brake and accelerator pedals, and its hand to move the gear selector.
Yeah, that would be pretty good honestly. It could immediately upgrade every car ever made to self driving and then it could also do your laundry without buying a new washing machine and everything else. It's just hard to do. But it will happen.
Yes, it sounds very cool and sci-fi, but having a humanoid control the car seems less safe than having the spinning cameras and other sensors that are missing from older cars or those that weren't specifically built to be self-driving. I suppose this is why even human drivers are assisted by automatic emergency braking.
I am more leaning into the idea that an efficient self-driving car wouldn't even need to have a steering wheel, pedals, or thin pillars to help the passengers see the outside environment or be seen by pedestrians.
The way this ties back to the computer use models is that a lot of webpages have stuff designed for humans that would make it difficult for a model to navigate them well. I think this was the goal of the "semantic web".

While the self-driving car industry aims to replace all humans with machines, I don't think this is the case with browser automation.

I see this technology as more similar to a crash dummy than a self-driving system. It's designed to simulate a human in very niche scenarios.

[Looks around and sees people not making APIs for everything]

Well that didn't work.
Every website and application is just layers of data. Playwright and similar tools have options for taking Snapshots that contain data like text, forms, buttons, etc that can be interacted with on a site. All the calls a website makes are just APIs. Even a native application is made up of WinForms that can be inspected.
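For the structured route, you often don't need a model at all to get at that layer; Playwright will hand over the text, links and controls directly. A small sketch:

```typescript
import { chromium } from 'playwright';

// Pull structured data out of a page without rendering it for a vision model.
const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto('https://example.com');

const title = await page.title();
const links = await page.getByRole('link').evaluateAll((els) =>
  els.map((a) => ({ text: a.textContent?.trim(), href: (a as HTMLAnchorElement).href }))
);
const buttonLabels = await page.getByRole('button').allTextContents();

console.log({ title, links, buttonLabels });
await browser.close();
```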
Ah, so now you're turning LLMs into web browsers capable of parsing Javascript to figure out what a human might be looking at, let's see how many levels deep we can go.
Just inspect the memory content of the process. It's all just numbers at the end of the day & algorithms do not have any understanding of what the numbers mean other than generating other numbers in response to the input numbers. For the record I agree w/ OP, screenshots are not a good interface for the same reasons that trains, subways, & dedicated lanes for mass transit are obviously superior to cars & their attendant headaches.
Maybe some day, sure. We may eventually live in a utopia where everyone has quick, efficient, accessible mass transit available that allows them to move between any two points on the globe with unfettered grace.
That'd be neat.
But for now: The web exists, and is universal. We have programs that can render websites to an image in memory (solved for ~30 years), and other programs that can parse images of fully-rendered websites (solved for at least a few years), along with bots that can click on links (solved much more recently).

Maybe tomorrow will be different.

It'll never happen, so companies need to deal with the reality we have.
The point was that process memory is the source of truth; everything else is derived and only throws away information that a neural network could use to make better decisions. Presentation of data is irrelevant to a neural network, it's all just numbers & arithmetic at the end of the day.
We can build tons of infrastructure for cars that didn't exist before but can't for other things anymore? Seems like society is just becoming lethargic.
In my country there's a multi-airline API for booking plane tickets, but the cheapest of economy carriers only accept bookings directly on their websites.
If you want to make something that can book every airline? Better be able to navigate a website.
Except if it's a messy div soup with various shitty absolute and relative pixel offsets, where the only way to know what refers to what is by rendering it and using gestalt principles.

It does, because it's hard to infer where each element will end up in the render. So a checkbox may be set up in a shitty way such that the corresponding text label is not properly placed in the DOM, so it's hard to tell what the checkbox controls just based on the DOM tree. You have to take into account the styling and pixel placement, i.e. render it properly and look at it.
That's just one obvious example, but the principle holds more generally.
Spatial continuity has nothing to do w/ how neural networks interpret an array of numbers. In fact, there is nothing about the topology of the input that is in any way relevant to what calculations are done by the network. You are imposing an anthropomorphic structure that does not exist anywhere in the algorithm & how it processes information. Here is an example to demonstrate my point: https://x.com/s_scardapane/status/1975500989299105981
It would have to implicitly render the HTML+CSS to know which two elements visually end up next to each other, if the markup is spaghetti and badly done.
It's not ridiculous if you understand how neural networks actually work. Your perception of the numbers has nothing to do w/ the logic of the arithmetic in the network.
The original comment I replied to said "You can navigate a website without visually decoding the image of a website." I replied that decoding is necessary to know where the elements will end up in a visual arrangement, because often that carries semantics. A label that is rendered next to another element can be crucial for understanding the functioning of the program. It's nontrivial just from the HTML or whatever tree structure where each element will appear in 2D after rendering.
2D rendering is not necessary for processing information by neural networks. In fact, the image is flattened into 1D array & loses the topological structure almost entirely b/c the topology is not relevant to the arithmetic performed by the network.
I'm talking about HTML (or other markup, in the form of text) vs image. That simply getting the markup as text tokens will be much harder to interpret since it's not clear where the elements will end up. I guess I can't make this any more clear.
It's the brand of stuff that works. Expert systems and formal, symbolic, if-else, rules-based reasoning were tried; they failed. Real life is messy and fat-tailed.
Yes, and here they also operate deterministic GUI tools. Thing is, many GUI programs are not designed so well. Their best interface and the only interface they were tested and designed for is the visual one.
What you say is 100% true until it's not. It seems like a weird thing to say (what I'm saying), but please consider we're in a time period where everything we say is true minute by minute, and no more. It could be that the next version of this just works, and works really well.
“The Gemini 2.5 Computer Use model is primarily optimized for web browsers, but also demonstrates strong promise for mobile UI control tasks. It is not yet optimized for desktop OS-level control.”
How big are Gemini 2.5(Pro/Flash/Lite) models in parameter counts, in experts' guesstimation? Is it towards 50B, 500B, or bigger still? Even Flash feels smart enough for vibe coding tasks.
I think it's related that I got an email from Google titled “Simplifying your Gemini Apps experience”. It reads like: no privacy, maximize AI. They are going to automatically collect data from all Google apps, and users no longer have options to control access for individual apps.
The programmable web is a dead 20-year-old dream that was crushed by the evil tech monopolists, Facebook, Google, etc. This emerging LLM-based automation tech is a glimmer of hope that we will be able to regain our data and autonomy.

There's a goldmine to be had in automating ancient workflows that keep large corps alive.

Impressive tech nonetheless
We can definitely make the docs more clear here, but the model requires using the computer_use tool. If you have custom tools, you'll need to exclude predefined tools if they clash with our action space.

See this section: https://googledevai.devsite.corp.google.com/gemini-api/docs/...

And the repo has a sample setup for using the default computer use tool: https://github.com/google/computer-use-preview
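For anyone skimming, the request ends up looking roughly like the sketch below. The model id and the field names (`computer_use`, `environment`, `excluded_predefined_functions`) are my reading of the preview docs and may not match exactly, so treat it as the shape of the thing rather than copy-paste:

```typescript
// Hedged sketch of a REST generateContent call that enables the computer_use tool
// and excludes one predefined action that clashes with a custom tool.
// Field names and the model id are assumptions; verify against the linked docs/repo.
const MODEL = 'gemini-2.5-computer-use-preview-10-2025';

const body = {
  contents: [{ role: 'user', parts: [{ text: 'Open the dashboard and export the weekly report.' }] }],
  tools: [
    {
      computer_use: {
        environment: 'ENVIRONMENT_BROWSER',
        excluded_predefined_functions: ['drag_and_drop'], // make room for your own tool
      },
    },
  ],
};

const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:generateContent`,
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-goog-api-key': process.env.GEMINI_API_KEY ?? '',
    },
    body: JSON.stringify(body),
  }
);
console.log(await res.json());
```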
It is actually quite good at following instructions, but I tried clicking on job application links, and since they open in a new window, it couldn't find the new window. I suppose it might be an issue with BrowserBase, or just the way this demo was set up.
I'm so looking forward to it. Many of the problems that should be trivially solved with either AI or a script are hard to impossible to solve because the data is locked away in some form.
Having an AI handle this may be inefficient, but as it uses the existing user interfaces, it might allow bypassing years of bureaucracy, and when the bureaucracy tries to fight it to justify its existence, it can fight it out with the EVERYONE MUST USE AI OR ELSE layers of management, while I can finally automate that idiotic task (using tens of kilowatts rather than a one-liner, but still better than having to do it by hand).
There are some absolutely atrocious UIs out there for many office workers, who spend hours clicking buttons, opening popup after popup, clicking repetitively on checkboxes, etc. E.g. entering travel costs or some such in academia and elsewhere. You have no idea how annoying that type of work is; you pull your hair out. Why don't they make better UIs, you ask? If you have to ask, you have no idea how bad things are. Because they don't care, there is no communication, it seems fine to them, the software creators are hard to reach, and the software is approved by people who never used it and decide based on gut feel, powerpoints and feature tickmarks. Even big-name brands are horrible at this, like SAP.
If such AI tools allow us to automate this soul-crushing drudgery, it will be great. I know that you can technically script things with Selenium, AutoHotkey, whatnot. But you can imagine that it's a nonstarter in a regular office. This kind of tool could make things like that much more efficient. And it's not like it will then obviate the jobs entirely (at least not right away). These offices often have immense backlogs and are understaffed as is.
It matters a lot for E2E testing. I would totally trade the ease of the AI solution for a faster, more complicated one if it starts impacting build times.
Few things are more frustrating for a team than maintaining a slow E2E browser test suite.
I prepare to be disappointed every time I click on a Google AI announcement. Which is so very unfortunate, given that they're the source of LLMs. Come on big G!! Get it together!
Not great at Google Sheets. Repeatedly overwrites all previous columns while trying to populate new columns.
> I am back in the Google Sheet. I previously typed "Zip Code" in F1, but it looks like I selected cell A1 and typed "A". I need to correct that first. I'll re-type "Zip Code" in F1 and clear A1. It seems I clicked A1 (y=219, x=72) then F1 (y=219, x=469) and typed "Zip Code", but then maybe clicked A1 again.

More like continued employment.

On a serious note, what the fuck is happening in the world.
I sure hope this is better than pathetically useless. I assume it is to replace the extremely frustrating Gemini for Android. If I have a bluetooth headset and I try "play music on Spotify" it fails about half the time. Even with youtube music. I could not believe it was so bad so I just sat at my desk with the helmet on and tried it over and over. It seems to recognise the speech but simply fails to do anything. Brand new Pixel 10. The old speech recognition system was way dumber but it actually worked.
I was riding my motorcycle the other day, and asked my helmet to "call <friend>." Gemini infuriatingly replied "I cannot directly make calls for you. Is there something else I can help you with?" This absolutely used to work.
Reminds me of an anecdote where Amazon invested howevermany personlives in building AI for Alexa, only to discover that alarms, music, and weather make up the large majority of things people actually use smart speakers for. They're making these things worse at their main jobs so they can sell the sizzle of AI to investors.
Yes, I am also talking about a Cardo. If it didn't used to work near 100% of the time this time last year it might not be so incredibly annoying, but to go from working to complete crap with no choice to be able to go back to the working system is bad.
It's like google staff are saying "If it means promotion, we don't give a shit about users".
The new Gemini 2.5 model's ability to understand and interact with computer interfaces looks very impressive. It could be a game-changer for accessibility and automation. I wonder how robust it is with non-standard UI elements.