Gemini 2.5 Computer Use model

(blog.google)

409 points | by mfiguiere 8 hours ago

53 comments

  • xnx 8 hours ago
    I've had good success with the Chrome devtools MCP (https://github.com/ChromeDevTools/chrome-devtools-mcp) for browser automation with Gemini CLI, so I'm guessing this model will work even better.
    • arkmm 8 hours ago
      What sorts of automations were you able to get working with the Chrome dev tools MCP?
      • odie5533 7 hours ago
        Not OP, but in my experience, Jest and Playwright are so much faster that it's not worth doing much with the MCP. It's a neat toy, but it's just too slow for an LLM to try to control a browser using MCP calls.
        • atonse 6 hours ago
          Yeah, I think it would be better to just have the model write out Playwright scripts than the way it's doing it right now (or at least first navigate manually and then based on that, write a playwright typescript script for future tests).

          Cuz right now it's way too slow... perform an action, then read the results, then wait for the next tool call, etc.
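
          A minimal sketch of the idea (hypothetical selectors, URL, and flow; the point is the model writes this once after exploring, then it replays fast and deterministically):

              import { test, expect } from '@playwright/test';

              // Emitted by the model after one exploratory pass, then kept
              // as a fast, deterministic regression test.
              test('user can log in', async ({ page }) => {
                await page.goto('https://example.com/login');
                await page.getByLabel('Email').fill('user@example.com');
                await page.getByLabel('Password').fill('hunter2');
                await page.getByRole('button', { name: 'Sign in' }).click();
                await expect(page.getByText('Welcome back')).toBeVisible();
              });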

          • omneity 5 hours ago
            This is basically our approach with Herd[0]. We operate agents that develop, test and heal trails[1, 2], which are packaged browser automations that do not require browser-use LLMs to run and are therefore much cheaper and more reliable. Trail automations are then abstracted as a REST API and MCP[3], which can be used either as simple functions called from your code, or by your own agent, or any combination of those.

            You can build your own trails, publish them on our registry, compose them ... You can also run them in a distributed fashion over several Herd clients where we take care of the signaling and communication but you simply call functions. The CLI and npm & python packages [4, 5] might be interesting as well.

            Note: The automation stack is entirely home-grown to enable distributed orchestration, and doesn't rely on Puppeteer or Playwright, but the browser automation API[6] is fairly similar to ease adoption. We also don't use the Chrome DevTools Protocol and therefore have a different tradeoff footprint.

            0: https://herd.garden

            1: https://herd.garden/trails

            2: https://herd.garden/docs/trails-automations

            3: https://herd.garden/docs/reference-mcp-server

            4: https://www.npmjs.com/package/@monitoro/herd

            5: https://pypi.org/project/monitoro-herd/

            6: https://herd.garden/docs/reference-page

            • atonse 5 hours ago
              Whoa that’s cool. I’ll check it out, thanks!
              • omneity 4 hours ago
                Thanks! Let me know if you give it a shot and I’ll be happy to help you with anything.
          • drewbeck 4 hours ago
            > or at least first navigate manually and then based on that, write a playwright typescript script for future tests

            This has always felt like a natural best use for LLMs - let them "figure something out", then write/configure a tool to do the same thing. Throwing the full might of an LLM at something scriptable every time is a massive waste of compute, not to mention the inconsistent LLM output.

          • nkko 3 hours ago
            Exactly this. I spent some time last week at a ~50-person web agency helping them set up a QA process where agents explore the paths and, based on those passes, write automated scripts that humans verify and put into the testing flow.
        • typpilol 4 hours ago
          You can use it for debugging with the llm though.
          • rs186 4 hours ago
            In theory or in practice?
    • iLoveOncall 7 hours ago
      This has absolutely nothing in common with a model for computer use... This uses pre-defined tools provided in the MCP server by Google; it has nothing to do with a general model that's supposed to work with any software.
  • mohsen1 5 hours ago
    > Solve today's Wordle

    It gets stuck with:

    > ...the task is just to "solve today's Wordle", and as a web browsing robot, I cannot actually see the colors of the letters after a guess to make subsequent guesses. I can enter a word, but I cannot interpret the feedback (green, yellow, gray letters) to solve the puzzle.

    • Havoc 4 hours ago
      So I guess it’s browsing in grey scale?
  • phamilton 8 hours ago
    It successfully got through the captcha at https://www.google.com/recaptcha/api2/demo
    • simonw 7 hours ago
      Post edited: I was wrong about this. Gemini tried to solve the Google CAPTCHA but it was actually Browserbase that did the solve, notes here: https://simonwillison.net/2025/Oct/7/gemini-25-computer-use-...
      • dhon_ 4 hours ago
        I was concerned there might be sensitive info leaked in the browserbase video at 0:58 as it shows a string of characters in the browser history:

            nricy.jd t.fxrape oruy,ap. majro
        
        3 groups of 8 characters, space separated, followed by 5, for a total of 32 characters. Seemed like text from a password generator or maybe an API key? Maybe accidentally pasted into the URL bar at some point and preserved in browser history?

        I asked ChatGPT about it, and it revealed:

            Not a password or key — it’s a garbled search query typed with the wrong keyboard layout.
            
            If you map the text from Dvorak → QWERTY,
            nricy.jd t.fxrape oruy,ap. majro → “logitech keyboard software macos”.
        • fn-mote 3 hours ago
          This is the kind of response that makes me feel like we are getting left behind by the LLM.

          Very nice solve, ChatGPT.

        • MrToadMan 26 minutes ago
          Is this as impressive as it initially seems, though? A Bing search for the text turns up some web results for Dvorak-to-QWERTY conversion, I think because the word 't.fxrape' ('keyboard') hits. So there's a lot of luck involved there.
          • dhon_ 9 minutes ago
            Here's the chat session - you can expand the thought process and see that it tried a few things (hands misaligned with the keyboard for example) before testing the Dvorak keyboard layout idea.

            https://chatgpt.com/share/68e5e68e-00c4-8011-b806-c936ac657a...

            I also found it interesting that despite me suggesting it might be a password generator or API key, ChatGPT doesn't appear to have given that much consideration.

      • pants2 6 hours ago
        Interesting that they're allowing Gemini to solve CAPTCHAs, because OpenAI's agent detects CAPTCHAs and forces user input for them despite being fully able to solve them.
        • throwaway-0001 29 minutes ago
          Just a matter of time until they lose their customer base to other AI tools. Why would I waste my time when the AI is capable of doing the work but forces me to do it anyway? Same as Claude: it can't even draft an email in Gmail, too afraid to type…
    • jampa 7 hours ago
      The automation is powered through Browserbase, which has a captcha solver. (Whether it is automated or human, I don't know.)
    • siva7 7 hours ago
      Probably because its IP is coming from Google's own subnet.
      • asadm 7 hours ago
        Isn't it coming from a Browserbase container?
        • ripbozo 7 hours ago
          Interestingly, the IP I got when prompting `what is my IP` was `73.120.125.54`, which is a residential Comcast IP.
          • martinald 7 hours ago
            Looks like Browserbase has proxies, which are often residential IPs.
  • jcims 7 hours ago
    (Just using the browserbase demo)

    Knowing it's technically possible is one thing, but giving it a short command and seeing it go log in to a site, scroll around, reply to posts, etc. is eerie.

    Also, it tied me at Wordle today, making the same mistake I did on the second-to-last guess. Too bad you can't talk to it while it's working.

  • albert_e 1 hour ago
    I believe this will need very capable but small VLMs that understand common user interfaces very well -- small enough to run locally -- paired with higher-level models in the cloud, to reliably achieve human-speed interactions and beyond.
  • dekhn 6 hours ago
    Many years ago I was sitting at a red light on a secondary road, where the primary cross road was idle. It seemed like you could solve this using a computer vision camera system that watched the primary road and when it was idle, would expedite the secondary road's green light.

    This was long before computer vision was mature enough to do anything like that and I found out that instead, there are magnetic systems that can detect cars passing over - trivial hardware and software - and I concluded that my approach was just far too complicated and expensive.

    Similarly, when I look at computers, I typically want the ML/AI system to operate on structured data that is codified for computer use. But I guess the world is complicated enough, and computers got fast enough, that having an AI look at a computer screen and move/click a mouse makes sense.

    • chrisfosterelli 5 hours ago
      Ironically now that computer vision is commonplace, the cameras you talk about have become increasingly popular over the years because the magnetic systems do not do a very good job of detecting cyclists and the cameras double as a congestion monitoring tool for city staff.
      • apwell23 4 hours ago
        > the cameras you talk about have become increasingly popular over the years

        Cameras are being used to detect traffic and change lights? I don't think that's happening in the USA.

        Which country are you referring to here?

        • ssl-3 58 minutes ago
          It's been happening in the USA for quite a long time.

          Anecdotally, the small city I grew up in, in Ohio (USA), started using cameras and some kind of computer vision to operate traffic signals 15 or 20 years ago, replacing inductive loops.

          I used to hang out sometimes with one of the old-timers who dealt with it as part of his long-time street department job. I asked him about that system once (over a decade ago now) over some drinks.

          "It doesn't fuckin' work," I remember him flatly telling me before he quite visibly wanted to talk about anything other than his day job.

          The situation eventually improved -- presumably, as bandwidth and/or local processing capabilities have also improved. It does pretty well these days when I drive through there, and the once-common inductive loops (with their tell-tale saw kerfs in the asphalt) seem to have disappeared completely.

          (And as a point of disambiguation: They are just for controlling traffic lights. There have never been any speed or red light cameras in that city. And they're distinctly separate from traffic preemption devices, like the Opticom system that this city has used for an even longer time.)

          ---

          As a non-anecdotal point of reference, I'd like to present an article from ~20 years ago about a system in a different city in the US that was serving a similar function at that time:

          https://www.toacorn.com/articles/traffic-cameras-are-not-spy...

        • chrisfosterelli 4 hours ago
          Yes. I can't speak to the USA, as I'm from Canada, but I've had conversations with traffic engineers from another city about it and increasingly seen them in my own city. Here's an example of one of the systems: https://www.iteris.com/oursolutions/pedestrian-cyclist-safet...

          They're obviously more common in higher-density areas with better cycling infrastructure. The inductive loops are effectively useless with carbon-fibre bicycles especially, so these have been a welcome change. But from what I was told, these are also more effective for vehicle traffic than the induction loops, as drivers often come to a stop too far back to be detected; plus they allow conditional behaviour based on the number of vehicles waiting and their lanes (all of which can be changed without ripping up the road).

        • baby_souffle 21 minutes ago
          > cameras are being used to detect traffic and change lights? i don't think thats happening in USA.

          Has been for the better part of a decade. Google `Iteris Vantage` and you will see some of the detection systems.

        • itsmartapuntocm 4 hours ago
          They’re extremely common in the U.S. now.
          • mh- 3 hours ago
            Traffic cameras, yes. Traffic cameras that are used to influence traffic signaling? I've never (knowingly) seen one in the US.

            What US cities have these?

          • apwell23 4 hours ago
            Any data to share? I've never seen one in Chicago. Google tells me it's <1%. Maybe I'm not using the right keywords.
      • y0eswddl 5 hours ago
        and soon/now triple as surveillance.
        • Spooky23 2 hours ago
          Those cameras aren’t usually easily or cheaply adapted to surveillance. Most are really simple and don’t have things like reliable time sync. Also, road jurisdictions are really complex and surveillance requires too much coordination. State, county, town, city all have different bureaucratic processes and funding models.

          Surveillance is all about Flock. The feds are handing out grants to everyone, and the police drop the things everywhere. They can locate cars, track routine trips, and all sorts of creepy stuff.

          • khm 43 minutes ago
            In my city, cameras for traffic light control are on almost every signalized intersection, and the video is public record and frequently used to review collisions. These cameras are extremely cheaply and easily adapted to surveillance. Public records are public records statewide.
          • gxs 1 hour ago
            With all due respect, you are kidding yourself if you think those cameras aren't used for surveillance/logging.

            They don't have to be "adapted" to surveillance - they are made with that in mind.

            Obviously older generations of equipment aren't included here - so technically you may be correct for old/outdated equipment installed in areas that aren't of interest.

        • VirgilShelton 4 hours ago
          [flagged]
          • dang 52 minutes ago
            Please don't rewrite your comment like this once it has replies. It deprives the replies of their original context, making the thread less readable.
          • slashdev 4 hours ago
            Until the politicians want to come after you for posts you make on HN, or any other infraction they decide is now an issue.

            History is littered with the literal bones of people who thought they had nothing to fear from the state. The state is not your friend, and is not looking out for you.

          • serf 4 hours ago
            Go watch any movie about a panopticon for the (much-discussed) side effects of a surveillance state.

            Fiction works, but if you want to spend the evening depressed, go for any of the (true) East/West Germany stories.

            Being for or against surveillance I can understand, but simply not understanding the issue? No excuse -- state surveillance of individuals is one of the most discussed social concepts in the world.

            • ericd 2 hours ago
              The Lives of Others is a great one about the Stasi.
          • SXX 3 hours ago
            You have nothing to hide until you're automatically flagged for whatever and then judged, also automatically, by a buggy, hallucinating AI overlord.

            Might be because a pattern on your face or T-shirt matches something bad.

            And this kind of thing already happened in the UK even before the "AI craze". Hundreds of people were imprisoned because of a faulty accounting system:

            https://en.m.wikipedia.org/wiki/British_Post_Office_scandal

            "Computer says you go to prison"!

          • BlaDeKke 4 hours ago
            I don’t have a problem with the camera as much as with the system behind it.
          • insamniac 4 hours ago
            Nothing to hide for now.
          • drumttocs8 4 hours ago
            You have nothing to hide given the current, reasonable definition of crime.

            What if that changes?

          • alvah 4 hours ago
            The number of people who cling to this view is frankly astonishing. History books are still a thing, right?
          • reaperducer 3 hours ago
            > I have nothing to hide

            Great! Then you don't mind telling us your email password!

            • hn_go_brrrrr 1 hour ago
              Presumably not. Whether he has any acts to keep secret is not relevant to whether he'd like to have any money left in his bank account tomorrow.
          • CamperBob2 4 hours ago
            You have no idea if you have anything to hide or not. It's not your call, and never has been.

            https://www.amazon.com/Three-Felonies-Day-Target-Innocent/dp...

          • DonHopkins 4 hours ago
            RTSP feed please!
    • pavelstoev 2 hours ago
      It was my first engineering job, calibrating those inductive loops and circuit boards on I-93, just north of Boston's downtown area. Here is the photo from 2006. https://postimg.cc/zbz5JQC0

      PEEK controller, 56K modem, Verizon telco lines, rodents - all included in one cabinet.

    • dktp 6 hours ago
      I cycle a lot. Outdoors I listen to podcasts and the fact that I can say "Hey Google, go back 30sec" to relisten to something (or forward to skip ads) is very valuable to me.

      Indoors I tend to cast some show or youtube video. Often enough I want to change the Youtube video or show using voice commands - I can do this for Youtube, but results are horrible unless I know exactly which video I want to watch. For other services it's largely not possible at all

      In a perfect world Google would provide superb APIs for these integrations and all app providers would integrate it and keep it up to date. But if we can bypass that and get good results across the board - I would find it very valuable

      I understand this is a very specific scenario. But one I would be excited about nonetheless

      • Macha 5 hours ago
        Do you have a lot of dedicated cycle ways? I'm not sure I'd want to have headphones impeding my hearing anywhere I'd have to interact with cars or pedestrians while on my bike.
        • Hasnep 5 hours ago
          Lots of noise cancelling headphones have a pass-through mode that lets you hear the outside world. Alternatively, I use bone conducting headphones that leave my ears uncovered.
        • apwell23 4 hours ago
          Yes, I bike on the Chicago lakefront; up and down is like 40 miles for me.

          Also, biking on roads you should never count on sound to guide you; you should always use vision. For example, making a left you have to visually establish that the driver coming straight has made eye contact with you, or at least looked at you.

          Can you share an example of how you use sound to help you ride with other vehicles on the road? Are you maybe talking about honking? That you will hear over podcasts.

          • 83457 1 hour ago
            Hearing is useful for safety.
          • Macha 4 hours ago
            The sound of a revving engine is often the first warning that someone is about to pass you, and how they handle it is a good sign of whether they'll attempt a close pass rather than overtake legally with the minimum distance.
            • fn-mote 4 hours ago
              Mirrors let you see the overtaking traffic with far more time to plan.

              Audio cues are less and less useful as electric vehicles become more popular. (I am a city biker and there are plenty already.)

            • fragmede 2 hours ago
              That doesn't work for EVs. Situational awareness is important; don't rely on any one thing.
    • elboru 40 minutes ago
      I recently spent some time in a country house far enough from civilization that electric lines don't reach. The owners could have installed some solar panels, but they opted to keep it electricity-free to disconnect from technology, or at least from electronics. They have ingenious, decades-old utensils that work without electricity (a propane fridge, oil lamps, a non-electric coffee percolator, etc.), and that made me wonder: how many analogous devices never got invented because an electric device is the most obvious solution in our current view?
    • dgs_sgd 1 hour ago
      It’s funny that you used traffic signals as an example of overcomplicating a problem with AI because there turns out to be a YC funded startup making AI powered traffic lights: https://www.ycombinator.com/companies/roundabout-technologie...
      • MrToadMan 8 minutes ago
        And even funnier in that context: it’s called ‘roundabout technologies’.
    • nerdsniper 2 hours ago
      My town solved this at night by putting simple light sensors on the traffic lights, so as you approach you can flash your brights at it and trigger a cycle.

      Otherwise the higher-traffic road got a permanent green light at night, until the system saw high beams or magnetic flux from a car reaching the intersection.

    • alach11 3 hours ago
      Computer use is the most important AI benchmark to watch if you're trying to forecast labor-market impact. You're right, there are much more effective ways for ML/AI systems to accomplish tasks on the computer. But they all have to be hand-crafted for each task. Solving the general case is more scalable.
      • poopiokaka 2 hours ago
        Not the current benchmarks, no. The demos in this post are so slow. Between writing the prompt, waiting a long time and checking the work I’d just rather do it myself.
        • panarky 1 hour ago
          It's not about being faster than you.

          It's about working independently while you do other things.

          • ssl-3 25 minutes ago
            And it's a neat-enough idea for repetitive tasks.

            For instance: I do periodic database-level backups of a very closed-source system at work. It doesn't take much of my time, but it's annoying in its simplicity: Run this GUI Windows program, click these things, select this folder, and push the go button. The backup takes as long as it takes, and then I look for obvious signs of either completion or error on the screen sometime later.

            With something like this "Computer Use" model, I can automate that process.

            It doesn't matter to anyone at all whether it takes 30 seconds or 30 minutes to walk through the steps: It can be done while I'm asleep or on vacation or whatever.

            I can keep tabs on it with some combination of manual and automatic review, just like I would be doing if I hired a real human to do this job on my behalf.

            (Yeah, yeah. There's tons of other ways to back up and restore computer data. But this is the One, True Way that is recoverable on a blank slate in a fashion that is supported by the manufacturer. I don't get to go off-script and invent a new method here.

            But a screen-reading button-clicker? Sure. I can jive with that and keep an eye on it from time to time, just as I would be doing if I hired a person to do it for me.)
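
            (For the curious, the control loop such a tool runs is roughly the following -- a sketch with hypothetical helper names, not a real SDK:)

                // Screenshot -> model -> UI action, repeated until done.
                type Action =
                  | { type: 'click'; x: number; y: number }
                  | { type: 'type'; text: string }
                  | { type: 'done' };

                // Stubs standing in for whatever client library provides these.
                declare function captureScreen(): Promise<Uint8Array>;
                declare function suggestAction(goal: string, screen: Uint8Array): Promise<Action>;
                declare function perform(a: Action): Promise<void>;

                async function runTask(goal: string): Promise<void> {
                  for (let step = 0; step < 50; step++) {   // hard cap on iterations
                    const screen = await captureScreen();    // what the model "sees"
                    const action = await suggestAction(goal, screen);
                    if (action.type === 'done') return;      // model reports completion
                    await perform(action);                   // click or type on the UI
                  }
                  throw new Error('gave up after 50 steps');
                }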

    • trenchpilgrim 6 hours ago
      FWIW those type of traffic cameras are in common use. https://www.milesight.com/company/blog/types-of-traffic-came...
      • jlhawn 6 hours ago
        The camera systems are also superior from an infrastructure maintenance perspective. You can update them with new capabilities or do re-striping without tearing up the pavement.
      • dekhn 6 hours ago
        Reading the web page, they don't actually use that as a solution for shortening a red light - IMHO that has a very high safety bar compared to the more common uses. But I'd be happy to hear this is something that Just Works in the Real World with reasonable false positive and false negative rates.
        • trenchpilgrim 6 hours ago
          Yes they do, it's listed under Traffic Sensor Cameras.
    • stronglikedan 2 hours ago
      > I concluded that my approach was just far too complicated and expensive.

      Motorcyclists would conclude that your approach would actually work.

    • TeMPOraL 5 hours ago
      > But I guess the world is complicated enough and computers got fast enough that having an AI look at a computer screen and move/click a mouse makes sense.

      It's not that the world is particularly complicated here - it's just that computing is a dynamic and adversarial environment. End-user automation consuming structured data is a rare occurrence not because it's hard, but because it defeats pretty much every way people make money on the Internet. AI is succeeding now because it is able to navigate the purposefully unstructured and obtuse interfaces like a person would.

    • yunyu 6 hours ago
      There is a lot of pretraining data available around screen recordings and mouse movements (Loom, YouTube, etc). There is much less pretraining data available around navigating accessibility trees or DOM structures. Many use cases may also need to be image-aware (document scan parsing, looking at images), and keyboard/video/mouse-based models generalize to more applications.
    • ge96 6 hours ago
      It's funny: I'll sometimes scoot forward/rock my car, but I'm not sure if it's just coincidence. Also, a lot of stoplights now have that tall white camera on top.
      • netghost 6 hours ago
        There are several mechanisms. The most common is (or at least was) a loop detector under the road that triggers when a vehicle is over it. Sometimes if you're not quite over it, or it's somewhat faulty, that movement will trigger it.
      • Spooky23 6 hours ago
        Sometimes the rocking helps with a ground loop that isn’t working well.
      • bozhark 6 hours ago
        Like flashing lights for the first-responder sensor.
    • sagarm 3 hours ago
      Robotic process automation isn't new.
    • VirgilShelton 4 hours ago
      The best thing about being nerds like we are is we can just ignore this product since it's not for us.
  • xrd 34 minutes ago
    I really want this model to try userinyerface.com
  • dekhn 6 hours ago
    I just have to say that I consider this an absolutely hilarious outcome. For many years, I focused on tech solutions that eliminated the need for a human to be in front of a computer doing tedious manual operations. For a wide range of activities, I proposed we focus on "turning everything in the world into database objects" so that computers could operate on them with minimal human effort. I spent significant effort in machine learning to achieve this.

    It didn't really occur to me that you could just train a computer to work directly on the semi-structured human world data (display screen buffer) through a human interface (mouse + keyboard).

    However, I fully support it (like all the other crazy ideas on the web that beat out the "theoretically better" approaches). I do not think it is unrealistic to expect that within a decade, we could have computer systems that can open Chrome, start a video chat with somebody, go back and forth for a while to achieve a task, then hang up... without the person on the other end ever knowing they were dealing with a computer instead of a human.

    • TeMPOraL 5 hours ago
      AI is succeeding where "theoretically better" approaches failed, because it addresses the underlying social problem. The computing ecosystem is an adversarial place, not a cooperative one. The reason we can't automate most of the tedium is by design - it's critical to how almost all money is made on the Internet. Can't monetize users when they automate your upsell channels and ad exposure away.
    • NothingAboutAny 4 hours ago
      I saw similar discussions around robotics, people saying "why are they making the robots humanoid? Couldn't they be a more efficient shape?" It comes back to the same thing: if you want the tool to be adopted, it has to fit a human-centric world, no matter how inefficient that is. High-performance applications are still custom-designed and streamlined, but mass adoption requires it to fit us, not us to fit it.
    • neom 3 hours ago
      I was thinking about that last point in the context of dating this morning: if my "chatgpt" knew enough about me to represent me well enough that a dating app could facilitate a pre-screening with someone else's "chatgpt", that would be interesting. I heard someone in an enterprise keynote recently talking about "digital twins" - I believe this is that. Not sure what I think about it yet generally, or where it leads.
    • ncallaway 5 hours ago
      > we could have computer systems that can open Chrome, start a video chat with somebody, go back and forth for a while to achieve a task, then hang up... without the person on the other end ever knowing they were dealing with a computer instead of a human

      Doesn't that...seem bad?

      I mean, it would certainly be a monumental and impressive technical accomplishment.

      But it still seems...quite bad to me.

      • dekhn 4 hours ago
        Good or bad? I don't know. It just seems inevitable.
        • fn-mote 3 hours ago
          The main reason you might not know if it is a human or not is that the human interactions are so bad (e.g. help desk calls, internet providers, any utility, even the doctor's office front-line non-medical staff).
  • derekcheng08 1 hour ago
    Really feels like computer use models may be vertical agent killers once they get good enough. Many knowledge work domains boil down to: use a web app, send an email. (e.g. recruiting, sales outreach)
  • ramoz 7 hours ago
    This will never hit a production enterprise system without some form of hooks/callbacks in place to instill governance.

    Obviously this is much harder with a UI than with agent events like the ones below.

    https://docs.claude.com/en/docs/claude-code/hooks

    https://google.github.io/adk-docs/callbacks/
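
    To illustrate the shape of such a hook (names and types here are made up, not an existing API): a deterministic gate every proposed UI action must pass before it executes.

        // Hypothetical pre-action governance hook.
        type UiAction = { kind: 'click' | 'type' | 'navigate'; target: string };
        type Verdict = { allow: boolean; reason?: string };

        // Deterministic policy check run before the agent's action executes,
        // e.g. "never navigate outside an allowlisted set of hosts".
        function preActionHook(action: UiAction, allowedHosts: string[]): Verdict {
          if (action.kind === 'navigate') {
            const host = new URL(action.target).hostname;
            if (!allowedHosts.includes(host)) {
              return { allow: false, reason: `host ${host} not allowlisted` };
            }
          }
          return { allow: true };
        }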

    • serf 4 hours ago
      >This will never hit a production enterprise system without some form of hooks/callbacks in place to instill governance.

      Knowing how many times Claude Code has breezed through a hook call, actually computed the hook for an answer, and then thrown it away without integrating the hook results, I think the concept of 'governance' is laughable.

      LLMs are so much further from determinism/governance than people seem to realize.

      I've even seen earlier CC breeze through a hook that ends with a halting test failure and "DO NOT PROCEED" verbiage. The only hook that is guaranteed to work when called is a big, theoretical, dangerous Claude-killing hook.

      • poopiokaka 2 hours ago
        You can obviously hard code a hook
    • peytoncasper 6 hours ago
      Hi! I work in identity products at Browserbase. I’ve spent a fair amount of time lately thinking about how to layer RBAC across the web.

      Do you think callbacks are how this gets done?

      • ramoz 6 hours ago
        Disclaimer: I'm a cofounder; we focus on critical spaces with AI. Also, I was behind the feature request for Claude Code hooks.

        But my bet: we will not deploy a single agent into any real environment without deterministic guarantees. Hooks are a means...

        Browserbase with hooks would be really powerful - governance beyond RBAC (but of course enabling relevant guardrailing as well: "does the agent have permission to access this SharePoint right now, within this context, to conduct action x?").

        I would love to meet with you, actually; my shop cares intimately about agent verification and governance. We'll soon release the tool I originally designed for Claude Code hooks.

  • enjoylife 5 hours ago
    > It is not yet optimized for desktop OS-level control

    Alas, AGI is not yet here. But I feel like if this OS-level control were good enough, and the cost of the LLM in the loop weren't bad, maybe that would be enough to kick-start something akin to AGI.

    • pseidemann 5 hours ago
      Funny thing is, most humans cannot properly control a computer. Intelligence seems to be impossible to define.
      • emp17344 4 hours ago
        Is this a joke, or do you actually believe most people are incapable of using a computer?
        • DrewADesign 2 hours ago
          It’s the same old superiority complex that birthed the “IT Guy” stereotypes of the 90s/aughts. It stems from a) not understanding what problems non-developers need computers to solve for them, and b) ego-driven overestimation of the complexity of their field compared to others.
        • fragmede 2 hours ago
          We should be very specific and careful with our words. pseidemann said "most humans cannot properly control a computer", which isn't the same as "most people are incapable of using a computer".

          I would agree with pseidemann. There's a level of understanding and care and focus that most people lack. That doesn't make those people less worthy of love and care and support, and computers are easier to use than ever. Most people don't know what EFI is, nor should they have to. If all someone needs from the computer is to be able to update their Facebook, the finer details of controlling a computer aren't, and shouldn't be, important to them, and that's okay!

          Humanity's goal should have been to make the smartest human possible, but no one got the memo, so we're raising the bar by augmenting everyone with technology instead of implementing eugenics programs.

      • a_wild_dandan 5 hours ago
        Intelligence is whatever an LLM can’t do yet. Fluid intelligence is the capacity to quickly move goal posts.
    • alganet 5 hours ago
      I am curious. Why do you think controlling an OS (and not just a browser) would be a move towards AGI?
      • enjoylife 2 hours ago
        One thought is that once it can touch the underlying system, it can provision resources, spawn processes, and persist itself, crossing the line from tool to autonomous entity. I admit you could do that in a browser shell nowadays, just maybe with more restrictions and guardrails. I don't have any strong opinions here, but I do think a lower cost to escape the walled gardens AGI starts in will be a factor.
      • throwaway-0001 20 minutes ago
        Not just the OS; browser control is enough to do 99% of the things it would want to do autonomously.

        Bank account + ID + browser: it has all the tools it needs to do many things:

        - earn money
        - allocate money
        - create accounts
        - delegate physical jobs to humans

        It could create its own self-loop on a server: create a server account, use the credit card + ID provided, self-host its own code… and then focus on getting more resources.

  • CuriouslyC 7 hours ago
    I feel like screenshots should be the last thing you reach for. There's a whole universe of data from accessibility subsystems.
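
    For example, Playwright can already hand back a structured accessibility tree instead of pixels (the older accessibility.snapshot() API shown here; newer versions offer ariaSnapshot()):

        import { chromium } from 'playwright';

        // Dump the accessibility tree: roles, names, values -- structured
        // data an agent can act on without any vision model.
        (async () => {
          const browser = await chromium.launch();
          const page = await browser.newPage();
          await page.goto('https://example.com');
          const tree = await page.accessibility.snapshot(); // { role, name, children, ... }
          console.log(JSON.stringify(tree, null, 2));
          await browser.close();
        })();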
    • bonoboTP 7 hours ago
      The rendered visual layout is designed to be spatially organized so that it makes sense perceptually. It's a bit like PDFs. I imagine the underlying hierarchy tree can be quite messy spaghetti, so your best bet is to use the app in the form the devs intended and tested.

      I think screenshots are a really good and robust idea. It bothers the more structure-minded people, but apps are often not built so well. They are built up to the point where things look fine and people are able to use them. I'm pretty sure people who rely on accessibility systems have lots of complaints about this.

      • CuriouslyC 6 hours ago
        The progressives were pretty good at pushing accessibility in applications; it's not perfect, but every company I've worked with since the mid 2010s has made a big to-do about accessibility. For stuff on Linux you can instrument observability in a lot of ways that are more efficient than screenshots, so I don't think it's generally the right way forward; but screenshots are universal, and we already have capable vision models, so it's sort of a local-optimization move.
    • ekelsen 7 hours ago
      And there are all sorts of situations where they don't work. When they do work it's great, but if they don't and you rely on them, you have nothing.
      • CuriouslyC 6 hours ago
        Oh yeah, using all available data channels in proportion to their cost and utility is the right choice, 100%.
  • realty_geek 6 hours ago
    Absolutely hilarious how it gets stuck trying to solve the captcha each time. I had to explicitly tell it not to go to Google first.

    In the end I did manage to get it to play the housepriceguess game:

    https://www.youtube.com/watch?v=nqYLhGyBOnM

    I think I'll make that my equivalent of Simon Willison's "pelican riding a bicycle" test. It is fairly simple to explain but seems to trip up different LLMs in different ways.

  • whinvik 7 hours ago
    My general experience has been that Gemini is pretty bad at tool calling. The recent Gemini 2.5 Flash release actually fixed some of those issues but this one is Gemini 2.5 Pro with no indication about tool calling improvements.
  • AaronAPU 6 hours ago
    I’m looking forward to a desktop OS optimized version so it can do the QA that I have no time for!
  • martinald 6 hours ago
    Interesting - it seems to use "pure" vision and x/y coords for clicking stuff. Most other browser automation with LLMs I've seen uses the DOM/accessibility tree, which absolutely churns through context but is much more "accurate" at clicking because it can use the exact text/elements in a selector.

    Unfortunately it really struggled in the demos for me. It took nearly 18 attempts to click the comment link on the HN demo, each a few pixels off.
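
    If the model does emit coordinates on a normalized grid (an assumption here -- some computer-use APIs normalize to 0-999), executing a click is just scale-and-dispatch:

        import { Page } from 'playwright';

        // Map model coordinates (assumed normalized to a 0-999 grid) to
        // viewport pixels and click there -- no selector involved.
        async function clickNormalized(page: Page, x: number, y: number) {
          const viewport = page.viewportSize();
          if (!viewport) throw new Error('no viewport set');
          await page.mouse.click(
            (x / 1000) * viewport.width,
            (y / 1000) * viewport.height,
          );
        }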

    • pbhjpbhj 4 hours ago
      18 attempts - emulating the human HN experience when using mobile. Well, assuming it hit other links it didn't intend to anyway. /jk
  • jwpapi 2 hours ago
    Can somebody give me use cases where this is faster than using the UI myself?

    How am I supposed to use this? I really can't think of a use case, but I don't want to be blindsided, as obviously a lot of money is going into this.

    I appreciate the tech and functionality behind it, but I still wonder about use cases.

    • omkar_savant 2 hours ago
      At current latency, there are a bunch of async automation use cases one could use this for. For example:

      *Tedious to complete, easy to verify*

      - BPO: Fill out doctor licensing applications based on context from a web profile
      - HR: Move candidate information from an ATS to an HRIS system
      - Logistics: Fill out a shipping order form based on a PDF of the packing label

      *Interact with a diverse set of sites for a single workflow*

      - Real estate: Diligence on properties that involves interacting with one of many county records websites
      - Freight forwarding: Check the status of shipping containers across 1 of 50 port terminal sites
      - Shipping: Post truck load requests across multiple job board sites
      - BPO: Fetch the status of a Medicare coverage application from 1 of 50 state sites
      - BPO: Fill out medical license forms across multiple state websites

      *Periodic syncs between various systems of record*

      - Clinical: Copy patient insurance info from Zocdoc into an internal system
      - HR: Move candidate information from an ATS to an HRIS system
      - Customer onboarding: Create Salesforce tickets based on planned product installations that are logged in an internal system
      - Logistics: Update the status of various shipments using tracking numbers on the USPS site

      *PDF extraction to system interaction*

      - Insurance: A broker processes a detailed project overview and creates a certificate of insurance with the specific details from the multi-page document by filling out an internal form
      - Logistics: Fill out a shipping order form based on a PDF of the packing label
      - Clinical: Enter patient appointment information into an EHR system based on a referral PDF
      - Accounting: Extract invoice information from 50+ vendor formats and enter the details into a Google Sheet without laborious OCR setup for each format
      - Mortgage: Extract realtor names and addresses from a lease document and look up their license status on various state portals

      *Self-healing broken RPA workflows*

    • viking123 36 minutes ago
      Filling in your CV in Workday applications.
  • zomgbbq 4 hours ago
    I would love to use this for E2E testing. It would be great to make all my assertions with high level descriptions so tests are resilient to UI changes.

    Seems similar to the Amazon Nova Act API, which is still in research preview.

  • omkar_savant 7 hours ago
    Hey - I'm on the team that launched this. Please let me know if you have any questions!
    • SoKamil 6 hours ago
      How are you going to deal with reCAPTCHA and ad impressions? Sounds like a conflict of interest.
      • omkar_savant 1 hour ago
        No easy answers on this one, unfortunately - lots of conversations are ongoing. But our default stance has been to hand control back to the user when a captcha arises and have them solve it.
  • jsrozner 3 hours ago
    The irony is that most tech companies make their money by forcing users to wade through garbage. For example, if you could browse the internet and avoid ads, why wouldn't you? If you could choose what Twitter content to see outside of their useless algorithms, why wouldn't you?
  • password54321 7 hours ago
    Doesn't seem like it makes sense to train AI around human user interfaces, which aren't really efficient. It's like building a mechanical horse.
    • jklinger410 7 hours ago
      Why do you think we have fully self driving cars instead of just more simplistic beacon systems? Why doesn't McDonald's have a fully automated kitchen?

      New technology is adopted slowly due to risk aversion; it's very rare for people to just tear up what they already have to re-implement new technology from the ground up. We always have to shoehorn new technology into old systems to prove it first.

      There are just so many factors that get solved by working with what already exists.

      • layman51 7 hours ago
        About your self-driving car point, I feel like the approach I'm seeing is akin to designing a humanoid robot that uses its robotic feet to control the brake and accelerator pedals, and its hand to move the gear selector.
        • bonoboTP 7 hours ago
          Yeah, that would be pretty good honestly. It could immediately upgrade every car ever made to self driving and then it could also do your laundry without buying a new washing machine and everything else. It's just hard to do. But it will happen.
          • layman51 6 hours ago
            Yes, it sounds very cool and sci-fi, but having a humanoid control the car seems less safe than having the spinning cameras and other sensors that are missing from older cars or those that weren't specifically built to be self-driving. I suppose this is why even human drivers are assisted by automatic emergency braking.

            I am more leaning into the idea that an efficient self-driving car wouldn't even need to have a steering wheel, pedals, or thin pillars to help the passengers see the outside environment or be seen by pedestrians.

            The way this ties back to the computer use models is that a lot of webpages have stuff designed for humans that would make it difficult for a model to navigate them well. I think this was the goal of the "semantic web".

        • iAMkenough 7 hours ago
          I could add self-driving to my existing fleet? Sounds intriguing.
      • alganet 4 hours ago
        > Why do you think we have fully self driving cars instead of just more simplistic beacon systems?

        While the self-driving car industry aims to replace all humans with machines, I don't think this is the case with browser automation.

        I see this technology as more similar to a crash dummy than a self-driving system. It's designed to simulate a human in very niche scenarios.

    • pixl97 7 hours ago
      Right, let's make APIs for everything...

      [Looks around and sees people not making APIs for everything]

      Well that didn't work.

      • odie5533 7 hours ago
        Every website and application is just layers of data. Playwright and similar tools have options for taking Snapshots that contain data like text, forms, buttons, etc that can be interacted with on a site. All the calls a website makes are just APIs. Even a native application is made up of WinForms that can be inspected.
        • pixl97 7 hours ago
          Ah, so now you're turning LLMs into web browsers capable of parsing Javascript to figure out what a human might be looking at, let's see how many levels deep we can go.
          • measurablefunc 6 hours ago
            Just inspect the memory content of the process. It's all just numbers at the end of the day, & algorithms do not have any understanding of what the numbers mean other than generating other numbers in response to the input numbers. For the record, I agree w/ OP: screenshots are not a good interface, for the same reasons that trains, subways, & dedicated lanes for mass transit are obviously superior to cars & their attendant headaches.
            • ssl-3 6 hours ago
              Maybe some day, sure. We may eventually live in a utopia where everyone has quick, efficient, accessible mass transit available that allows them to move between any two points on the globe with unfettered grace.

              That'd be neat.

              But for now: The web exists, and is universal. We have programs that can render websites to an image in memory (solved for ~30 years), and other programs that can parse images of fully-rendered websites (solved for at least a few years), along with bots that can click on links (solved much more recently).

              Maybe tomorrow will be different.

              • measurablefunc 6 hours ago
                The point was that process memory is the source of truth; everything else is derived & only throws away information that a neural network could use to make better decisions. Presentation of data is irrelevant to a neural network; it's all just numbers & arithmetic at the end of the day.
    • TulliusCicero 7 hours ago
      This is just like the comments suggesting we need sensors and signs specifically for self-driving cars for them to work.

      It'll never happen, so companies need to deal with the reality we have.

      • password54321 6 hours ago
        We can build tons of infrastructure for cars that didn't exist before but can't for other things anymore? Seems like society is just becoming lethargic.
        • TulliusCicero 14 minutes ago
          No, it's just hilariously impractical if you bother to think about it for more than five seconds.
    • michaelt 7 hours ago
      In my country there's a multi-airline API for booking plane tickets, but the cheapest economy carriers only accept bookings directly on their websites.

      Want to make something that can book every airline? You'd better be able to navigate a website.

      • odie5533 7 hours ago
        You can navigate a website without visually decoding the image of a website.
        • bonoboTP 7 hours ago
          Except if it's a messy div soup with various shitty absolute and relative pixel offsets, where the only way to know what refers to what is by rendering it and using gestalt principles.
          • measurablefunc 6 hours ago
            None of that matters to neural networks.
            • bonoboTP 6 hours ago
              It does, because it's hard to infer where each element will end up in the render. So a checkbox may be set up in a shitty way such that the corresponding text label is not properly placed in the DOM, so it's hard to tell what the checkbox controls just based on the DOM tree. You have to take into account the styling and placement pixel stuff, ie render it properly and look at it.

              That's just one obvious example, but the principle holds more generally.

              • measurablefunc 6 hours ago
                Spatial continuity has nothing to do w/ how neural networks interpret an array of numbers. In fact, there is nothing about the topology of the input that is in any way relevant to what calculations are done by the network. You are imposing an anthropomorphic structure that does not exist anywhere in the algorithm & how it processes information. Here is an example to demonstrate my point: https://x.com/s_scardapane/status/1975500989299105981
                • bonoboTP 6 hours ago
                  It would have to implicitly render the HTML+CSS to know which two elements visually end up next to each other, if the markup is spaghetti and badly done.
                  • measurablefunc 5 hours ago
                    The linked post demonstrates arbitrary re-ordering of image patches. Spatial continuity is not relevant to neural networks.
                    • bonoboTP 5 hours ago
                      That's ridiculous, sorry. If that were so, we wouldn't have positional encodings in vision transformers.
                      • measurablefunc 5 hours ago
                        It's not ridiculous if you understand how neural networks actually work. Your perception of the numbers has nothing to do w/ the logic of the arithmetic in the network.
                        • bonoboTP 5 hours ago
                          Do you know what "positional encoding" means?
                          • measurablefunc 5 hours ago
                            Completely irrelevant to the point being made.
                • ionwake 6 hours ago
                  Why are you talking about image processing? The guy you're talking to isn't.
                  • measurablefunc 6 hours ago
                    What do you suppose "render" means?
                    • bonoboTP 5 hours ago
                      The original comment I replied to said "You can navigate a website without visually decoding the image of a website." I replied that decoding is necessary to know where the elements will end up in a visual arrangement, because often that carries semantics. A label that is rendered next to another element can be crucial for understanding the functioning of the program. It's nontrivial just from the HTML or whatever tree structure where each element will appear in 2D after rendering.
                      • measurablefunc 5 hours ago
                        2D rendering is not necessary for processing information by neural networks. In fact, the image is flattened into a 1D array & loses the topological structure almost entirely, b/c the topology is not relevant to the arithmetic performed by the network.
                        • bonoboTP 4 hours ago
                          I'm talking about HTML (or other markup, in the form of text) vs image. That simply getting the markup as text tokens will be much harder to interpret since it's not clear where the elements will end up. I guess I can't make this any more clear.
    • CuriouslyC 7 hours ago
      We're training natural language models to reason by emulating reasoning in natural language, so it's very on brand.
      • bonoboTP 7 hours ago
        It's on the brand of stuff that works. Expert systems and formal, symbolic, rules-based if-else reasoning were tried; they failed. Real life is messy and fat-tailed.
        • CuriouslyC 6 hours ago
          And yet we give agents deterministic tools to use rather than tell them to compute everything in model!
          • bonoboTP 6 hours ago
            Yes, and here they also operate deterministic GUI tools. Thing is, many GUI programs are not designed so well. Their best interface and the only interface they were tested and designed for is the visual one.
    • aidenn0 6 hours ago
      Reminds me of WALL-E where there is a keypad with a robot finger to press buttons on it.
    • golol 6 hours ago
      If we could build mechanical horses they would be absolutely amazing!
    • ivape 6 hours ago
      What you say is 100% true until it's not. It seems like a weird thing to say (what I'm saying, that is), but please consider that we're in a time period where everything we say is true minute by minute, and no more. It could be that the next version of this just works, and works really well.
    • wahnfrieden 7 hours ago
      It's not about efficiency but access. Many services do not provide programmatic access.
  • btbuildem 3 hours ago
    Does it work with ~legacy~ software? E.g., early-2000s Windows, WhateverSoft's Widget Designer? Does it interface over COM?

    There's a goldmine to be had in automating ancient workflows that keep large corps alive.

    • tech234a 2 hours ago
      “The Gemini 2.5 Computer Use model is primarily optimized for web browsers, but also demonstrates strong promise for mobile UI control tasks. It is not yet optimized for desktop OS-level control.”
  • numpad0 7 hours ago
    How big are Gemini 2.5(Pro/Flash/Lite) models in parameter counts, in experts' guesstimation? Is it towards 50B, 500B, or bigger still? Even Flash feels smart enough for vibe coding tasks.
    • thomasm6m6 7 hours ago
      2.5 Flash Lite replaced 2.0 Flash Lite which replaced 1.5 Flash 8B, so one might suspect 2.5 Flash Lite is well under 50B
  • sbinnee 4 hours ago
    I think it’s related that I got an email from google, titled “ Simplifying your Gemini Apps experience”. It reads no privacy maximize AI. They are going to automatically collect data from all google apps, and users no longer have options to control access to individual apps.
  • Havoc 4 hours ago
    At some point, just having APIs for the web would make sense. Rendering it and then throwing LLMs at interpreting it seems… suboptimal.

    Impressive tech nonetheless.

    • guelo 3 hours ago
      The programmable web is a dead 20-year-old dream that was crushed by the evil tech monopolists: Facebook, Google, etc. This emerging LLM-based automation tech is a glimmer of hope that we will be able to regain our data and autonomy.
  • strangescript 8 hours ago
    I assume its tool calling and structured output are way better, but this model isn't in Studio unless it's being silently subbed in.
  • Oras 7 hours ago
    It is actually quite good at following instructions, but I tried clicking on job application links, and since they open in a new window, it couldn't find the new window. I suppose it might be an issue with BrowserBase, or just the way this demo was set up.
    • MiguelG719 7 hours ago
      are you running into this issue on gemini.browserbase.com or the google/computer-use-preview github repo?
  • mosura 7 hours ago
    One of the slightly buried stories here is BrowserBase themselves. Great stuff.
  • orliesaurus 5 hours ago
    Does it know what's behind the "menu" of different apps? Or does it have to click on all menus and submenus to find out?
  • CryptoBanker 5 hours ago
    Unless you give it specific instructions, it will google something and give you the AI-generated summary as the answer.
  • asadm 7 hours ago
    This is great. Now I want it to run faster than I can do it.
    • pbhjpbhj 3 hours ago
      Then it will be detected and blocked...
  • cryptoz 8 hours ago
    Computer Use models are going to ruin simple honeypot form fields meant to detect bots :(
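
    (For context, a typical honeypot is a hidden field humans never see but naive bots fill in -- sketch with Express, field name made up. A vision-based agent defeats it precisely because it only acts on rendered fields:)

        import express from 'express';

        const app = express();
        app.use(express.urlencoded({ extended: false }));

        // The form contains <input name="website" style="display:none">.
        // Humans never fill it; a naive form-filling bot fills every field.
        app.post('/contact', (req, res) => {
          if (req.body.website) {
            return res.status(400).send('bot detected');
          }
          res.send('thanks!');
        });

        app.listen(3000);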
    • layman51 7 hours ago
      You mean the ones where people add a question that is like "What is 10+3?"
    • jebronie 6 hours ago
      I just tried to submit a contact form with it. It successfully solved the ReCaptcha but failed to fill in a required field and got stuck. We're safe.
  • tgsovlerkhgsel 2 hours ago
    I'm so looking forward to it. Many of the problems that should be trivially solved with either AI or a script are hard to impossible to solve because the data is locked away in some form.

    Having an AI handle this may be inefficient, but as it uses the existing user interfaces, it might allow bypassing years of bureaucracy, and when the bureaucracy tries to fight it to justify its existence, it can fight it out with the EVERYONE MUST USE AI OR ELSE layers of management, while I can finally automate that idiotic task (using tens of kilowatts rather than a one-liner, but still better than having to do it by hand).

  • keepamovin 3 hours ago
    It’s basically an OODA loop. This is a good thing
  • amelius 5 hours ago
    Only use in environments where you can roll back everything.
  • bonoboTP 7 hours ago
    There are some absolutely atrocious UIs out there for many office workers, who spend hours clicking buttons, opening popup after popup, clicking repetitively on checkboxes, etc. - e.g., entering travel costs in academia and elsewhere. You have no idea how annoying that type of work is; you pull out your hair. Why don't they make better UIs, you ask? If you ask, you have no idea how bad things are. Because they don't care; there is no communication; it seems fine; the software creators are hard to reach; the software is approved by people who never used it and decide based on gut feel, PowerPoints, and feature tickmarks. Even big-name brands like SAP are horrible at this.

    If such AI tools can automate this soul-crushing drudgery, it will be great. I know that you can technically script things with Selenium, AutoHotkey, whatnot, but you can imagine that's a nonstarter in a regular office. This kind of tool could make things like that much more efficient. And it's not like it will obviate the jobs entirely (at least not right away): these offices often have immense backlogs and are understaffed as is.

  • TIPSIO 7 hours ago
    Painfully slow
    • John7878781 7 hours ago
      That doesn't matter so much when it can happen in the background.
      • alganet 5 hours ago
        It matters a lot for E2E testing. I would totally trade the ease of the AI solution for a faster, more complicated one if it starts impacting build times.

        Few things are more frustrating for a team than maintaining a slow E2E browser test suite.

  • mmaunder 5 hours ago
    I prepare to be disappointed every time I click on a Google AI announcement. Which is so very unfortunate, given that they're the source of LLMs. Come on big G!! Get it together!
  • alexnewman 6 hours ago
    A year ago I built something that used RAG and accessibility mode to navigate UIs.
  • iAMkenough 7 hours ago
    Not great at Google Sheets. Repeatedly overwrites all previous columns while trying to populate new columns.

    > I am back in the Google Sheet. I previously typed "Zip Code" in F1, but it looks like I selected cell A1 and typed "A". I need to correct that first. I'll re-type "Zip Code" in F1 and clear A1. It seems I clicked A1 (y=219, x=72) then F1 (y=219, x=469) and typed "Zip Code", but then maybe clicked A1 again.

    • omkar_savant 1 hour ago
      Could you share your prompt? We'll look into this one
  • dude250711 7 hours ago
    Have average Google developers been told/hinted that their bonuses/promotions will be tied to their proactivity in using Gemini for project work?
    • peddling-brink 7 hours ago
      > bonuses/promotions

      more like continued employment.

      • astrange 4 hours ago
        FAANG much prefers to not pay you and let you leave on your own.
    • teaearlgraycold 7 hours ago
      I know there was a memo telling Googlers they are expected to use AI at work and it’s expected for their performance to increase as a result.
      • dude250711 7 hours ago
        HBO's Silicon Valley ended way too soon. The plot pretty much writes itself.
        • Imustaskforhelp 7 hours ago
          Don't worry, maybe someone will create AI slop for this on Sora 2 or the like (this was satire).

          On a serious note: what the fuck is happening in the world?

  • mianos 7 hours ago
    I sure hope this is better than pathetically useless. I assume it is to replace the extremely frustrating Gemini for Android. If I have a Bluetooth headset and I try "play music on Spotify", it fails about half the time. Even with YouTube Music. I could not believe it was so bad, so I just sat at my desk with the helmet on and tried it over and over. It seems to recognize the speech but simply fails to do anything. Brand new Pixel 10. The old speech recognition system was way dumber, but it actually worked.
    • bsimpson 7 hours ago
      I was riding my motorcycle the other day, and asked my helmet to "call <friend>." Gemini infuriatingly replied "I cannot directly make calls for you. Is there something else I can help you with?" This absolutely used to work.

      Reminds me of an anecdote where Amazon invested however many person-lives in building AI for Alexa, only to discover that alarms, music, and weather make up the large majority of things people actually use smart speakers for. They're making these things worse at their main jobs so they can sell the sizzle of AI to investors.

      • mianos 5 hours ago
        Yes, I am also talking about a Cardo. If it hadn't worked nearly 100% of the time this time last year, it might not be so incredibly annoying; but going from working to complete crap, with no option to go back to the working system, is bad.

        It's like Google staff are saying, "If it means promotion, we don't give a shit about users."

  • GeminiFan2025 6 hours ago
    Impressive interface interaction by Gemini 2.5. Could be great for accessibility.
  • GeminiFan2025 6 hours ago
    The new Gemini 2.5 model's ability to understand and interact with computer interfaces looks very impressive. It could be a game-changer for accessibility and automation. I wonder how robust it is with non-standard UI elements.
  • John7878781 6 hours ago
    [dead]
  • animanoir 6 hours ago
    [dead]
  • ihsw 8 hours ago
    [dead]
  • susu12138 8 hours ago
    [flagged]
    • susu12138 8 hours ago
      It will increase Gemini CLI's abilities.