Here is a walkthrough of how it works: https://youtu.be/63co74JHy1k, and you can try it for free at https://autotab.com by downloading the app.
Why a dedicated editor?
The number one blocker we've found in building more flexible, agentic automations is, by far, performance quality (https://www.langchain.com/stateofaiagents#barriers-and-chall...). For all the talk of cost, latency, and safety, the fact is most people are still just struggling to get agents to work. The keys to solving reliability are better models, yes, but also intent specification. Even humans don't zero-shot these tasks from a prompt. They need to be shown how to perform them, and then refined with question-asking + feedback over time. It is also quite difficult to formulate complete requirements on the spot from memory.
The editor makes it easy to build the specification up as you step through your workflow, while generating successful task trajectories for the model. This is the only way we've been able to get the reliability we need for production use cases.
But why build a browser?
Autotab started as a Chrome extension (with a Show HN post! https://news.ycombinator.com/item?id=37943931). As we iterated with users, we realized that we needed to focus on creating the control surface for intent specification, and that being stuck in a Chrome side panel wasn't going to work. We also knew that we needed a level of control for the model that we couldn't get without owning the browser. In Autotab, the browser becomes a canvas on which the user and the model take turns showing and explaining the task.
Key features:
1. Self-healing automations that don't break when sites change
2. Dedicated authoring tool that builds memory for the model while defining steps for the automation
3. Control flows and deep configurability to keep automations on track, even when navigating complex reasoning tasks
4. Works with any website (no site-specific APIs needed)
5. Runs securely in the cloud or locally
6. Simple REST API + client libraries for Python, Node
We'd love to get early feedback from the HN community, ideas for where you'd like the product to go, or to hear about your experiences in this space. We will be in the comments for the next few hours to respond!
I noticed in another comment that you said some steps can be made 'optional' (e.g. clicking through a modal). In my ancient Excel macro adventure, what I learned was that I had to tweak the heck out of the VBA code that the Record button generated, which led to me just straight writing VBA for everything and eventually abandoning the Record feature entirely. I had a similar experience later on with AutoHotKey. What are the analogous aspects of Autotab to this? Also, to what extent is hand-manipulating the underlying automation possible and/or necessary to get optimal results?
Currently there is a bit of a learning curve for training Autotab to be really reliable in hard cases. We expect we'll be able to decrease it significantly in the next few months, as we get models to do more of the thinking about how to best codify a given task solution/workflow. As an intuition pump for why we expect such rapid progress: in the scenario you described, you'd just have a model write the VBA code for you.
I tried it out on a workflow I've been manually piecing together and it gave me a bunch of "Error encountered, contact support" messages when doing things like clicking on a form input field, or even a button.
The more complex "Instruction" block worked correctly instead (literally things like: click the "Sign In" button), but then I ran out of the 5 minutes of free run time when trying to go through the full flow. I expect this kind of thing will be fixed soon, as it grows.
In terms of ultimate utility, what I really want is something which can export scripts that run entirely locally, but falling back to the more dynamic AI enhanced version when an error is encountered. I would want AutoTab to generate the workflow which I could then run on my own hardware in bulk.
Anyway, great work! This is definitely the best implementation I've seen of that glimpsed future of capable AI web browsing agents.
Curious what you mean by generating a workflow that you run on your own hardware? Is this different from running Autotab locally?
My other request is probably not in line with your business model. I get the sense that Autotab is always communicating with some server on your end, probably for the various bits of AI functionality. What I was asking for is the ability to export the actions/workflow as, say, a Python script (like a Selenium script, or even better, a script which drives your browser) which performs the actions in the Autotab workflow.
I need AI understanding when creating the workflow, or healing in case of an error, but I don't always need it when just executing a prepared script. In those (non AI needed) cases, I don't really want to use up my runtime minutes just because I'm executing a previously generated workflow.
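To make it concrete, here's roughly the kind of exported script I'm imagining: plain Selenium driving a local browser, with made-up URLs and selectors standing in for whatever Autotab recorded. Not anything Autotab actually generates today, just an illustration of the request.

    # Illustration only: a hand-written stand-in for an "exported" Autotab workflow.
    # The URL and selectors are placeholders, not anything Autotab emits.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    wait = WebDriverWait(driver, 10)
    try:
        driver.get("https://example.com/login")  # step 1: open the site
        # step 2: click the recorded "Sign In" button
        wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button.sign-in"))).click()
        # step 3: fill a recorded form field
        wait.until(EC.presence_of_element_located((By.NAME, "q"))).send_keys("invoice 123")
        # ... remaining recorded steps ...
        # On an exception here is where I'd want to fall back to the AI-assisted run.
    finally:
        driver.quit()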
I can’t overstate how much having a robust system for breaking down tasks and iterating on them has helped us.
For one of our recent projects, we had to integrate complex workflows with third-party systems, and it was clear that reliability came down to how well we could define and refine intent over time.
I’m especially curious about your self-healing automations. That’s an area where we’ve found a lot of value using models that can adapt to subtle UI changes, but it’s always a tradeoff with latency. Would love to hear more about how you balance that in production!
Looking forward to trying Autotab and seeing how it compares with some of the internal tools we’ve built!
Which layer is the automation happening? Inside using Dev tools? Multiple?
What is the self-healing mechanic? I'm guessing invoking an LLM to find what happened and fix it?
I guess what I'm wondering is. Is this some sort of hybrid between computer use and Dev tools usage?
For instance, if Autotab is trying to click the "submit" button on a sparse page that looks like previous versions of that page, that click might take a few hundred milliseconds. But if the page is very noisy, and Autotab has to scroll, and the button says "next" on it because the flow has an additional step added to it, Autotab will probably escalate to a bigger model to help it find the right answer with enough certainty to proceed.
There is a certain cutoff in that hierarchy of compute that we decided to call "self-healing" because latency is high enough that we wanted to let users know it might take a bit longer for Autotab to proceed to the next step.
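Not our actual implementation, but here's a rough sketch of that tiered idea in Python. Every function, selector, and threshold is a placeholder, just to illustrate how the cheap checks run first and the "self-healing" tier only kicks in when confidence is low:

    # Sketch of the escalation idea only; these are placeholders, not Autotab internals.
    from typing import Optional, Tuple

    # Each tier returns (element, confidence) or None. These dummies stand in for
    # cached-selector matching, a small model, and a large "self-healing" model.
    def cached_selector_match(page: str) -> Optional[Tuple[str, float]]:
        return ("button#submit", 0.95) if "submit" in page else None  # few-hundred-ms path

    def small_model_locate(page: str) -> Optional[Tuple[str, float]]:
        return ("button.next", 0.85)

    def big_model_self_heal(page: str) -> Optional[Tuple[str, float]]:
        return ("button.next", 0.99)  # slow path: re-reads the page, can scroll, etc.

    def locate(page: str) -> Optional[str]:
        tiers = [(cached_selector_match, 0.9),
                 (small_model_locate, 0.8),
                 (big_model_self_heal, 0.0)]  # the "self-healing" cutoff lives here
        for tier, threshold in tiers:
            result = tier(page)
            if result and result[1] >= threshold:
                return result[0]
        return None

    print(locate("a noisy page where the button now says Next"))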
That's disappointing as the devtools approach always has limitations.
Kura agents, Runner H, and scrapybara will all end up more reliable than you.
You can also use Anthropic’s Computer Use model directly in Autotab via the instruct feature - our users find it most helpful for handling specific subtasks that are complex to spell out, like picking a date in a calendar.
Can I point it at my own LLM or am I locked into using OpenAI?
Right now we do not let you BYO LLM, but it's something we would love to provide an option for where possible!
A lot of AI tools promise the world and don't deliver. We explicitly don't want anyone to pay us until they're sure Autotab can do their task, even though the model costs during editing are actually much higher than during runtime.
We haven't done a lot with Scribe-like documentation cases. Given the pace at which this technology is developing, we're focused on making Autotab really good at the most economically valuable tasks.
If you wanted Autotab to reconcile payments, you would teach it to go to wherever the payments are listed, e.g. a banking app. There you would have it iterate through the unreconciled payments. For each payment, you'd have Autotab go to the invoicing tool and look up details from the payment (e.g. IBAN, information from the reference number, amount) to find the matching customer and invoice. This is where most of the reasoning happens - you can teach Autotab what counts as sufficiently close to be a match with prompts and examples. Then you can have Autotab mark the invoice as paid, go back to the payment app, and mark the payment with the invoice number it grabbed from the matched invoice.
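If it helps to see the shape of it, here's that same loop as a rough Python sketch. Every function here is a stand-in for a step you'd teach Autotab, not a real Autotab API:

    # Shape of the reconciliation workflow described above; all helpers are stand-ins
    # for steps you would teach Autotab, not a real API.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Payment:
        iban: str
        reference: str
        amount: float

    @dataclass
    class Invoice:
        number: str
        amount: float

    def list_unreconciled_payments() -> List[Payment]:
        return []  # placeholder: "go to the banking app, iterate unreconciled payments"

    def find_matching_invoice(p: Payment) -> Optional[Invoice]:
        return None  # placeholder: the reasoning step, guided by your prompts and examples

    def mark_invoice_paid(inv: Invoice) -> None: ...
    def mark_payment_reconciled(p: Payment, invoice_number: str) -> None: ...

    for payment in list_unreconciled_payments():
        invoice = find_matching_invoice(payment)
        if invoice is None:
            continue  # or flag for human review
        mark_invoice_paid(invoice)
        mark_payment_reconciled(payment, invoice.number)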
Usage Information. To help us understand how you use our Services and to help us improve them, we automatically receive information about your interactions with our Services, like the pages or other content you view, the searches you conduct, and the dates and times of your visits.
Desktop Activity on our Services. In order to provide the Services, we need to collect recordings of your desktop activity while using our Services, which may include audio and video screen recordings, your cookies, photos, local storage, search history, advertising interactions, and keystrokes.
Information from Cookies and Other Tracking Technologies. We and our third-party partners collect information using cookies, pixel tags, SDKs, or other tracking technologies. Our third-party partners, such as analytics partners, may use these technologies to collect information about your online activities over time and across different services.
[...]
How We Disclose the Information We Collect
Affiliates. We may disclose any information we receive to any current or future affiliates for any of the purposes described in this Privacy Policy.
Vendors and Service Providers. We may disclose any information we receive to vendors and service providers retained in connection with the provision of our Services.
Autotab has a structured type system underlying the workflows, so any data processed in the course of an automation can be referenced in later steps. It's a bit like a fuzzy programming language for automation, and the model generates schemas to ensure data flows reliably through the series of steps.
For example, users often start by collecting information in one system (using an extract step as you mentioned), then cross-reference it in another, and then submit some data by having Autotab type it into a third system. In Autotab, you can just type @ to reference a variable; each step has access to data from previous steps.
At the end, you can get a dump of all of Autotab's data from a run as a JSON file, or turn specific arrays of data into CSV files using a table step.
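If you'd rather post-process that JSON dump yourself, something like this works. Just a sketch: the file name and the "payments" field are made up, and your dump will contain whatever fields your steps actually produced.

    # Turning one array from a run's JSON dump into a CSV yourself.
    # "autotab_run.json" and the "payments" field are illustrative placeholders.
    import csv
    import json

    with open("autotab_run.json") as f:
        run_data = json.load(f)

    rows = run_data.get("payments", [])  # any array of objects from the run
    if rows:
        with open("payments.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)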
You can schedule skills in Autotab to run at arbitrary frequency.
How does this not violate TOS? Do you have legal protection set up from megacorps trying to bully you with legal threats?
Automation despite TOS via Adversarial Interop should be a Digital Human Right. Godspeed.
As more and more AI-agent-enabled tooling comes out, this will become a bigger issue (the fact that people are automating these services against the TOS), so it's good if everyone who can get legal help has and shares the tactics to fight back against any civil TOS-based legal threats, so we are all protected.
For a case with lots of requests how does Autotab handle ip-blocking? Does each run use a different portal instance?
Who is your vendor for residential proxies? That’s quite a sketchy industry.
For 2FA, different users take different approaches. Everything from teaching Autotab to pull auth codes from their email, to setting intervention requests at the top of their skills, to enterprise integrations that we support with SSO and dedicated machine accounts.
Autotab also has the ability to securely sync session data from your local app to cloud instances. This usually removes the need for doing 2FA again for sites with “remember this device” functionality.
We can enable captcha solving for select customers, but don’t allow that in the public app to prevent abuse.
After you've done that, the API is great for cases where you want to incorporate Autotab into a larger data flow or product.
For instance, say Company A has taught Autotab to migrate their customers' data - so their customers just see a sync button in the Company A product, which kicks off an Autotab run via the API. Same for restaurant booking, if you'd want that to happen programmatically.
Docs are here with sample code: https://docs.autotab.com/api-reference
Here is more info on auth and security: https://docs.autotab.com/manual/security
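As a rough sketch of what kicking off a run from your own backend looks like - note that the endpoint path, payload fields, and auth header below are placeholders, the API reference above has the real interface:

    # Placeholder sketch of triggering a run over REST; URL, header scheme, and
    # payload fields are illustrative only - see the API reference for the real ones.
    import os
    import requests

    resp = requests.post(
        "https://api.autotab.example/v1/runs",  # placeholder URL
        headers={"Authorization": f"Bearer {os.environ['AUTOTAB_API_KEY']}"},  # placeholder auth
        json={"skill": "migrate-customer-data", "inputs": {"customer_id": "cus_123"}},  # placeholder fields
    )
    resp.raise_for_status()
    print(resp.json())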
If the modal pops up frequently, you can also record a click to dismiss it and make that click optional, so Autotab knows to move on when the modal doesn't appear.
Is "learning", used as a noun, a term of art in this field?
If not, my reactioning to that using is that it is a being bad English that causes producings of gratings on the ears.
Source: https://scholar.google.com/scholar?start=0&q=%22learnings+fr...
https://news.ycombinator.com/item?id=40225546
I cold-watched only half of it, without reading any info on the project, but that’s how everyone does it, I guess.
But I get the idea. Automate by example with automatic scenario builder and fuzzy matching ui via ai.
As someone who works in automation, I (again, blindly) suggest looking into anti-detection and human behavior like mouse movements, typing errors and pauses, because that’s what your (and all ours) main enemy will be in the next decade.
All in all, this is in high demand, afaiu. I tend to use a classic ML approach for that (avoiding browser automation cause it obviously only works in a browser and limits/divides the area of application), but would love to try something that self-heals on site changes. Although I think I’d better use something that can detect changes and reconfigure my ML params rather than using it directly, cause I don’t really trust modern AI to free-float in runtime, and also costs.
Autotab has exited due to multiple fatal errors. Please contact support for assistance: [email protected].
What extension would you like to automate?
Urgh. I was excited about this. Anxiously awaiting email/other SSO (we use MS).