I really want to love this, but my experience in the first 20 seconds was unfortunately like some of my other experiences coding against Fly APIs: they're broken.
can I live with some rough edges for some personal workflows that only impact me when things break? sure. however, I was thinking about playing with some CI/CD stuff using sprites that would impact our whole team if things broke and I'm really on the fence because of this experience in the first 20 seconds.
Fly team - please put some black box probes or just better testing on the example you give in the quick start. if you document it, test it.
I wish more companies had open issue trackers (some proprietary software has issues on GitHub, for example, but it doesn't need to be GitHub; just let people discuss issues in the open)
I'm really excited about https://sprites.dev/ - it hits two of my favourite problems at once:
1. Developer environment sandboxes. This is a cheap and convenient way to run Claude Code / Codex CLI / etc in YOLO mode in a persistent sandboxed VM with a restricted blast radius if something goes wrong.
2. Sandbox API. Fly now have a product that lets me make a simple JSON API call to run untrusted code in a new sandbox. There's even snapshotting support so I can roll back to a known state after running that code.
BTW Simon, I was super happy when I heard on Theo's podcast that he will be encouraging you to monetise your work more. I'm super appreciative of your work and I'm pretty convinced that the more you profit from it, the better the universe will be!!!
This looks great, I've been wanting a dev sandbox that doesn't run the risk of costing a lot if I forget to turn it off.
I had a few issues
1. manpath: can't set the locale; make sure $LC_* and $LANG are correct
suspect this is due to it inheriting locale from my local machine? easy to get around with some updates to .bashrc
2. the $SHELL environment variable in my sprite is `/opt/homebrew/bin/fish`. I use fish on my local (mac + homebrew) machine and it seems to have been inherited from there. It's nice to be using fish in the sprite, but it seems weird that $SHELL in the sprite points to a non-existent path. Slightly concerning that a local env var is being transferred to a remote machine without my explicit permission; I have some sensitive env vars locally.
Philosophically, I like Fly and have been a customer since very early on.
That said, I dread having to do anything CLI related, which for hobby projects is like once every few weeks.
Glancing at the docs for Sprite, I worry that this will be another CLI where a good 95% of the time that I go to invoke a command, my workflow is interrupted by an auto-updater that takes longer than whatever interaction I'm trying to do and derails my train of thought.
As I was reading this I was a bit confused by the issues they mention, but at work I use Claude SSHed to a persistent dev server and I’d be annoyed if I didn’t have eg my git repos there all the time or any part of that workflow was ephemeral. I’m not really aware of what everyone else is doing with sandboxes etc.
But the bit at the end with the MDM server made it click for me. I’ve started generating tiny iOS apps for personal software stuff, because they solve data storage better than the web (at least on iOS). A database on some other server seems like a bad fit/overkill for this stuff, client side storage is too flaky because Safari. But iOS apps are limiting in their own annoying ways compared to web apps.
This looks like a really interesting solution, I can just store the data on a sprite with SQLite or whatever. Visit its URL to use my app, then does it go away on its own after a short time? I could have done that before with a server with storage, but this seems easier/probably cheaper.
If this works well/the way I’m hoping it might be the sweet spot for simple personal software that needs persistent data and you want to run anywhere.
One feature that would make this really nice is if it could have something like Vercel preview environments, where I need to auth with my fly account to view the URL. That'd solve the public URL problem without me needing to do my own auth thing in every app.
I know it's on me for thinking this -- since the domain is fly.io -- but I was really hoping this was some local solution.
Not self-hosted, but just local. A thin command line wrapper to something (docker? bubblewrap?) that gave me sort of a containerized "VM" experience for my local machine using CoW.
This is seriously cool - it's exactly the DX and API I've been waiting for from sandboxed execution providers.
I'd love to be able to configure the base image/VM in a way that doesn't bundle coding tools or anything else I don't need, and comes with some other binaries installed (I'm more interested in using this as an API for a sandbox use-case I have). Is there a way to do this at the moment / is this on the roadmap?
Another option would be configuring the sprite via checkpoint and then cloning the checkpoint from a base sprite, but I don't see this option anywhere either.
Yes! It would be kinda cool to have the ability to docker-deploy (think the fly method even -- just to get your sprite on its feet the way YOU want it) a base sprite image and then just go from there in the normal sprite way from then on.
I've been having so much fun working on sprites (and working with sprites) the last several months. There are some neat parts of the Elixir side of this we're going to open source soon.
One of the coolest things about this is that Claude in his environment --- without him asking to --- knows how to drive Sprites. If you ask it to run a server, it will register it as a local service so it survives reboots. Without you asking to, it'll checkpoint when it makes big changes. I think this is kind of freaky.
I can't say enough how, if you're using this like Kurt and Chris have been, you have like, a dozen sleeping Sprites in your Sprite list. If you're not doing anything with them, they're not really costing you anything. When you want to do something new, there's no point figuring out which of your existing Sprites to do it on. Just make a new one.
Always having a sane place to run anything I happen to be doing, without making any decisions, it's a weird feeling.
That’s a great demo! For curious mere mortals, are all those custom instructions that make Claude know how to use it public? I’d like to learn how to drive it myself too, just out of curiosity!
You pay for the storage you actually use (not the raw capacity). If you build, like, a relatively complicated Python web service with some assets, and all the build deps that go with that, you might be on the hook for, like, 90 cents in a month.
I might have missed this in the docs, but is there a way to fork/clone a sprite, or restore a checkpoint into a new one?
Use cases: set up my preferred env in one sprite and use that as a template for others; or fire off a few independent sprites with claude code exploring alternative solutions, then choose a winner and reap the rest.
It's coming, and it'll make sense how and why next week when I run the "how this shit works" post.
I actually pushed to include it in the launch release. You'd have to ask Kurt why he didn't, but I think the idea is just to get more real-world usage first.
Do you expect that to replace git worktree for getting Claude to work on multiple things in parallel? That was something I was curious about watching the demo video.
I saw this headline, saw the tweets and missed what this was about.
Then read Simon Willison's breakdown and got the 'Aha!'.
I like what they've done, played with it and immediately started to plan how I'd try to implement it myself.
I guess this will be the way to go, for development setups instead of using a dedicated machine. Especially when mobile clients are created for Sprites.
Wow, this looks absolutely fantastic. Can't wait to take it for a spin. I'm actually surprised it isn't seeing more traction here!
In particular, I'm really excited about the extremely fast start up time and checkpointing. I'm curious if anyone knows any alternatives in this space?
> Claude is a hyper-productive five-year-old savant. It’s uncannily smart, wants to stick its finger in every available electrical socket, and works best when you find a way to let it zap itself.
This seems cool, but beware that Fly's other products are not exactly models of stability and polish.
API downtime is a semi-frequent occurrence, as are transient API errors and slowness.
I've also had a ticket open with support for weeks due to rampant billing issues. For instance, a destroyed instance still shows up in my usage report as actively accruing billed time, and at a rate faster than is even possible (something like 2 hours for every 1 actual hour that has passed.)
They've released two new products in the AI space, this and Phoenix.new, and my worry is that they are focused on new products over making what they have good and reliable.
> When you start a feature branch on your own, do you create an entirely new development environment to do it?
… yes? We have a few wrapper scripts around worktree operations that copy some docker volumes (pg data, bundle cache, etc.) from the base and spin up an entirely new stack on different ports with a host alias. We don’t have to install any deps beyond that because we copied over the ruby gems bundle cache and we’re using Yarn PnP + “zero installs” for client-side deps.
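For anyone curious what that actually looks like, here's a rough sketch of the kind of wrapper script being described above; the repo, volume, and port names are made up, and your compose file would need to read the port from the environment:

    #!/usr/bin/env bash
    # Hypothetical worktree-per-feature wrapper; names are placeholders.
    set -euo pipefail
    branch="$1"          # e.g. ./new-env.sh my-feature 1
    offset="${2:-1}"

    # New checkout alongside the main one, on its own branch.
    git worktree add "../app-$branch" -b "$branch"

    # Copy the base Postgres volume so the new stack starts with the same data.
    docker volume create "pg-$branch"
    docker run --rm -v pg-base:/from -v "pg-$branch":/to alpine sh -c 'cp -a /from/. /to/'

    # Bring the stack up under its own compose project on shifted ports.
    (cd "../app-$branch" && PORT=$((3000 + offset)) docker compose -p "app-$branch" up -d)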
> There are some important million-person apps, but most of them just destroy civil society, melt our brains, and arrange chauffeurs for individual cheeseburgers.
All the cool technical stuff aside - this, for me, was the standout line of the article
AFAIK fly.io runs firecracker and cloud-hypervisor VMs. This seems to have a copy-on-write filesystem underneath.
Given their principled take on only trusting full-VM boundaries, I doubt they moved any of the storage stack into the untrusted VM.
So maybe a virtio-block device passing through discard to some underlying CoW storage stack, or maybe virtio-fs if it's running on ch instead of fc? Would be interesting to hear more about the underlying design choices and trade-offs.
Edit: from their website, "Since it's just ext4, you won't run into weird edge cases like you might with NFS or FUSE mounts. You can happily use shared memory files, for example, so you can run SQLite in all its modes." So it's a virtio block device supporting discard that's exposed to the VM. Interesting; fc doesn't support virtio discard passthrough, and support for ch is still in progress...
I have a post coming next week about the guts of this thing, but I'm curious why you think we'd avoid running the storage stack inside the VM. From my perspective that's safer than running it outside the VM.
My impression is that you (very reasonably) treat anything inside the VM as untrusted. If you want trusted rollback, presumably that implies that the VM can't have any ability to tamper with the snapshot?
But maybe you have parts of the stack that don't need to be trusted inside the VM somehow? Looking forward to the article.
fly.io is doing really good work. I've super enjoyed building our product on their platform. I love fly-replay combined with super fast start-up.
I've been thinking a lot about how to run agents (and skills) securely while giving them a lot of powerful capabilities.
I recently used their macaroons library to turn arbitrary API keys (e.g. for stripe's API) into macaroons. I route requests for an upstream host (like stripe) through Envoy as a mitm proxy which injects the real creds after verifying the macaroon.
It is such a powerful pattern. I'm always worried about leaking sensitive keys through prompt injection attacks (or just sending them to anthropic), but in this model you can attenuate the keys (both capabilities & validity window) client side. The Envoy proxy lives inside my flycast network so it can't be accessed externally.
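For anyone trying to picture the attenuation step: here's a toy illustration of the macaroon idea using plain openssl. This is not Fly's macaroon library, just the HMAC-chaining concept it's built on, with made-up key and caveat names:

    # Each caveat re-HMACs the previous signature, so the holder can only ever
    # narrow a token (add caveats), never widen it.
    root_key="proxy-root-secret"                 # known only to the proxy
    sig=$(printf 'stripe' | openssl dgst -sha256 -hmac "$root_key" | awk '{print $NF}')

    caveat="not_after=$(( $(date +%s) + 300 ))"  # 5-minute validity window, added client side
    sig=$(printf '%s' "$caveat" | openssl dgst -sha256 -hmac "$sig" | awk '{print $NF}')

    echo "attenuated token: stripe|$caveat|$sig"
    # The proxy, which holds root_key, replays the same chain, checks the caveats,
    # and only then injects the real Stripe API key into the upstream request.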
It would be so cool if fly built something like this into sprites.dev (though I can see how it would be spooky to have fly install their own certs for stripe, etc...)
My use case is very similar, but I wanted a transparent proxy so I could run unmodified scripts. It is a tricky design decision though.
I also mount a little fuse filesystem that mints macaroons on read (with a shorter lifetime, probably inspired by y'all but I forget from where).
I work on realtime collaboration of markdown files (currently in Obsidian), which has become a shared-context substrate for agents, skills, etc.. Our own company workspace has skills that have scoped access to fly, stripe, gmail, etc. We're definitely drinking the file-over-app personal-software-for-teams Kool-Aid, so the problem space for us includes access control and auditing.
We have enough control over the execution environment in a Sprite (unlike a Fly Machine, where the implied Linux contract we have with our users gets in the way) that we can trivially hide explicit proxies.
We can also attach Macaroons to Fly Machines and Sprites for configurable ambient privileges, something I've wanted us to expose as a feature for a very long time.
Awesome, I look forward to that. I think that could be a major differentiator for sprites. I wish I could work on that problem at fly.io scale.
What is the contract with sprites? Is it just built-with-linux but not promising Linux? Or is it more like a machine but y'all control the container image?
There's no "formal" contract in either place but people running on Fly Machines expect that there's nothing at all between them and the kernel, and we don't have that expectation in Sprites; we can do whatever we want. :)
I don't want to get too far into the rest of the details only because I'm writing this up for next week. They're not that interesting technically, but they're a really big deal for us in other ways.
* Near-instant creation
* Automatic spin-down scale-to-zero, so you're not paying for it when it's not in use.
If you're using these like we are internally, you've got like 2 dozen of them sitting around in the background sleeping. They're BIC disposable computers. "When in doubt just make another one."
"Containers" are that, and fast, in part because they share kernels, so there's no serious rebooting happening. But the consequence of that design is you share a kernel with untrusted cotenants.
And then there's just the idea of being able to pull these out of the sky literally whenever you want one. If you want to try something new out real quick, it makes no sense to figure out which of your existing Sprites to use. Just make a new one. If you're a little OCD, like I am, every once in awhile you can go prune, if you really care.
The post says "hardware isolated" but below in the sandbox it says firecracker, which I thought was supposed to be a secure way to run containers from multiple tenants on a single host. Also I thought Fly machines were already using firecracker.
I'm having trouble understanding the difference to Fly machines. If you spin up a Debian container on a machine with a persistent volume, doesn't that have everything this does? Is this about providing a layer of useful configuration/management software on top?
something that isn’t clear to me: what’s the billing when i’m not actively using a sprite? does that go to zero as well, or am i still being billed for storage?
If it's similar to cloudflare, then it should be usage based. That is you only pay for what is active. (ie: if you are running a task that is waiting on network for 1 hour, you don't pay for cpu but your app is loaded and you are paying for memory). So if your app is dormant (not using cpu or memory), you only pay for the storage you are using.
yeah reading further into the docs it looks like that’s the model. storage is pretty cheap, $0.00068/GB-hr, which works out to roughly 1.6 cents per GB per day.
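Back-of-envelope math on that quoted rate (and note that, per the comments above, you're billed for storage actually used, not raw capacity):

    rate=0.00068                                       # $ per GB-hour, as quoted above
    echo "per GB per day:  $(echo "$rate * 24" | bc -l)"            # ~$0.016, i.e. ~1.6 cents
    echo "100 GB, 30 days: $(echo "$rate * 24 * 30 * 100" | bc -l)" # ~$49, if you really used all 100 GB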
That's roughly what Cloudflare containers are right? (with migrations being the checkpoints?). Cloudflare containers are also nearly instant and have scale-to-zero pricing. The only difference here is the CLI?
Your pricing looks competitive on compute but roughly 4-5 times more expensive on memory and double on storage.
I wonder the same thing. What’s so different than your own vps and using lxd to create a container. Make two bash aliases and wow you can go in and out quickly and recreate it with one command.
If you have an LXD setup working for your own workloads that's working well for you, that's awesome. Why would we want to talk you out of that? Fundamentally you're getting at the difference between "elastic" cloud services and personal infrastructure. Personal infra is great!
If it helps: Jerome has been working for a couple months on a local, open-source Rust version of Sprites, so you can use the same DX with your own infrastructure. We just think this is the right "shape" for modern sandboxes, wherever you actually run them.
Playing around with this for a small amount of time, it is very neat but also there are a bunch of things that are unclear / undocumented (I assume the documentation is coming so I'm not faulting them for it not being there yet).
Some things that are unclear:
- How should I auth to github? sprite console doesn't use ssh (afaik) so I guess not agent forwarding?
- What on-machine APIs are available? Can I use the fly oidc provider[1]? There's a /.sprite/api.sock but curl'ing /v1/tokens/oidc gets a 404.
- How much is it going to cost me? I know there is pricing but it's hard to figure out what actual usage would be like. Also I don't see any usage info in the webui right now.
[1]: https://fly.io/blog/oidc-cloud-roles/
Don't think of this as in any way connected to the Fly Machines API. For now, just take it on its own terms. We'll have an open-source local version of it relatively soon, if that clarifies anything.
To follow up on this a bit, something that I really want is a way to build and launch apps from an llm really easily. I am imagining an environment with a database, object storage, and a publicly reachable webserver. I think this could be that with OIDC auth to an s3 bucket and litestream.
I was previously thinking about doing the same thing on my homeserver with tailscale to expose the web interface publicly and tailscale oidc auth to an s3 bucket for object storage.
i believe the .sprite dir has some stuff to help claude answer those questions. haven’t done it myself but my friend said he was able to get claude to set it all up for him (yolo mode helps) including connecting to github.
I want something like this, but running on my own box. I now have a Linux box with plenty of RAM and storage under my desk. (It happens to be an NVIDIA DGX Spark, but I'm not really interested in passing the GPU through to these sandboxed VMs; I know that's not practical anyway.) Maybe I'll see if I can hack together a local solution like this using Firecracker.
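If anyone else wants to try the same thing: Firecracker is driven entirely over a local HTTP API on a unix socket, so the core of it is small. A rough sketch follows; the kernel and rootfs paths are placeholders, and you'd still need to sort out networking and a CoW story for snapshots yourself:

    # Minimal Firecracker microVM, configured over its API socket.
    firecracker --api-sock /tmp/fc.sock &

    api() { curl -sS --unix-socket /tmp/fc.sock -X PUT "http://localhost$1" \
                 -H 'Content-Type: application/json' -d "$2"; }

    api /machine-config '{"vcpu_count": 2, "mem_size_mib": 1024}'
    api /boot-source    '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"}'
    api /drives/rootfs  '{"drive_id": "rootfs", "path_on_host": "./rootfs.ext4", "is_root_device": true, "is_read_only": false}'
    api /actions        '{"action_type": "InstanceStart"}'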
This seems cool but maybe not for a production setting requiring concurrency? I just signed up on PAYG which offers 3 concurrent sprites. I only see an option to upgrade to 10 concurrent sprites.
Without getting into Kurt's galaxy-brained take on the declining importance of "production" in a post-AI world, I'd say: yeah, run prod apps on Fly Machines, for more predictable performance, scaling, and pricing. Do exploratory computing --- "figuring out what you'd run on a Fly Machine" --- in Sprites.
The sprite installer got stuck after "Installed to ..." for me. After waiting a few minutes I just ctrl+ced and looked at what it does after and manually ran "sprite auth setup --token <token>" and that seems to just hang for me.
I thought fly.io snapshots weren't guaranteed to stick around? I can't find the docs mentioning it now, but I checked within the last few months... maybe they changed it?
Like it, a lot. I think the future of software is going to be unimaginably dynamic. Maybe apps will not have statically defined feature sets, they will adjust themselves around what the user wants and the data it has access to. I’m not entirely sure what that looks like yet, but things like this are a step in that direction.
> I think the future of software is going to be unimaginably dynamic.
>...I’m not entirely sure what that looks like yet, but things like this are a step in that direction.
This made me stop and think for a moment as to what this would look like as well. I'm having trouble finding it, but I think there was a post by Joe Armstrong (of Erlang) that talked about globally (as in across system boundaries, not global as in global variable) addressable functions?
You can do this now without an MCP, by auth'ing the `sprite` command inside of a Sprite and telling Claude to go document it for you. You can do things like "make me three versions of this feature on three different Sprites so I can compare them". It is spooky how easy it is to teach agents this stuff.
I spun one up, started a server on port 8080, ran `sprite url`, it gave me a URL, that URL just has `{ "error": "unauthorized" }`. How am I supposed to access it?
It requires your api token by default.
Oh, thanks, that works. ([edit] rewrote this whole post) I guess I need to install my own tunneling into the VM to do web development on it, but that's not so bad. The lack of regional support is crippling, because whatever region you put me in is ~200ms from me and the typing lag is terrible.
I'd love to adopt this for all my development (which I currently do using rented cloud instances, so I'm pretty comfortable with the remote development paradigm). I'm especially excited about the snapshot/clone pattern, and have (this past week) been researching solutions for exactly this problem.
Hope you launch multiple regions for this ASAP. Will be watching.
If you `sprite console` to it, it'll forward any ports you open to localhost. You can tunnel almost everything through the CLI with the `sprite proxy` command.
sprites.dev looks very interesting to me.
Is there a way to set up a limit to how much scaling a sprite can get, or to set a spending limit?
I wouldn't want to spin something up, and then be surprised by an unexpectedly high bill.
Something simpler I've done, in the same spirit: LXC containers (using Incus) in a VM. LXC containers look and feel like VMs, but are very lightweight. And the VM they all run in provides the hard sandbox.
and when I spin up a new LXC container, cloud-init sets it up with the agents and my repos inside
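In case it helps anyone reproduce that setup, here's a rough sketch; the image alias, repo URL, and package list are just examples:

    # Profile carrying the cloud-init config every new container gets.
    cat > dev-init.yaml <<'EOF'
    #cloud-config
    packages: [git, curl]
    runcmd:
      - git clone https://github.com/example/my-repo /root/my-repo
    EOF

    incus profile create dev
    incus profile set dev cloud-init.user-data "$(cat dev-init.yaml)"

    # Fresh, disposable container with the repos and tooling already inside.
    incus launch images:debian/12/cloud sandbox1 --profile default --profile dev
    incus exec sandbox1 -- bash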
This has been in the works for quite awhile here. We put a long bet on "slow create fast start/stop" --- which is a really interesting and useful shape for execution environments --- but it didn't make sense to sandboxers, so "fast create" has been the White Whale at Fly.io for over a year.
Not really. One of the primary features of sprites.dev that I don't see anywhere on exe.dev is a fast way to create and restore checkpoints, like a git repo for your entire VM.
This is needed for sandboxes if you don't want to throw them away and start over when something goes wrong.
With sprites.dev you can create an additional checkpoint and then turn Claude Code (or your preferred agent) loose to do anything. Even if it burns down the sandbox you can just restore a checkpoint in about a second.
[exe.dev co-founder here] If you are curious, we have a `clone` command coming soon for sub-second creation of a new VM out of an existing VM. This is our first pass at checkpointing: rather than introducing an independent `snapshot` noun, you can keep a VM around as the snapshot.
We realize that is not going to cover all the business cases we have been discussing with customers and plan to introduce a snapshot concept (in particular for rewinding the state of a VM to an automatic backup), but we have a lot of FS work underway before we can launch it. There are some other things we want out of our VMs that we cannot do using conventional cloud techniques, so we have code to write.
Yes that’s certainly a great feature and they don’t have it currently. For what it’s worth, they do have a teaser about “Persistent disks with some really interesting work coming soon.”
I have just now learned about exe.dev and it looks awesome.
I really hate that modern development means not having persistent disk. I’m glad there are new options coming out which let you do this in an easier way than managing my own EC2 instances!
Would I think of this as an EC2 instance which automatically and quickly scales to zero, with pricing only for resources consumed? (CPU and RAM when up, and disk all the time?)
It's a fast starting and fast pausing persistent VM, with a ton of built in developer tools (including a preconfigured Claude Code) and an extra JSON API for executing commands within it so you can treat it as a sandbox.
How exactly can code agents make use of this? You install claude code inside a Sprite and run it there? Do you also need to put all your codebase in this sprite?
Claude Code is already in the Sprite; just create one and type "claude". But they have an API and Claude (or Gemini or Codex) can use them remotely too. They're disposable computers. Use them however you want.
So this is neat and useful and I think will/should get traction.
So let's say sprite is my building/dev ground floor. I get my thing/app to where I want it, but at the end of the day I think my thing/app is so awesome that it should be a production app for the whole world, and, I want to actually deploy it on fly, say.
Have you guys thought about that workflow, and what it might take to push button/migrate a sprite app over to fly?
It depends on which Fly person you talk to. If you talk to Kurt he'll try to sell you on his crazy dream of how all software is going to be malleable and "prod" doesn't mean anything anymore. If you ask me: tell Claude to make a Dockerfile of the current state of your Sprite, and then deploy it as a Fly Machine. It's a good question, and we're working out how the transition from Sprite to Fly Machine works, but that's how I'd do it today.
Also, any plans for GPU sprites?
I don't think we're going to do anything new with GPUs any time soon.
I'm not really sure I get the value of these being remotely hosted. We're writing code on super powerful machines with hypervisors built in.
My libvirt setup does this right now, I have a little dumb cli I wrote that lets me create, start, stop, save, restore, and destroy preconfigured machines. I use it for testing provisioning scripts and playbooks. You get the full cloud experience by including a cloud-init ISO so you can ssh to it the moment it boots with my key. Didn't realize I was at the frontier of computing paradigms.
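For reference, the stock-tools version of that pattern looks roughly like the sketch below; the key and image paths are placeholders, and cloud-localds comes from the cloud-image-utils package:

    # Seed ISO so the VM is ssh-able the moment it boots.
    cat > user-data <<'EOF'
    #cloud-config
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... me@laptop
    EOF
    cloud-localds seed.iso user-data

    virt-install --name scratch1 --memory 2048 --vcpus 2 \
      --disk path=debian-12-generic-amd64.qcow2 \
      --disk path=seed.iso,device=cdrom \
      --import --os-variant debian12 --noautoconsole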
Don't get me wrong the interface fly has is super nice but it feels like the endgame isn't remote hosted computers but a nice user-friendly interface (i.e. what docker did) but it's for persistent local VMs.
You feel wrong. I would eat a bug before I ran LLM text on our blog. This one thing --- the fact that you can't negate a clause without people claiming an LLM wrote it --- this alone do I place angrily at the feet of AI.
Peoples' writing is influenced by what they read, so such a strong objection to someone suggesting that an LLM might have been involved in the text of a blog post won't fly with me.
https://sprites.dev/api has this command:
$ curl -X POST "https://api.sprites.dev/v1/sprites" \
    -H "Authorization: Bearer $SPRITES_TOKEN" \
    -d '{"name": "my-sprite"}'
which responds with
{"error":"name is required"}
if you use the request body in the full "Create Sprite" documentation at https://sprites.dev/api/sprites#create then it does work.
a "quick start" really should just work when you copy paste them.
I wrote a bunch more about this here: https://simonwillison.net/2026/Jan/9/sprites-dev/
Fly's Sprites.dev addresses dev environment sandboxes and API sandboxes together - https://news.ycombinator.com/item?id=46561089 - Jan 2026 (10 comments)
https://container-use.com/quickstart
Running IncusOS on some local hardware with ZFS underneath is a phenomenally powerful sandbox.
Also check out the 5 min demo we put out where I walk thru some sprite basics: https://www.youtube.com/watch?v=7BfTLlwO4hw
This alone was worth the upvote!
Maybe I’ve been isolated from The World for too long, but this sounds … unhealthy.
https://fly.io/blog/tokenized-tokens/
Tokenizer is an explicit proxy though right?
Love your work :)
Is this just a fancy VPS like DigitalOcean with an https endpoint, snapshot and restore?
(Same thing goes for exe.dev)
Also "containers" always had the option to attach durable storage via bind mounts.
I still get confused by the "this isn't containers" but it's kind of similar.
Maybe I am just too caught up in semantics.
A VPS that is instant to boot, with super simple automatic routing and https proxy, snapshots, and durable storage is a win regardless.
SQLite works great for my apps. I haven't needed object storage yet, storing files on disk is enough.
This is a large pain point today if you aren't technical; most of the chat interfaces just let you create frontend-only apps.
Do people do this? I’ve never heard of it.
You can specify a max exec time for a process when you launch it via the API.
https://blog.exe.dev/meet-exe.dev
Have been experiencing intermittent connection drops as well.
You may find my writeup here useful: https://simonwillison.net/2026/Jan/9/sprites-dev/
Wait, what?