Docker is a genius idea that looks obvious in retrospect, but someone needed to invent it.
Docker is more than just chroot. You also need an overlay file system, an OCI registry, and the community behind it to create thousands of useful images. And, of course, the whole idea of building images layer by layer and using immutable images to spawn mutable containers.
I don't actually think you need network or process isolation. In terms of isolation, chroot is enough for most practical needs. Network and process isolation are nice to have, but they are not essential.
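To make the overlay point concrete, here is a rough shell sketch of the core trick (paths invented for illustration; run as root on a kernel with overlayfs): an immutable lower layer, a throwaway writable layer on top, entered with plain chroot.

    mkdir -p /tmp/demo/{lower,upper,work,merged}
    # pretend /tmp/demo/lower holds an unpacked "image" rootfs
    mount -t overlay overlay \
        -o lowerdir=/tmp/demo/lower,upperdir=/tmp/demo/upper,workdir=/tmp/demo/work \
        /tmp/demo/merged
    chroot /tmp/demo/merged /bin/sh   # all writes land in upper/, lower/ stays pristine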
I was a very early adopter of Docker and what sold me was Dockerfiles.
A SINGLE regular text file that took regular shell commands and could build the same deployment from scratch every time and then be cleaned up in one command.
This was UNHEARD of. Every other solution required learning new languages, defining “modules,” creating sets of scripts, or doing a lot of extra things. None of that was steezy.
I was so sold on Dockerfiles that I figured even if the Docker project died, my Dockerfiles would live on, because other people would copy the idea. Now it’s been 10 years, and Docker and containerization have changed a lot, but what hasn’t? Dockerfiles. My 10-year-old Dockerfiles are still valid. That’s how good they were.
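A minimal sketch of that workflow (base image and package choice are purely illustrative):

    cat > Dockerfile <<'EOF'
    FROM debian:stable-slim
    RUN apt-get update && apt-get install -y --no-install-recommends curl \
        && rm -rf /var/lib/apt/lists/*
    CMD ["curl", "--version"]
    EOF
    docker build -t myapp .   # rebuilds the same deployment from scratch every time
    docker run --rm myapp
    docker rmi myapp          # cleaned up in one command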
You can bind your application to 127.0.0.2 for one container and to 127.0.0.3 for another. Both can listen on port 80, and both can communicate with each other. And you can run another container bound to 1.2.3.4:80 and use it as a reverse proxy. You can use iptables/nftables to prevent undesired connections, and a manually (or script-) crafted /etc/hosts so that named hosts point to those loopback addresses. Or just a DNS server. It's all doable.
The only thing you need is the ability to configure the target application's bind address, and any sane application has that configuration knob.
Of course things are much easier with network namespaces, but you can go pretty far with the host network (and I'd say it might be easier to understand and manage).
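A rough sketch of that setup on the host network (python3's built-in http.server stands in for "any app with a bind-address knob"; run as root, or pick ports above 1024):

    # extra loopback addresses; on Linux, all of 127.0.0.0/8 already routes to lo anyway
    ip addr add 127.0.0.2/8 dev lo 2>/dev/null || true
    ip addr add 127.0.0.3/8 dev lo 2>/dev/null || true
    # two "containers", same port, different addresses
    python3 -m http.server 80 --bind 127.0.0.2 &
    python3 -m http.server 80 --bind 127.0.0.3 &
    # hand-crafted name resolution pointing at the loopback addresses
    printf '127.0.0.2 svc-a\n127.0.0.3 svc-b\n' >> /etc/hosts
    curl -s http://svc-a/ >/dev/null && echo svc-a is up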
You can see why people like the Docker experience: you get to do all of that through a single interface, instead of one-off scripts touching a ton of little things.
I tried to build something like Docker at work around 2003-2004. I was trying to solve the problem of distribution/updates/rollbacks of software on the network appliances we made. Overlay filesystems back then were immature/buggy, so it went nowhere. A loopback-mounted filesystem was not sufficient (I don't remember why).
What I always wondered is why qcow2 + qemu never gave rise to a similar system. They support snapshots/backing files, so it should have been possible to implement something similar to Docker. Instead, what we got is just this terrible libvirt.
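The layering primitive is there in qcow2 itself; a sketch of image-like stacking with backing files (file names invented):

    qemu-img create -f qcow2 base.qcow2 10G                         # the "base image"
    qemu-img create -f qcow2 -b base.qcow2 -F qcow2 app.qcow2       # copy-on-write layer on top
    qemu-img create -f qcow2 -b app.qcow2 -F qcow2 instance.qcow2   # per-instance writable layer
    qemu-img snapshot -c clean instance.qcow2                       # internal snapshot to roll back to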
The LXC approach is to run systemd in the container.
The quadlet approach is to not run systemd (/sbin/init) in the container; instead you create .container files in /etc/containers/systemd/ (rootful) or ~/.config/containers/systemd/*.container (rootless), so that the host systemd manages and logs the container processes.
Then I realized you said QEMU, not LXC.
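A minimal rootless quadlet sketch (image name and port are just examples):

    mkdir -p ~/.config/containers/systemd
    cat > ~/.config/containers/systemd/web.container <<'EOF'
    [Container]
    Image=docker.io/library/nginx:alpine
    PublishPort=8080:80

    [Install]
    WantedBy=default.target
    EOF
    systemctl --user daemon-reload      # the quadlet generator turns web.container into web.service
    systemctl --user start web.service
    journalctl --user -u web.service    # the host systemd owns the logs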
This; better yet, just use systemd-nspawn. You get the benefits of proper containers, configuration similar to any ol' systemd service, it's super easy to use, and builds are simple to automate with mkosi.
The one thing people really seem to miss about it: contrary to popular belief, you don't need a whole OS in the container. Minimal distroless containers work just fine with systemd-nspawn, much as they would on Docker.
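A sketch of how small that can be (debootstrap used here for brevity; mkosi can build a similar tree declaratively; run as root, paths invented):

    debootstrap --variant=minbase stable /var/lib/machines/app    # minimal rootfs, no init inside
    systemd-nspawn -D /var/lib/machines/app /bin/sh -c 'echo hello from the container'
    systemd-nspawn -D /var/lib/machines/app --bind=/srv/app:/app /app/run.sh   # bind-mount your payload in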
FreeBSD has had jails since version 4 (~year 2000), fwiw.
Much of the technology was there, but Docker was able to achieve a critical mass, with streamlined workflows. Perhaps as much a social phenomenon as a technical one?
Don't discount the technical innovation required to integrate existing technologies in a novel and useful way. Docker was an "off the shelf" experience unlike any other solution at the time. You could `docker run ...` and have the entire container environment delivered incrementally on demand with almost no setup required. It did have a social factor in that it was easy for people to publish their own images and share them. Docker Hub was provided as a completely free distribution service. The way they made distribution effortless was no doubt a major factor in why it took off.
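The whole "off the shelf" experience really was one line (image tag illustrative):

    docker run --rm -it alpine:3 sh -c 'cat /etc/os-release'
    # the first run pulls the image layers on demand; later runs start from the local cache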
I used FreeBSD on my firewall in the early 2000s, and on my NAS from around 2007 till last year.
The big pain with jails for me was the tooling. There were a number of non-trivial steps needed to get a jail that could host a networked service, with a lot that could go wrong along the way.
Sure, a proper sysadmin would learn and internalize these steps, but as someone who used it only now and again, it was a pain.
Way down the line, things like iocage came along, but it was fragile and unreliable when I tried it, leading to jails in weird states and such.
So I gave up and moved to Linux so I could use Docker.
Super easy to spin up a new service, and fairly self-documenting, since you configure everything in a script or compose file, so there's much less to remember.
Initially in a VM on Bhyve, now on bare metal.
It feels a bit sad though, as jails had some nice capabilities due to the extra isolation.
Yeah, it really was a social phenomenon. Ten years ago conferences were swarmed with Docker employees, swag, plenty of talks, and excitement.
The effort to introduce the concepts to the mainstream can’t be overstated. It seems mundane now, but it took a lot of grassroots effort and marketing to hit that critical mass.
containerd/stargz-snapshotter: https://github.com/containerd/stargz-snapshotter
containerd/nerdctl//docs/nydus.md: https://github.com/containerd/nerdctl/blob/main/docs/nydus.m... :
nydusify and Check Nydus image: https://github.com/dragonflyoss/nydus/blob/master/docs/nydus... :
> Nydusify provides a checker to validate Nydus image, the checklist includes image manifest, Nydus bootstrap, file metadata, and data consistency in rootfs with the original OCI image. Meanwhile, the checker dumps OCI & Nydus image information to output (default) directory.
nydus: https://github.com/dragonflyoss/nydus
awslabs/soci-snapshotter: https://github.com/awslabs/soci-snapshotter ; lazy start standard OCI images
/? lxc copy on write: https://www.google.com/search?q=lxc+copy+on+write : lxc-copy supports btrfs, zfs, lvm, overlayfs
lxc/incus: "Add OCI image support" https://github.com/lxc/incus/issues/908
opencontainers/image-spec; OCI Image spec: https://github.com/opencontainers/image-spec
opencontainers/distribution-spec; OCI Image distribution spec: https://github.com/opencontainers/distribution-spec
But then in opencontainers/runtime-spec//config.md (the OCI runtime spec's bundle config.json) there is an example of a config.json: https://github.com/opencontainers/runtime-spec/blob/main/con...
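For a quick look at what that bundle config.json contains, runc's stock `spec` subcommand will generate a default one (the bundle is just a directory convention: a config.json next to a rootfs/):

    mkdir -p bundle/rootfs && cd bundle
    runc spec --rootless   # writes ./config.json: ociVersion, process, root, mounts, linux.namespaces, ...
    # unpack any rootfs (e.g. exported from an OCI image) into ./rootfs, then:
    runc run demo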
LXD: https://canonical.com/lxd :
> LXD provides both [QEMU,] KVM-based VMs and system containers based on LXC – that can run a full Linux OS – in a single open source virtualisation platform. LXD has numerous built-in management features, including live migration, snapshots, resource restrictions, projects and profiles, and governs the interaction with various storage and networking options.
From https://documentation.ubuntu.com/lxd/latest/reference/storag... :
> LXD supports the following storage drivers for storing images, instances and custom volumes:
> Btrfs, CephFS, Ceph Object, Ceph RBD, Dell PowerFlex, Pure Storage, HPE Alletra, Directory, LVM, ZFS
You can run Podman or Docker within an LXD host, with or without a backing storage pool. FWIU it's possible for containers in an LXD VM to use the BTRFS, ZFS, or LVM storage drivers to create e.g. BTRFS subvolumes, instead of running overlayfs within the VM, by editing storage.conf.
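An illustrative LXD flow for that (image alias and pool name are examples):

    lxc storage create fast btrfs                     # backing storage pool
    lxc launch ubuntu:24.04 c1 --storage fast         # LXC system container
    lxc launch ubuntu:24.04 v1 --vm --storage fast    # QEMU/KVM virtual machine
    lxc snapshot c1 clean                             # snapshots come from the storage driver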
https://en.wikipedia.org/wiki/Solaris_Containers
https://www.youtube.com/watch?v=wW9CAH9nSLs