Just like kind runs containerd inside Docker, you can also run dockerd inside containerd-backed pods.
Start a privileged pod with the dind image, copy or mount your compose.yaml inside, and you should be able to docker compose up and down, all without mounting a socket (which won't exist anyway on containerd CRI nodes).
To go even further: KubeVirt runs on kind, so you could launch a VM with your compose file passed in via cloud-init.
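For the curious, here is a minimal sketch of the privileged dind pod described above. All names are illustrative, and it assumes the docker:dind image you pull ships the compose plugin and that your compose.yaml is small enough to live in a ConfigMap:

```yaml
# Hypothetical dind pod: dockerd runs inside the pod, so no host socket is needed.
apiVersion: v1
kind: Pod
metadata:
  name: dind
spec:
  containers:
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true            # dockerd needs a privileged container
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""                 # plain local socket inside the pod, no TLS
      volumeMounts:
        - name: compose
          mountPath: /stack
  volumes:
    - name: compose
      configMap:
        name: compose-file          # created from your compose.yaml beforehand
# Roughly:
#   kubectl create configmap compose-file --from-file=compose.yaml
#   kubectl apply -f dind-pod.yaml
#   kubectl exec -it dind -- sh -c 'cd /stack && docker compose up -d'
```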
So this uses k3s underneath. IMO any local Kubernetes distribution is a big resource hog compared to plain Docker. Anyone have ideas for something that is less resource-intensive than a local Kubernetes cluster but easier to orchestrate with than docker compose?
I was just telling some ex-coworker friends that there was a great need for a compose frontend to more powerful infra backends, and this feels like the answer.
Once I start working with it, I'll try to add health check support. That is crucial for a lot of what we're working on.
This is a personal project that I'm open-sourcing. It's one of those projects-that-should-exist-but-nobody-wants-to-kill-their-business.
It takes your standard docker compose file and runs it transparently in Kubernetes (k3s, actually), so your devs don't have cognitive dissonance between testing your stack locally on your laptop and making it work on Kubernetes in production.
It is primarily meant as a dev tool on your laptop, and as a replacement for docker compose.
I'm not quite sure what level of testing this facilitates. If you're testing as close to production as possible, you probably want templated k8s config that scales down to a small k8s cluster in CI (e.g. Helm with variables applied that make it minimal). If you just want a local stack to test components and not the k8s config, why not just use docker compose itself?
docker compose is beautiful because it uses a simple, elegant compose YAML file, and this is now an open standard: https://www.compose-spec.io/
The standard does not make it mandatory that the underlying system be Docker Compose (the reference implementation); it can be anything.
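For example, a compose file as small as this (purely illustrative) is valid under the spec, regardless of which engine ends up running it:

```yaml
# Minimal illustrative compose.yaml; any spec-compliant runtime can consume it.
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
```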
IMHO, kappal is the first project that takes your compose YAML file and runs it on Kubernetes as a transparent drop-in; there is nothing extra you need. It is useful for people who want to keep their stack as close to production (Kubernetes) as possible.
If that's not a big goal for you, then this is not very useful for you. But I'd argue: why do you care, if the compose YAML is the only thing you are using? You get all of Kubernetes.
Could I use this for running the same docker compose stack multiple times in parallel? I wrote a lot of bash glue code to make this happen (without Kubernetes) for integration and acceptance testing on a single server. Managing envs and networking was a pain, but mostly I struggle to keep it up to date with infrastructure changes in my platform.
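(For reference: with plain docker compose, per-run project names get part of the way there, since containers, networks, and volumes are namespaced per project. A rough sketch, with an illustrative service:)

```yaml
# compose.yaml for parallel test runs: publish to ephemeral host ports so
# instances never fight over the same port.
services:
  api:
    image: nginx:alpine
    ports:
      - "80"            # container port 80 -> random free host port
# Then, roughly:
#   docker compose -p run1 up -d
#   docker compose -p run2 up -d      # a second, fully isolated copy
#   docker compose -p run1 down
```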
With a single Tiltfile combined with a docker compose file, almost all of the infrastructure you need is configured on a local machine. It also supports running Kubernetes (most of the docs are around this), but you do not necessarily need to use it. It's my go-to when I have more than 2 docker containers/services I want to keep changing code for. Some teams I work with usually have 20 such containers for local dev.
And yes, you can nest Tiltfiles and even write normal Python if you want to mix things up.
I've just moved on from docker compose. Instead I have a K8s-style YAML file and use podman kube play. The learning curve is pretty small in my opinion, and at least it is a little closer to production.
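A minimal sketch of what that looks like (images and names are illustrative):

```yaml
# devstack.yaml: a plain Kubernetes Pod manifest that podman runs directly.
apiVersion: v1
kind: Pod
metadata:
  name: devstack
spec:
  containers:
    - name: web
      image: docker.io/library/nginx:alpine
      ports:
        - containerPort: 80
          hostPort: 8080
    - name: cache
      image: docker.io/library/redis:alpine
# Bring it up / tear it down with:
#   podman kube play devstack.yaml
#   podman kube down devstack.yaml
```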
If you ever wanna try it again, use kappal. You will get full k8s but with the UX of docker compose.
So is any of this tested?
[0] https://github.com/sandys/kappal/tree/main/test