196 points by prakashdanish 2 months ago | 56 comments
godelski 1 month ago
I totally get that these are different tools and I don't think nspawn makes Docker or Podman useless, but I do find it interesting that it isn't used more, especially for things you're running completely locally. Say, your random self-hosted server that isn't escaping your LAN (e.g. Jellyfin or the like)
cmeacham98 1 month ago
All you need to start running a Docker container is a location and tag (or hash). To update, all you do is bump the tag (or hash). If a slightly more complicated setup is necessary (environment variables, volumes, ports, etc.), this can all be easily represented in common formats like Docker Compose files or Kubernetes manifests.
How do you start running a systemd-nspawn container? Well, first you bootstrap an entire OS, then deal with that OS's package manager to install the application. You have to manage updates with the package manager yourself (and those updates likely aren't immutable). There's no easy declarative config - you'll probably end up writing a shell script or using a third-party tool like Ansible.
There have been many container/chroot concepts in the past. Docker's idea was not novel, but they did building and distribution far better than any alternative when it was first released, and it still holds up well today.
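To make the contrast concrete, here's a sketch of the declarative setup described above as a Compose file. The image tag, port, and host paths are illustrative, not taken from the thread:

```shell
# Write a minimal Compose file; updating the service later just means
# bumping the image tag and re-running `docker compose up -d`.
cat > docker-compose.yml <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin:10.9.7   # bump this tag to update
    ports:
      - "8096:8096"                   # Jellyfin's default HTTP port
    volumes:
      - /srv/jellyfin/config:/config  # persistent state lives outside the container
EOF
# then: docker compose up -d
```

The entire runtime contract fits in one small file; there is no OS bootstrap step at all.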
ranger207 1 month ago
placardloop 1 month ago
The logo of Docker is a ship with a bunch of shipping containers on it (the original logo was clearer, but the current logo still shows this). “Containers” has never been about “containment”, but about modularity and portability.
shykes 1 month ago
The two meanings - sandboxing and distribution - have coexisted ever since, sometimes causing misunderstandings and frustration.
globular-toast 1 month ago
This is as opposed to the "old world" where computers needed to be specifically provisioned for running said program (like having interpreters and libraries available etc.), which is like shipping prior to containers: ships were more specialised to carrying particular loads.
The analogy should not be extended to the ship moving and transporting stuff. That has nothing to do with it. The internet, URLs and tarballs have existed for decades.
guappa 1 month ago
They provided no sandboxing whatsoever.
ninkendo 1 month ago
Is it possible to do container escapes on occasion? Yes, but each of those is a bug in the Linux kernel that is assigned a CVE and fixed.
Running as non-root in the container is an additional layer of security but it’s not all-or-nothing: doing so doesn’t make you perfectly secure (privilege escalation bugs will continue to exist) and not doing so doesn’t constitute “nothing whatsoever”.
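That extra layer might look like this in practice (standard docker run flags; the image choice is arbitrary, and this requires a running Docker daemon):

```shell
# Run the containerized process as an unprivileged uid/gid, drop all
# capabilities, and forbid privilege escalation via setuid binaries.
# This narrows the blast radius; it is not a complete sandbox.
docker run --rm \
  --user 1000:1000 \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  alpine id
```

The `id` output inside the container should show uid 1000, not root.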
guappa 1 month ago
> Is it possible to do container escapes on occasion? Yes, but each of those is a bug in the Linux kernel that is assigned a CVE and fixed.
Not a bug: if you have permission to run mknod, it's an escape entirely by design that Docker lets you do :)
I wasn't talking about kernel bugs, of course there have been a lot of those causing escapes. I am talking about the default configuration that does absolutely 0 sandboxing. And it's not a bug, it's as intended.
If you want to run as root and don't even touch capabilities… yeah it's root. 0 protection, the stuff in the container is running as root and can easily escape namespaces.
maple3142 1 month ago
With the command above it is still possible to attack network targets, but let's just ignore that here. I just wonder how it is possible to obtain code execution outside the namespace without using kernel bugs.
edoceo 1 month ago
mdaniel 1 month ago
Try harder, friend, those require granted capabilities
$ PAGER=cat man 7 capabilities | grep -C1 MKNOD
CAP_MKNOD (since Linux 2.4)
Create special files using mknod(2).
$ docker run --rm -it public.ecr.aws/docker/library/ubuntu:24.04 /usr/bin/mknod fred b 252 4
/usr/bin/mknod: fred: Operation not permitted
guappa 1 month ago
ninkendo 1 month ago
No, --privileged doesn’t count. No, --cap-add=<anything> doesn’t count. The claim here is that docker has “zero sandboxing” by default, so you’re going to need to show that you don’t need either of those. Not just moving the goalposts and saying you can break out if you use the command line flag that literally says “privileged”.
godelski 1 month ago
prmoustache 1 month ago
godelski 1 month ago
vaylian 1 month ago
I think most people simply don't know about it. A lot of people also don't know that there are alternatives to Docker.
I use both, systemd-nspawn and podman containers. They serve different purposes:
systemd-nspawn: Run a complete operating system in a container. Updates are applied in-place. The whole system is writable. I manage this system myself. I also use the -M switch for the systemctl and journalctl commands on the host to peek into my nspawn containers. I create the system with debootstrap.
podman: Run a stripped down operating system or just a contained executable with some supporting files. Most of the system is read-only with some writeable volumes mounted at well-defined locations in the file system tree. I don't manage the container image myself and I have activated auto-updates via the quadlet definition file. I create the container based on an image from a public container registry.
Both solutions have their place. systemd-nspawn is a good choice if you want to create a long-lived linux system with lots of components. podman/docker containers are a good choice if you want to containerize an application with standard requirements.
systemd-nspawn is good for pet containers. podman is good for cattle containers.
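The "pet" workflow above can be sketched roughly like this (Debian-flavored; the path and machine name are made up, and the commands need root):

```shell
# 1. Bootstrap a complete OS tree with debootstrap, as described above.
debootstrap stable /var/lib/machines/mypet https://deb.debian.org/debian

# 2. Boot it interactively as a container.
systemd-nspawn -D /var/lib/machines/mypet -b

# 3. Or manage it as a machine, peeking inside from the host with -M:
machinectl start mypet
systemctl -M mypet status
journalctl -M mypet -e
```

Compare this with the one-line `docker run` flow: the pet needs bootstrapping and ongoing care, which is exactly the trade-off being described.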
fuhsnn 1 month ago
wvh 1 month ago
magicalhippo 1 month ago
I tried reading your link but I'm none the wiser, so perhaps you could provide the docker-equivalent one-liner to start a Jellyfin instance using systemd-nspawn?
godelski 1 month ago
I'll admit, the documentation to really anything systemd kinda sucks but awareness can help change that
magicalhippo 1 month ago
You're asking why no one has made something like Docker but with systemd-nspawn as the runtime or "engine".
edit: Found this article[1], which tries to do just that. Still not as convenient as Docker, but doesn't look terrible either.
[1]: https://benjamintoll.com/2022/02/04/on-running-systemd-nspaw...
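For what it's worth, systemd does ship a Docker-ish fetch step of its own: machinectl can download a root filesystem tarball and register it as a machine in one go. The URL below is a placeholder, since as far as I know nobody publishes a Jellyfin rootfs this way:

```shell
# Download and register a tarball as a machine image, then boot it.
machinectl pull-tar https://example.com/images/jellyfin-rootfs.tar.xz jellyfin
machinectl start jellyfin
```

What's missing relative to Docker is the ecosystem of prebuilt, layered images behind that URL, not the mechanism itself.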
godelski 1 month ago
magicalhippo 1 month ago
I think that's a common mistake. I'm fairly technical compared to your average user, but I don't have that much higher a tolerance for friction in stuff that's not my core concern.
Poor UX is definitely friction, and system administration is seldom my core concern. I'm fairly certain I'm not unique.
godelski 1 month ago
I mean more that technical people tend to be more willing to slog through a poor UX if the tool is technically better. I mean, we are all programmers here, right? Programming is a terrible UX, but it is the best thing we've got to accomplish the things we want. I'm saying that these people are often the first adopters, more willing to try new things. Of course, this doesn't describe every technical person, but the people willing to do these things are a subset of the technical group.
I definitely see UX as a point of friction and I do advocate for building good interfaces. I actually think it is integral to building things that are also performant and better from a purely technical perspective. I feel that as engineers/developers/researchers we are required to be a bit grumpy. Our goal is to improve things, to make new things, right? One of the greatest means of providing direction to that is being frustrated by existing things lol. Or as Linus recently said: "I'm just fixing potholes." If everything is alright then there's nothing to improve, so you gotta be a little grumpy. It's just about being the right kind of grumpy lol
jmholla 1 month ago
FROM scratch
COPY ./hello /root/
ENTRYPOINT ["./hello"]
> Here, our image contains 2 layers. The first layer comes from the base image, the alpine official docker image i.e. the root filesystem with all the standard shell tools that come along with an alpine distribution. Almost every instruction inside a Containerfile generates another layer. So in the Containerfile above, the COPY instruction creates the second layer which includes filesystem changes to the layer before it. The change here is "adding" a new file—the hello binary—to the existing filesystem i.e. the alpine root filesystem.
prakashdanish 1 month ago
psnehanshu 1 month ago
mortar 1 month ago
m463 1 month ago
echo abc && echo $_
abc
abc
except it's used with wget... wget URL && tar -xvf $_
does this work? Shouldn't tar take a filename? Hmm... also, it says there is an alpine layer with "FROM scratch"??
godelski 1 month ago
> echo 'Hello' 'world' 'my' 'name' 'is' 'godelski'
Hello world my name is godelski
> echo $_
godelski
> !:0 !:1 !:2 "I'm" "$_"
Hello world I'm godelski
The reference manual is here[0] and here's a more helpful list[1].
One of my favorites is
> git diff some/file/ugh/hierarchy.cpp
> git add $_
## Alternatively, but this is more cumbersome (but more flexible)
!!:s^diff^add
So what is happening with wget is > wget https://dl-cdn.alpinelinux.org/alpine/v3.18/releases/x86_64/alpine-minirootfs-3.18.4-x86_64.tar.gz && tar -xvf $_
## Becomes
> wget https://dl-cdn.alpinelinux.org/alpine/v3.18/releases/x86_64/alpine-minirootfs-3.18.4-x86_64.tar.gz
> tar -xvf https://dl-cdn.alpinelinux.org/alpine/v3.18/releases/x86_64/alpine-minirootfs-3.18.4-x86_64.tar.gz
Which, you are correct, doesn't work. It should actually be something like this:
> wget https://dl-cdn.alpinelinux.org/alpine/v3.18/releases/x86_64/alpine-minirootfs-3.18.4-x86_64.tar.gz -O alpine.tar.gz && tar xzf $_
This would work as the last parameter is correct. I also added `z` to the tar and removed the `-` because it isn't needed. Note that `v` often makes untarring files MUCH slower.
[0] https://www.gnu.org/software/bash/manual/html_node/Bash-Vari...
[1] https://www.gnu.org/software/bash/manual/html_node/Variable-...
ryencoke 1 month ago
> wget https://dl-cdn.alpinelinux.org/alpine/v3.18/releases/x86_64/alpine-minirootfs-3.18.4-x86_64.tar.gz && tar xzf ${_##*/}
[0] https://www.gnu.org/software/bash/manual/html_node/Shell-Par...
MyOutfitIsVague 1 month ago
https://pubs.opengroup.org/onlinepubs/009604499/utilities/xc...
vendiddy 1 month ago
You have iTerm, Terminal, etc. But what do those do? Those are not the shells themselves, right?
mdaniel 1 month ago
I tried to dig up the course but naturally things are wwaaaaaaay different now than back in my day. But OCW has something similar https://ocw.mit.edu/courses/6-828-operating-system-engineeri... and does ship the source files https://ocw.mit.edu/courses/6-828-operating-system-engineeri... although I have no idea why that's only present in a graduate level class
kritr 1 month ago
The terminal emulator receives keyboard input via your operating system, and passes it to the shell program via stdin.
The shell is responsible for prompting you and handling whatever you type. For example, the "$ " prompt waits for the next character from the terminal emulator until you hit newline.
The shell is responsible for parsing your input, executing any child programs “ls” for example, outputting their content to stdout, and prompting you again.
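The prompt/read/execute loop described above can be sketched as a toy shell in a few lines of bash (a real shell also does parsing, expansion, job control, and so on; this is only the skeleton):

```shell
# Toy shell: print a prompt, read one line, run it, repeat until EOF.
# Note: `read -p` only displays the prompt when stdin is a terminal.
while IFS= read -r -p '$ ' cmd; do
  eval "$cmd"
done
```

Piping commands into it (e.g. `printf 'echo hi\n' | bash toyshell.sh`) runs each line and exits at end of input, just like a non-interactive shell.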
godelski 1 month ago
alkh 1 month ago
godelski 1 month ago
> thisFunctionFails && echo "Hello world" && echo "I SAID $_"
> thisFunctionSucceeds && echo "Hello world" && echo "I SAID $_"
Hello world
I SAID Hello world
The left command has to be evaluated before the next one, so $_ still refers to the previous command.
fragmede 1 month ago
mdaniel 1 month ago
m463 1 month ago
FROM scratch
COPY ./hello /root/
ENTRYPOINT ["./hello"]
> Here, our image contains 2 layers. The first layer comes from the base image, the alpine official docker image i.e. the root filesystem with all the standard shell tools that come along with an alpine distribution.
But I thought "FROM scratch" was an empty container, while "FROM alpine" is a container with alpine libs/executables.
Otherwise, using "FROM scratch" to populate, for example, an ubuntu image would pollute the container.
prakashdanish 1 month ago
[1] - https://github.com/danishprakash/danishpraka.sh/issues/30
DeathArrow 1 month ago
adminm 1 month ago
The thing is that because Docker started the craze, the word "container" without further context has come to mean "Docker container" in the IT world.
prakashdanish 1 month ago