Jan 08, 2018

Bind dockerd to a TCP port

On a modern system (one running systemd), installing Docker sets up the dockerd daemon using systemd's socket activation. By default the socket is /var/run/docker.sock. If you want to connect to a remote machine over TCP, the obvious thing to do is to create /etc/docker/daemon.json and set the hosts list there. But that will conflict with the command-line flags used for socket activation. The correct way is to override systemd's socket activation config. Here's how (all commands are run as root):

mkdir -p /etc/systemd/system/docker.socket.d
echo '[Socket]' > /etc/systemd/system/docker.socket.d/tcp.conf
echo 'ListenStream=2375' >> /etc/systemd/system/docker.socket.d/tcp.conf
systemctl daemon-reload
systemctl restart docker.socket docker
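Note that a bare port number makes systemd listen on all interfaces, unauthenticated. To limit exposure, the drop-in can bind to a specific address instead; as an illustration, for localhost only:

```ini
[Socket]
ListenStream=127.0.0.1:2375
```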

Dec 06, 2017

Bundling a binary file into a shell script

When creating an auto-scaling group in EC2 I often try to package the deployment script into the user data. Installing packaged software is easy, but bundling the configuration files that are needed is less straightforward. If the files are not confidential in any way, I either clone a Git repository or download a tarball from our static assets domain. But this creates a dependency on external services and a slightly more complex deployment procedure. A few days ago I was faced with the same options again, and it didn't sit right with me to do all this for a couple of files that are a few kilobytes in total. I remembered that some software has installation scripts that bundle the binary blob inside the script.

First version

I searched and found an article in the Linux Journal that seemed to show what I wanted (and seems to be copied everywhere). You could download a single file that was a shell script with the binary blob inside. Usage would be close to this:

wget http://hostname.tld/bundle
sh bundle

or this

wget http://hostname.tld/bundle
chmod +x bundle
./bundle

Which is fine. However, the code was a bit longer than it should have been and I felt it could be done better. A little more research turned up an answer on Stack Overflow that mentioned uuencode and uudecode. Reading the man page, I saw it was closer to what I wanted. The code I wrote is available on my GitLab instance.

The implementation works as follows. The bundle has the script at the start of the file, with the encoded binary at the end. The shell executes the script part (which ends with exit so it doesn't continue into the encoded data and cause errors), and uudecode only starts processing after it sees the relevant header. The script feeds itself to uudecode (uudecode "$0"), which decodes the binary and writes it to disk, where the script can then use it. The code has both the build instructions in the Makefile and a usage example in the Bats tests.
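As a minimal sketch of the idea (substituting base64 -d for uudecode so it runs without sharutils installed; the file names are illustrative, not the ones from my repository):

```shell
#!/bin/sh
# Build a tiny self-extracting bundle, then run it.
set -eu
printf 'hello\n' > payload.bin   # stand-in for the real binary
cat > bundle <<'EOF'
#!/bin/sh
# Decode everything after the PAYLOAD marker in this very file ($0),
# then exit before the shell reaches the encoded lines.
sed '1,/^PAYLOAD$/d' "$0" | base64 -d > payload.bin
cat payload.bin   # use the decoded file
exit 0
PAYLOAD
EOF
base64 payload.bin >> bundle     # append the encoded blob
sh bundle                        # prints the payload: hello
```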

Second version

However something kept nagging me. I wanted a simple invocation method like so:

curl http://hostname.tld/bundle | sh

And in the case of EC2, I could then simply use the bundle itself as the user data. Otherwise I would need to host it somewhere, and in the user data I would download and run the bundle, which means that if the bundle was unavailable the instance would fail to provision.

Everything I found assumed that the file was present in the file system for uudecode to decode. If the script was piped, there was no file for uudecode to decode. I kept mulling it over and came up with a short, clean solution to this problem, which is available here, again with build instructions and test examples.

This time I used AWK to replace a single line in the script with the file, encoded using uuencode but this time in base64 (to keep the script valid, without any characters that have special meanings). That is piped to uudecode, which decodes it and saves it to disk. The script can then continue with the binary blob present.
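A sketch of the pipe-friendly idea (again with base64 -d standing in for uudecode, and with illustrative names): the payload rides in a heredoc instead of after an exit, so the script also works when piped to sh.

```shell
#!/bin/sh
# Build a bundle whose payload lives in a heredoc, then pipe it to sh.
set -eu
printf 'hello\n' > payload.bin   # stand-in for the real binary
cat > template <<'OUTER'
#!/bin/sh
# The placeholder line below is swapped for the encoded payload at build time.
base64 -d > payload.bin <<'EOF'
__PAYLOAD__
EOF
cat payload.bin   # use the decoded file
OUTER
base64 payload.bin > payload.b64
# Replace the placeholder line with the encoded payload, as the AWK build step does.
awk '/^__PAYLOAD__$/ { while ((getline l < "payload.b64") > 0) print l; next }
     { print }' template > bundle
rm template payload.b64
cat bundle | sh                  # works even when piped: prints hello
```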

This method is less space efficient and the build procedure is less obvious. But the ability to use the resulting script as the user data (or to pipe the output from curl to sh) is worth it, in my opinion.

Nov 26, 2017

Building inside a Docker container with the correct user

Lately I've been using Docker containers as clean, easily portable and easily removable build environments. In those cases the image contains the needed build tools, and the project is mounted as a volume inside the container. The artifacts are then built inside the container but are placed inside the volume. However, a small problem arises: the artifacts (and whatever other files are created, like caches) are owned by the default user, root, making editing or removing those files less straightforward.

The trivial solution

The trivial solution is to run the container with the correct user and group ids, like so:

uid="$(id -u)"
gid="$(id -g)"
docker run -v "$PWD:/volume" --user "$uid:$gid" buildimage make

I personally found it tiresome after the third time I had to sudo chown the project because I forgot to specify the uid and gid, and it's a (low) barrier to entry for new users.

A better solution

The solution I've come up with is this small script, which sets the uid and gid values to those of the owner and group of the volume and then executes the command.

#!/bin/sh
set -eu
# If we're not running as root there are no privileges to drop; just run the command.
[ "$(id -u)" = "0" ] || { echo "Not running as root, continuing as the current user."; eval exec "$@"; }
command -v stat > /dev/null || { echo "Can't find stat, exiting."; exit 1; }
command -v gosu > /dev/null || { echo "Can't find gosu, exiting."; exit 1; }
# Take the uid and gid from the owner of the current directory (the mounted volume).
uid="$(stat . -c '%u')"
gid="$(stat . -c '%g')"
eval exec gosu "$uid:$gid" "$@"
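The owner lookup is plain GNU stat; as a quick illustration of what the script computes (run on any directory you own, it reports your own ids):

```shell
# Illustration of the stat-based owner lookup (GNU coreutils stat).
dir=.
uid="$(stat "$dir" -c '%u')"   # numeric uid of the directory's owner
gid="$(stat "$dir" -c '%g')"   # numeric gid of the directory's group
echo "$uid:$gid"               # the value the script hands to gosu
```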

The script is also available for download. The only dependency is gosu. You can download it and check it into your VCS and incorporate it into your Dockerfile, or download it during the image build, like so:

FROM buildpack-deps
RUN curl -fsSL https://github.com/tianon/gosu/releases/download/1.10/gosu-amd64 -o gosu-amd64 && \
    install -o root -g root -m 755 gosu-amd64 /usr/local/bin/gosu && \
    rm gosu-amd64 && \
    curl -fsSL https://www.shore.co.il/blog/static/runas -o runas && \
    install -o root -g root -m 755 runas /entrypoint && \
    rm runas
ENTRYPOINT [ "/entrypoint" ]
VOLUME /volume
WORKDIR /volume
ENV HOME /volume

Setting the home directory to the mounted volume will result in some files (like package manager caches) being created there, which you may or may not want. And then finally, to build, run

docker run -v "$PWD:/volume" buildimage make