Building a Dockerized CLI That Can Write Files Locally
If the toolchain you are using makes it difficult to ship a CLI as a single executable binary, it can be useful to ship a docker image instead. You can create a script that runs the docker image and masquerades as the actual CLI. However, if your CLI outputs files, you need a way to make sure the files are written to the host rather than inside the docker container, preferably with the same user and group permissions as the shell running the CLI.
By default, docker executes everything as the root user, so your CLI may be outputting files that only the root user has access to. Fixing this requires quite a bit of setup.
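To see the problem for yourself, here is a minimal reproduction on a Linux host (the image and file name are just illustrative):
docker run --rm -v "$PWD":/out alpine touch /out/hello.txt
ls -l hello.txt
# -rw-r--r-- 1 root root ... hello.txt — owned by root, not by you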
When building a docker image, you can pass the current GID and UID as arguments to the build command:
docker build -t my-cli --build-arg UID=$(id -u) --build-arg GID=$(id -g) .
In your Dockerfile, you should create a new user and group using these IDs:
ARG UID
ARG GID
RUN groupadd --force --gid $GID dockeruser
RUN useradd --create-home --home-dir /home/dockeruser --shell /bin/bash --uid $UID --gid $GID dockeruser
The --force flag in groupadd ensures the command does not fail if the group already exists. This can happen because your docker image's /etc/group may come with pre-defined groups that overlap with the GID you pass in.
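For example, building on a macOS host typically passes GID 20 (the staff group), which is already taken by dialout in Debian-based images. You can check for a collision like this (assuming a Debian base; the GID is just an illustration):
docker run --rm debian:stable getent group 20
# dialout:x:20: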
The --home-dir flag for useradd is important. Without it, --create-home will usually create the home directory at /home/dockeruser, but depending on where you run the script, the UID that is passed in can belong to a system user instead of a normal user, in which case useradd may not create the home directory. It's better to specify the directory explicitly to cover your bases.
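If you want to be defensive, a build-time sanity check (my own addition, not required by the setup) can fail the build early when the home directory is missing:
# Fail the build if useradd did not create the home directory
RUN test -d /home/dockeruser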
Now that we've created this new user, we can switch to it in the Dockerfile using the USER command:
USER dockeruser
RUN whoami
# Outputs dockeruser
Gotchas when writing to WORKDIR
By default, WORKDIR will create directories as the root user if the given directory does not exist. This is problematic if the commands you plan to run as dockeruser write files to the current WORKDIR. To mitigate any permission issues, you can run chown directly after WORKDIR:
WORKDIR /opt/dockeruser
RUN chown dockeruser:dockeruser $(pwd)
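If you want to confirm the ownership change took effect, a quick build-time check can help (purely illustrative, and assumes a base image with GNU coreutils):
# Prints dockeruser:dockeruser during the build
RUN stat -c '%U:%G' /opt/dockeruser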
Gotchas when running COPY
When copying the CLI source into the docker image so that the CLI can be built there, we want to make sure the copied files are owned by dockeruser. By default, the COPY command copies everything as root, even if you've added USER dockeruser beforehand. So you have to use the --chown flag:
COPY --chown=dockeruser ./package.json ./
Installing Global NPM Packages
Now that we've created a home directory at /home/dockeruser, it becomes easier to install global npm packages. If we tried installing a global npm package without a home directory that dockeruser can write to, we would run into all sorts of permission and path issues.
Set the NPM prefix, which is the path npm will use to download and save global packages:
ENV NPM_CONFIG_PREFIX=/home/dockeruser/.npm-global
Set the PATH to make sure any global binaries you may have installed can be called from subsequent RUN commands:
ENV PATH=$PATH:/home/dockeruser/.npm-global/bin
Global npm installations should now work as expected:
RUN npm install --global pnpm
Running the Docker Image
When running the CLI from the docker image, you can create a directory for yourself like /opt/pwd and then set the entrypoint to the CLI:
WORKDIR /opt/pwd
ENTRYPOINT ["my-cli"]
ENTRYPOINT is used to make the docker image behave more like an executable. The exec (JSON array) form shown above comes with advantages over the shell form, such as proper handling of unix signals between the CLI and the host.
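To make the distinction concrete, here is a sketch of the two forms; only the exec form makes the CLI PID 1 so it receives signals directly:
# Exec form: my-cli runs as PID 1 and receives signals from docker stop / Ctrl-C
ENTRYPOINT ["my-cli"]
# Shell form (avoid): docker wraps the command in /bin/sh -c, the shell becomes
# PID 1, and signals may never be forwarded to my-cli
# ENTRYPOINT my-cli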
One random gotcha that I've wasted hours on: the command my-cli in the ENTRYPOINT expression cannot be surrounded by single quotes, otherwise docker will interpret the entire array expression as a single shell command and you will get an error like:
['my-cli']: command not found
It is things like this that make me question my career choices.
When running the docker image, you can mount the current local directory to /opt/pwd, so that if your CLI outputs any files, they are written to the local current working directory instead of inside the docker container.
docker run -v "$PWD":/opt/pwd my-cli "$@"
The "$@" expression ensures all arguments of the script are forwarded to the command you defined in ENTRYPOINT. You can put this docker run command in a file called my-cli, chmod it to be executable, and the user running the script doesn't even have to know that they are running a docker container.
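For example, assuming the script above is saved as my-cli (the subcommand and flag here are hypothetical):
chmod +x my-cli
./my-cli generate --out ./build
# Any files the CLI writes land in your real working directory, owned by you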
Summary
Putting it all together, your Dockerfile may look like this:
# Debian-based image so groupadd, useradd, and npm are all available
FROM node:lts
ARG UID
ARG GID
RUN groupadd --force --gid $GID dockeruser
RUN useradd --create-home --home-dir /home/dockeruser --shell /bin/bash --uid $UID --gid $GID dockeruser
WORKDIR /opt/dockeruser
RUN chown dockeruser:dockeruser $(pwd)
USER dockeruser
COPY --chown=dockeruser package*.json ./
COPY --chown=dockeruser tsconfig.json .
COPY --chown=dockeruser src/ src/
RUN npm ci
RUN npm run build
ENV NPM_CONFIG_PREFIX=/home/dockeruser/.npm-global
ENV PATH=$PATH:/home/dockeruser/.npm-global/bin
RUN npm link
WORKDIR /opt/pwd
ENTRYPOINT ["my-cli"]
And your dockerized CLI may look like this:
#!/usr/bin/env bash
set -eu
docker build -t my-cli --build-arg UID=$(id -u) --build-arg GID=$(id -g) .
docker run --rm --network host -v "$PWD":/opt/pwd my-cli "$@"
Working with docker is painful and the only way I have found to master it is to accumulate battle scars and document the lessons for my future self.