
Mastering Docker: Running Containers as Non-Root Users


Many Docker images are designed so that their containers run as the root user by default. Root access is convenient when building images that install packages and configure the OS, but running as root is one of the worst security practices in production environments.

The Center for Internet Security and OWASP are among the many organizations calling for developers to run their applications as non-root users inside containers. Despite the critical security need, doing so can be fraught with challenges, particularly when supporting host machines with varying operating systems and configurations.

In this post I’ll show you how I balance these tradeoffs.

Challenges of Running as a Non-Root User

  • User/Group Mapping: One of the primary challenges is ensuring that the user and group IDs (UIDs and GIDs) inside the container match those on the host machine. This mapping is crucial for maintaining file permissions and avoiding access issues. Different operating systems can have different user and group configurations, making this mapping non-trivial.
  • Host-mounted Directories: On Linux hosts, bind mounts keep the host’s numeric ownership, and any missing host paths Docker creates for you show up inside the container owned by root. When that ownership doesn’t match the container’s non-root user, the directories used for inputs or outputs are inaccessible to the running application, and a container that only ever runs as non-root has no way to fix those permissions at runtime (a quick demonstration follows this list). Docker Desktop’s file sharing on macOS maps ownership for you, so Mac users largely avoid this problem.
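To make the second point concrete, here’s a quick check you can run on a typical Linux host with the default Docker engine; the alpine image, the 1000:1000 user, and the directory name are just for illustration.

# Host UID/GID -- often 1000:1000 on a single-user Linux machine
id -u; id -g

# Mounting a host path that doesn't exist yet makes the daemon create it as root,
# so a non-root container user can't write to it:
docker run --rm --user 1000:1000 -v "$(pwd)/new-dir:/data" alpine \
  sh -c 'ls -ldn /data; touch /data/probe'
# typical output: drwxr-xr-x ... 0 0 /data
#                 touch: /data/probe: Permission denied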

Solution

The best way to solve this is to map the host user and group onto the user created in the container, then chown all relevant directories to those user and group IDs.

The approach I’ve come to like is to build the entire image as root and perform the chown only as a final step inside entrypoint.sh when the container starts. Gosu is a lightweight su/sudo alternative that execs the target command rather than forking, so the final Dockerfile command runs as our non-root user while still receiving signals normally.

Step by Step

When adding a non-root user to my containers, these are the steps I follow:

  • Define UID and GID in .bashrc or a .env file, then pass those values through Docker Compose (sketched just after this list). Use Dockerfile ENV instructions to make those values available across stages that build on one another. You can also use ARG, but ARG values are scoped to the stage in which they’re declared.
  • Create the non-root user with those UID and GID values.
  • Run an entrypoint.sh which does the following:
    • Chown a list of static directories which should be owned by the non-root user
    • Chown any dynamically-named directories passed in via an environment variable from Docker Compose. I recommend dynamic folder names as a way to avoid collisions when developing; I often add a “test” or “development” suffix to folders so they stay isolated, and sometimes dynamic names let you create folders conditionally. The docker-compose.yml is the earliest place defaults can be defined when environment variables aren’t present, and since those defaults also appear on the mounts themselves, it makes sense to keep the definitions together.
    • Run the container application as our non-root user.
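Here’s a minimal sketch of that first step. The export lines assume a bash shell on the host, and the .env.development name is just an illustration of the .env.${APP_ENV} pattern used below.

# ~/.bashrc on the host -- bash defines UID but doesn't export it,
# and doesn't define GID at all, so make both visible to docker compose:
export UID
export GID=$(id -g)

# .env.development (hypothetical name; loaded into the container via env_file)
UID=1000
GID=1000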

The code for this is straightforward.

docker-compose.yml
app:
  environment:
    - GOSU_USER=${UID:-1000}:${GID:-1000}
  env_file:
    - .env.${APP_ENV} # Contains UID and GID
  ...

 

Dockerfile
ENV UID=1000
ENV GID=1000

# No use of USER or chown inside the Dockerfile. Everything is built as root.
RUN apt-get update && apt-get install -y --no-install-recommends \
    gosu
# ... other installs

RUN groupadd -g ${GID} node && \
    useradd -u ${UID} -g node -m node
# ...

 

entrypoint.sh
#!/bin/bash
# Ensure the script exits if any command fails
set -e

change_ownership() {
  local dir=$1
  if [ -d "$dir" ]; then
    chown -R node:node "$dir" || echo "Failed to change ownership of $dir"
  fi
}

dirs=(
  "/app"
  # anything else
)

# Loop through the list and change ownership if the directory exists
for dir in "${dirs[@]}"; do
  change_ownership "$dir"
done

# If the GOSU_CHOWN environment variable is set, recursively chown all listed directories
# to match the user:group set in the GOSU_USER environment variable.
if [ -n "$GOSU_CHOWN" ]; then
  for DIR in $GOSU_CHOWN; do
    chown -R "$GOSU_USER" "$DIR"
  done
fi

# If GOSU_USER is set to something other than 0:0 (root:root),
# become that user:group and exec the command passed in as arguments
if [ "$GOSU_USER" != "0:0" ]; then
  exec gosu "$GOSU_USER" "$@"
fi

# If GOSU_USER was 0:0, exec the command without gosu (we're already root)
exec "$@"

These GOSU-prefixed environment variables are a flexible way to change permissions on dynamically named directories. Declare GOSU_CHOWN in docker-compose.yml as a space-separated list of any dynamically named directories that need to be chowned as the application starts.
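As a sketch, a service that passes both variables might look like the following; the output directory names and the APP_ENV default are purely illustrative.

app:
  environment:
    - GOSU_USER=${UID:-1000}:${GID:-1000}
    - GOSU_CHOWN=/app/output-${APP_ENV:-development} /app/tmp
  volumes:
    - ./output-${APP_ENV:-development}:/app/output-${APP_ENV:-development}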

Obvious Approaches Aren’t Ideal

Non-root permissions can also be set by dropping to a non-root user with the USER instruction and performing build steps as non-root. I don’t recommend this approach: interleaving root and non-root steps makes it difficult to keep track of the state of directory permissions over time.
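To illustrate the kind of interleaving I mean (the packages and paths here are hypothetical):

USER root
RUN apt-get update && apt-get install -y --no-install-recommends curl
USER node
RUN mkdir -p /home/node/app/cache   # owned by node
USER root
RUN mkdir -p /app/output            # owned by root again -- easy to lose track of who owns what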

The root user can also perform COPY commands at build time, changing the permissions as files are copied from the host to the container. This approach works well for directories you aren’t planning to host-mount.
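COPY accepts a --chown flag for exactly this, so ownership is set as part of the copy rather than fixed up in a separate layer:

COPY --chown=node:node ./src /app/src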

The main problem is that host-mounted directories aren’t mounted until a container runs, so none of these build-time permission changes apply to them. The only way to set permissions on host-mounted directories is with a runtime script.

For containers where it’s not feasible to create a non-root user, Docker also supports user namespaces, which remap container user IDs so that breakout scenarios are more difficult. This approach is essential in many cases, but it’s not as good as a fully separate user who has no access to OS internals beyond what the application needs.
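Enabling the remapping is a daemon-level setting. As a minimal sketch, setting "userns-remap" to "default" in /etc/docker/daemon.json (and restarting the daemon) has Docker create a dockremap user and map container IDs into its subordinate ID range:

{
  "userns-remap": "default"
}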

Consider Setting a Restrictive Umask

The umask (user mask, or file mode creation mask) determines the default permissions for new files. Most distros use umask 022, which makes new files readable by group and other (644, rw-r--r--) and new directories readable and traversable by group and other (755). This might be overly permissive. Instead, consider setting umask 077 or 027: the former restricts access to the owner only, and the latter restricts the group to read-only access and removes access for other entirely. Applications rarely support multi-user scenarios at the OS level, so this may be a good practice for your organization. Note that a umask set in a Dockerfile RUN step only applies within that step, so set it where the process actually starts, such as at the top of entrypoint.sh. The main downside is that it might conflict with the umask of your host user and the permissioning needs of your developer tooling. Keep in mind that if you’re using group permissions, it’s often useful to set the SGID bit on shared directories so new files inherit the group ownership as well.

# At the top of entrypoint.sh, before any files are created
umask 0077
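And if you do rely on group permissions, the SGID bit mentioned above looks like this; the /app/shared path is hypothetical.

chgrp node /app/shared && chmod g+s /app/shared   # new files inside inherit the node group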

Conclusion

We now have images that can use the full power and flexibility of the root user at build time, with fully locked-down non-root behavior at runtime. In many ways, this is classical system administration. Sadly, the insecure Docker defaults, poor affordances, and mysterious concepts lead many developers to create tangled Docker hairballs. With this approach it’s still possible to use Docker as a mostly transparent infrastructure abstraction, without wondering why your application has read/write access to every file in the OS it’s running on, and without the frustrating battle of diving deep into Docker just to share a directory!