This post is intended to collect some thoughts about Docker volumes. It's going to cover topics that I've encountered while writing and editing, including challenges when working with bind mounts in development, issues to think about with permissions, and cool things you can do with named volumes as you get your setup ready for deployment. This post is not meant to be comprehensive; instead, it's meant to be a focused set of tips for getting the most out of your volumes.
Probably the most common ambiguity I've encountered while working with Docker volumes is in the use of the term itself. According to the Docker docs, you have two main choices when it comes to mounting your data: volumes and bind mounts. Strictly speaking, then, when we use the term volume, we should be referring only to the first of these two mount types. Maybe this is a holdover from my previous career, but when I see the term 'volume' used loosely, I have a bit of an Inigo Montoya moment:
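To make the distinction concrete, here is how the two mount types look side by side in Compose's long syntax (available in Compose file format 3.2 and up); the service, image, volume, and path names below are arbitrary examples, not part of any project discussed here:

```yaml
version: "3.2"
services:
  app:
    image: node:10
    volumes:
      # Named volume: created and managed by Docker
      - type: volume
        source: app-data
        target: /app/data
      # Bind mount: maps a directory on the host into the container
      - type: bind
        source: ./
        target: /app

volumes:
  app-data:
```

The long syntax has the advantage of stating the mount type explicitly, which sidesteps the terminology confusion entirely.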
This distinction is especially important for people who may be new to Docker. For example, it's a common practice to have something like this as part of an `app` service definition in a `docker-compose.yml` file:

```yaml
#App Service
app:
  ...
  volumes:
    - ./:/app
  ...
```

Sometimes I see this introduced as a 'volume'. Docker Compose does allow a `volumes` configuration option when defining services, so the reference could be to that, but even then it's still necessary to specify which type of mount the config option defines. This particular mount is a bind mount, which mounts the contents of a specified directory on the host to a specified location on the Docker container.
For development setups, bind mounts can prove handy, since changes you make to your code on the host are immediately reflected in the container, and vice versa. If your project is in active development, this can be quite useful. It can also lead to surprises, though. Say, for example, that you install your project's dependencies from scratch on the image with a Dockerfile like this:

```dockerfile
FROM node:10
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "app.js" ]
```
In addition, you have an `app` service defined in your `docker-compose.yml` file that looks like this:

```yaml
#App Service
app:
  ...
  volumes:
    - ./:/app
  ...
```
Well, the care that you extended in creating your `node_modules` directory from scratch on the container will be for naught, since whatever is included in your `node_modules` directory on the host will now be mounted over it on the container. Thanks for that, bind mount!
There are ways around this, of course, but it adds a layer of complexity that you should be aware of if you plan to use bind mounts in development. Any bind mount you define in your `docker-compose.yml` file (or files) will mount over whatever you created at that location in your Dockerfile.
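One common workaround, sketched here under the assumption that your dependencies live in `/app/node_modules` as in the Dockerfile above, is to add an anonymous volume at that path alongside the bind mount:

```yaml
#App Service
app:
  ...
  volumes:
    - ./:/app
    # Anonymous volume: takes precedence over the bind mount at this
    # path, so the node_modules installed on the image stay visible
    - /app/node_modules
```

Because the more specific mount wins at `/app/node_modules`, the (possibly empty or platform-mismatched) host copy no longer hides what you installed on the image.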
Docker's default behavior is to mount volumes as root. This is worth keeping in mind as you think about things like running containers as a non-root user. For example, the official Node Docker image recommends running the container as a non-root user, which can pose challenges when working with volumes. It is important, for example, to set permissions in your Dockerfile that establish ownership any time there's an installation or copy. So the Dockerfile we looked at above might become something like the following if you want to use the Node image's available `node` user:
```dockerfile
FROM node:10
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8080
CMD [ "node", "app.js" ]
```
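To sanity-check the result, you can list the ownership of the app directory in a container built from the image above (the image tag here is a hypothetical placeholder):

```shell
# Build the image and check the ownership of the app directory;
# the owner and group columns should both read "node"
docker build -t my-node-app .
docker run --rm my-node-app ls -ld /home/node/app
```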
Another potential pitfall when working with bind mounts is that Docker will create the directory you specify in the `docker-compose.yml` file on the host if it doesn't already exist, as (you guessed it) root. If, for example, you wanted to run an installation process on a container and then mount the resulting code to your host, you could end up with a messy permissions situation if you are working as a non-root user. If you are looking to take advantage of a containerized workflow, consider using containers to mount your application code to the host instead. The first step of this article, for example, describes how to use Docker's `composer` image to mount the necessary dependencies for a Laravel project to the host.
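As a sketch of that pattern, run from your project directory, you might use something like the following; passing your own UID and GID is an assumption here, meant to keep the generated files owned by your host user rather than root:

```shell
# Run Composer in a throwaway container, bind-mounting the current
# host directory so the installed dependencies land on the host
docker run --rm -it \
  -v "$(pwd)":/app \
  --user "$(id -u)":"$(id -g)" \
  composer install
```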
If you are working on a deployment setup, then you will want to think about using named volumes instead of bind mounts. According to Docker, named volumes are the preferred mechanism for persisting and sharing data between containers. Things to keep in mind:
- Volumes are empty on start, as will be any folders they are mounted to on the container.
- The contents of a volume are the result of service actions at runtime.
- The default driver for a created volume is the `local` driver, which on Linux systems accepts options similar to those of the `mount` command.

For more on the second and third points above, see Docker's documentation of the `docker volume create` command.
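For instance, Docker's documentation for `docker volume create` shows mount-style options being passed to the `local` driver; the example below follows that pattern (the volume name is arbitrary):

```shell
# Create a named volume backed by tmpfs, passing mount-style options
docker volume create --driver local \
  --opt type=tmpfs \
  --opt device=tmpfs \
  --opt o=size=100m,uid=1000 \
  temp-data
```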
This means that if you are looking for an efficient way to share data between containers in a deployment setup, named volumes are the way to go. You can use mount options to populate them with code from your host. For example, if you had a project directory with your application code, and you wanted to share that code between two containers, you could create a named volume and use it in both service definitions:
```yaml
#App Service
app:
  ...
  volumes:
    - app-code:/app

webserver:
  ...
  volumes:
    - app-code:/var/www/html
...
volumes:
  app-code:
    driver: local
    driver_opts:
      type: none
      device: /home/your_user/your_project/
      o: bind
...
```
The named volume, `app-code`, would be populated at runtime with the contents of the project directory (here `your_project`), which would then be mounted to both containers. This avoids some of the ambiguity of using bind mounts, though you would still need to think about permissions, since these volumes would be mounted as root.
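If you want to confirm how such a volume ends up configured, `docker volume inspect` will show its driver, options, and mountpoint. The exact volume name below assumes Docker Compose's usual behavior of prefixing volume names with the project name:

```shell
# List volumes, then inspect the one Compose created for this project
docker volume ls
docker volume inspect your_project_app-code
```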
This post is an attempt to summarize some of the most relevant things I've run into when working with Docker volumes and running containers as a non-root user. Docker volumes are an extensive topic, and I hope to have an opportunity to return to them in the future.