Jenkins, Docker, Proxies, and Compose

Previously on…

If you’re just joining us, start with this introductory post about our move to continuous delivery on League and how we found the tech stack that best solved our problems.

In our first tutorial, we started going through how to put Jenkins in a Docker container. In the follow-up, we learned how to use a Docker data volume container to create a persistence layer. We created a container that would preserve our Jenkins home directory so that plugins, jobs, and other Jenkins core data would persist between image rebuilds. We discussed the differences between using a data volume container versus just a volume mount from the host. Finally, we learned how to move the Jenkins war file out of the Jenkins home directory so that it wasn’t persisted.

At the end of that post, we had a perfectly functional Jenkins image that could save data. I finished with a few reasons why it wasn’t ideal, and this post will address one in particular: the lack of a handy web proxy in front of Jenkins. With that in place, though, we’ll be running three containers to support our Jenkins environment, and we’ll need a better way to manage them. So this blog will be a two-parter, covering how to add a proxy container and how to use Compose, a handy Docker utility for managing multi-container applications.

At the end of this post you'll have a full-stack Jenkins master server. We're not quite at what I personally would consider production readiness, but the remaining bits (creating your own Jenkins core image on your preferred OS) are more a matter of preference than a technical requirement.

Part 1: Proxy Containers and You

We use NGINX as our proxy at Riot because it cleanly enforces things like redirects to HTTPS and masks Jenkins listening on port 8080 with a web server that listens on port 80. I won’t be covering setting up NGINX for use with SSL and HTTPS (ample good documentation can easily be found on the internet); instead, I’ll go over how to get NGINX up and running in a simple proxy container and properly proxying a Jenkins server.

Here’s what we’ll cover in this section:

  • Creating a simple NGINX container.
  • Learning how to add files from our local folder into our image builds, like the NGINX configurations we want to use.
  • Using Docker container links to allow for easy networking between NGINX and Jenkins.
  • Configuring NGINX to proxy to Jenkins.

You Get an OS and You Get an OS!

At Riot, we’re not regular Debian users; however, the Cloudbees Jenkins image uses Debian as its default OS, inherited from the Java 8 image. One of the powerful advantages of Docker is that the OS can be whatever you want, because the host doesn’t care! This is also a useful demonstration of “mixed mode” containers: if your application spans multiple containers, they don’t all need to run the same OS. This has value when specific processes have better library or module support in specific Linux distributions. Whether running an app with a Debian/CentOS/Ubuntu spread is a good idea I leave up to you; this is just a demonstration of capability.

You’re free to modify this image into Ubuntu, Debian, or whatever flavor you want. I’m going to use CentOS 7, an OS more familiar to me. In part 4 of this blog I’ll talk more specifically about changing the default Jenkins image to a different OS and removing its dependencies on external images. Keep in mind that if you do change OS flavors, you will need to alter many commands and configurations to match how NGINX is packaged and run in that OS environment.

Creating the NGINX Dockerfile

Let’s get started. In your project root folder, make a new directory called “jenkins-nginx” to store yet another Dockerfile. You should now have three directories (if you’ve been following all the posts):

  • jenkins-master
  • jenkins-data
  • jenkins-nginx

Inside the jenkins-nginx directory, open a new file called “Dockerfile” for edit in any editor of choice. Then, do the following:

  1. Set the OS base image you want to use:
    • FROM centos:centos7
      MAINTAINER yourname
  2. Use Yum to install NGINX:
    • RUN yum -y update; yum clean all
      RUN yum -y install http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm; yum -y makecache
      RUN yum -y install nginx-1.8.0

      Note that we lock the NGINX version to 1.8.0. This is just a best practice: always pin your versions so that a rebuild of your image doesn’t silently move you to a version you haven’t tested.

  3. Cleanup some default NGINX configuration files we don’t need:
    • RUN rm /etc/nginx/conf.d/default.conf
      RUN rm /etc/nginx/conf.d/example_ssl.conf
  4. Go ahead and add our configuration files (we still need to make these):
    • COPY conf/jenkins.conf /etc/nginx/conf.d/jenkins.conf
      COPY conf/nginx.conf /etc/nginx/nginx.conf
    • This is the first time we’ve used the “COPY” command. There’s also the “ADD” command, which is a close cousin. For an exhaustive look at the difference between the commands, I recommend these two links:
    • For our purpose, COPY is the best choice here. As the articles suggest, we’re copying individual files and don’t need the extra features of ADD (tarball extraction, URL-based retrieval, etc.). As you might predict, we’re going to have some updates to the default nginx.conf file and a specific site configuration for Jenkins.
  5. We want NGINX to listen on Port 80 so let’s make sure that port is exposed:
    • EXPOSE 80
  6. Finish up by making sure NGINX is started:
    • CMD ["nginx"]
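
Putting those steps together, the whole jenkins-nginx/Dockerfile should look roughly like this (a consolidated sketch of the commands above):

FROM centos:centos7
MAINTAINER yourname

RUN yum -y update; yum clean all
RUN yum -y install http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm; yum -y makecache
RUN yum -y install nginx-1.8.0

RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/conf.d/example_ssl.conf

COPY conf/jenkins.conf /etc/nginx/conf.d/jenkins.conf
COPY conf/nginx.conf /etc/nginx/nginx.conf

EXPOSE 80

CMD ["nginx"]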

Save the file—but don’t build it yet! Because we have those two COPY commands in there, we need to actually create the files we’re copying or the build will fail when it can’t find them.

Creating the NGINX Configuration

I’m going to provide the entire nginx.conf file here as an example, then go over the specific changes from the default nginx.conf file.

daemon off;
user  nginx;
worker_processes  2;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
    use epoll;
    accept_mutex off;
}

http {
    include       /etc/nginx/mime.types;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    client_max_body_size 300m;
    client_body_buffer_size 128k;

    gzip  on;
    gzip_http_version 1.0;
    gzip_comp_level 6;
    gzip_min_length 0;
    gzip_buffers 16 8k;
    gzip_proxied any;
    gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
    gzip_disable "MSIE [1-6]\.";
    gzip_vary on;

    include /etc/nginx/conf.d/*.conf;
}

Let’s go over the changes from the default:

  1. To make NGINX not run as a daemon:
    • daemon off;

      We do this because, by default, calling “nginx” at the command line starts NGINX as a background daemon. The foreground process then exits with code 0, which makes Docker think the main process has stopped, so it shuts down the container. You’ll find this happens a lot with applications not designed to run in containers. Thankfully for NGINX, this simple change solves the problem without a complex workaround.

  2. Upping the NGINX worker count to 2:
    • worker_processes 2;

      This is something I do with every NGINX I set up; you can leave it at 1 if you want. It’s really a “tune as you see fit” option, and NGINX tuning is a topic for a post in its own right, so I can’t tell you what’s right for you. Very roughly speaking, this is how many individual NGINX worker processes you run, and the number of CPUs you’ll allocate is a good guide. Hordes of NGINX specialists will say it’s more complicated than that, and inside a Docker container you could certainly debate what to do here.

  3. Event tuning:
    • use epoll;
      accept_mutex off;

      Turning on epoll selects a more efficient connection-handling method on Linux. We turn off accept_mutex for speed, because we don’t mind the wasted resources at low connection counts.

  4. Setting the proxy headers:
    • proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

      This is the second setting (after turning daemon off) that’s a must-have for Jenkins proxying. It sets the headers so that Jenkins can interpret requests properly, which helps eliminate some warnings about improperly set headers.

  5. Client sizes:
    • client_max_body_size 300m;
      client_body_buffer_size 128k;

      You may or may not need this. Admittedly, 300 MB is a large body size. However, we have users who upload files to our Jenkins server—some of which are just HPI plugins, while others are much larger files. We set this to help them out.

  6. GZIP on:
    • gzip on;
      gzip_http_version 1.0;
      gzip_comp_level 6;
      gzip_min_length 0;
      gzip_buffers 16 8k;
      gzip_proxied any;
      gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
      gzip_disable "MSIE [1-6]\.";
      gzip_vary on;

      Here, we turn on gzip compression for speed.

And that’s it! Save this file, and make sure it’s in conf/nginx.conf where the Dockerfile expects it. The next step is to add the specific site configuration for Jenkins.
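
For reference, here’s roughly how the jenkins-nginx folder should be laid out once both configuration files exist (this layout is implied by the COPY commands in the Dockerfile):

jenkins-nginx/
├── Dockerfile
└── conf/
    ├── nginx.conf
    └── jenkins.conf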

The Jenkins configuration for NGINX

As in the previous section, I’ll provide the entire conf file here and then walk through the settings that matter. You can find most of what you need at the Jenkins documentation site; I then tweaked the file because I found some parts unclear. Here’s mine:

server {
    listen       80;
    server_name  "";

    access_log off;

    location / {
        proxy_pass         http://jenkins-master:8080;

        proxy_set_header   Host             $host;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto http;
        proxy_max_temp_file_size 0;

        proxy_connect_timeout      150;
        proxy_send_timeout         100;
        proxy_read_timeout         100;

        proxy_buffer_size          8k;
        proxy_buffers              4 32k;
        proxy_busy_buffers_size    64k;
        proxy_temp_file_write_size 64k;	

    }

}

There’s only one setting that really matters to what we’re doing here, and that’s the proxy pass setting:

  • proxy_pass   http://jenkins-master:8080;

This expects a domain name of “jenkins-master” to exist, which will come from the magic of container linking (I’ll address this below). If you weren’t using container linking, this would have to reference the IP/hostname of wherever your Jenkins container was running.

Interestingly enough, you can’t set this to “localhost.” That’s because each Docker container is its own “localhost,” so NGINX would think you’re referring to the NGINX container itself, which isn’t running Jenkins on port 8080. Without container links, you’d have to point this at the IP address of your Dockerhost (which, right now, is the desktop/laptop where you’re working). While you know that information, try to imagine the challenge of figuring it out with a farm of Dockerhosts where your Jenkins container could get deployed to any of them. You’d have to write some automation to grab the IP and then edit the conf file. It can be done, but it’s a hassle. Container linking makes this much easier for us!

Build the NGINX Image and link it to the Jenkins one

Now that we’ve created the NGINX and Jenkins configuration files, our NGINX image should build. Make sure you’re at the top level directory (above all your docker image folders).

  • docker build -t myjenkinsnginx jenkins-nginx/.

Once it’s built, we can start the NGINX container and link it to the jenkins-master container so the proxy setup works. First, let’s make sure your jenkins-data container exists and your jenkins-master container is running.

  1. docker run --name=jenkins-data myjenkinsdata
    • If this returns an error that the container already exists, no worries. That’s a good thing, because it means we won’t overwrite the data you already have in there!
  2. docker stop jenkins-master
  3. docker rm jenkins-master
  4. docker run -p 8080:8080 -p 50000:50000 --name=jenkins-master --volumes-from=jenkins-data -d myjenkins

Now we can finally start the NGINX container and give it a link to jenkins-master:

  • docker run -p 80:80 --name=jenkins-nginx --link jenkins-master:jenkins-master -d myjenkinsnginx

Note the “--link” option. You can find great documentation on how this works on Docker’s website. The short version is that it makes the hostname “jenkins-master” resolvable inside the NGINX container, pointed at the internal Docker network IP of the jenkins-master container.
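
If you’re curious how that works under the hood, legacy links add a host entry to the linking container, so a quick (optional) check like the following should show a “jenkins-master” entry in the NGINX container’s hosts file:

docker exec jenkins-nginx cat /etc/hosts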

Note that this means the NGINX container must start /after/ the jenkins-master one, which in turn means that if you shut down and restart the jenkins-master container, you must restart the NGINX container as well so the link is re-established.

Testing that everything works is simple. Just point your browser to your docker-machine IP address and everything should work as normal!
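
If you prefer the command line, a quick check from your terminal works too (this assumes your Docker Toolbox machine is named “default”; substitute your machine’s name if it differs):

curl -I http://$(docker-machine ip default)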

If it doesn’t work, something may be blocking port 80 on your machine (this can happen especially on OSX). Make sure your firewalls are turned off, or at least accepting traffic on port 80. If for some reason you can’t clear port 80, stop and remove the jenkins-nginx container and re-run it with “-p 8000:80” instead, mapping host port 8000 to the container’s internal port 80, as shown below. Then go to http://yourdockermachineip:8000 and see if that works instead.
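
For example, the fallback looks like this (the same run command as before, just with the alternate port mapping):

docker stop jenkins-nginx
docker rm jenkins-nginx
docker run -p 8000:80 --name=jenkins-nginx --link jenkins-master:jenkins-master -d myjenkinsnginx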

Jenkins Image Cleanup

Now that we have NGINX listening on port 80, we don’t need to publish port 8080 from the Jenkins container to the host. Let’s remove that mapping by dropping the port option when we start the container. We’ll do one more shutdown and restart, remembering that we have to shut down the NGINX container too, because it’s linked! It needs to re-link every time jenkins-master restarts.

docker stop jenkins-nginx
docker stop jenkins-master
docker rm jenkins-nginx
docker rm jenkins-master
docker run -p 50000:50000 --name=jenkins-master --volumes-from=jenkins-data -d myjenkins
docker run -p 80:80 --name=jenkins-nginx --link jenkins-master:jenkins-master -d myjenkinsnginx

Refresh your browser on http://yourdockermachineiphere.

Nice and clean, and now errant users can’t even reach Jenkins directly on port 8080. Instead, they must go through your NGINX proxy to get to it.

We’ve learned how to set up an NGINX proxy and how to use Docker container linking to wire two containers together, which would otherwise be somewhat awkward to express in the NGINX configuration. We’ve also learned that using a different base OS in one of our containers has no impact on our multi-container app.

This is a good breaking point. As always, everything is updated online in the GitHub tutorial. This session can be found here: https://github.com/maxfields2000/dockerjenkins_tutorial/tree/master/tutorial_04. You’ll note the makefile has been updated again to account for the NGINX container and to preserve the proper start ordering.

Docker Compose and Jenkins

We're now running the ideal three-container setup, with an NGINX proxy container, the Jenkins app container, and a data volume container to house all of our Jenkins data. We've also discovered that managing three containers with a required startup order and dependencies (thanks to data volumes and container linking) is becoming a bit of a chore. So in this post we’ll tackle adding a handy tool called Compose to the mix.

This section covers the following subject:

  • Using Compose to manage a multi-container application

What is Compose

Compose started life as a tool called Fig. Docker defines it as “A tool designed for running complex applications with Docker.” You can find its full documentation here: https://docs.docker.com/compose/. Compose will handle building our images and will keep track of what needs to be stopped and started when the application is rerun.

Let’s say I want to take our three-container app, rebuild the Jenkins container, and rerun the app—perhaps to upgrade the Jenkins version. Here’s the list of commands I’d have to run:

docker stop jenkins-nginx
docker stop jenkins-master
docker rm jenkins-nginx
docker rm jenkins-master
docker build -t myjenkins jenkins-master/.
docker run --name=jenkins-master --volumes-from=jenkins-data -d myjenkins
docker run -p 80:80 --name=jenkins-nginx --link jenkins-master:jenkins-master -d myjenkinsnginx

With a properly configured Compose, that becomes:

docker-compose stop
docker-compose build
docker-compose up -d

This is similar in behavior to the simple little makefile I provide with most of these tutorials. The trade-off for using Compose is that you have to maintain yet another configuration file along with your Dockerfiles.

This section is provided on its own because setting up and using Docker-Compose is a personal choice. I recommend it as a method of self-documenting startup dependencies and relationships that fits into the overall Docker ecosystem. If, however, you have a strong contingent of Windows-only developers, Compose is not right for you (see below).

Pre-Requirements

  • If you’re using Docker Toolbox on OSX, Compose is part of the default installation
  • If you aren’t using Docker Toolbox or are running on Linux, install Compose by following the directions here: https://docs.docker.com/compose/install/ 
  • OSX or Linux OS

Please note, Compose does not yet work on Windows with the Windows version of the Docker client. If you’re using Windows and Docker Toolbox, this is the first time things really don’t work the same way. My recommendation is to stick with using a makefile for now (I will always include one in these tutorials for this purpose). The Compose dev team is working on a compatible Windows version, but as of version 1.4 it’s not ready yet. Your mileage may vary.
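
Once Compose is installed (on OSX or Linux), a quick sanity check from your terminal confirms it’s available:

docker-compose --version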

Step 1: Setting Up Your Compose Config File

Compose uses a YAML configuration file, which makes it pretty straightforward to read and understand. We need to add an entry for every container we want Compose to manage and give it the specifics.

  • In your project root directory, create a new file called: docker-compose.yml

You can use another file name, but by default Compose will look for this name.
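
If you do pick a different name, you’ll have to point Compose at it with the “-f” flag on every command; for example (with a hypothetical file name):

docker-compose -f my-compose.yml build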

Step 2: Jenkins Data Container

Edit your docker-compose.yml file in your editor of choice and add the following (it's YAML, so preserve the indentation!):

jenkinsdata:
 build: jenkins-data

What we did here was create an entry for a container and call it “jenkinsdata” (Compose doesn’t support special characters like “-” in these names). Then we added a “build” directive and gave it the folder within our project folder that holds the Dockerfile for that container, in this case “jenkins-data.”

To see how this works, do the following:

  1. Save your file
  2. At your command line type: docker-compose build

Docker-Compose should find your jenkins-data folder and build the Dockerfile there; it’ll look just like the results of “docker build jenkins-data/”. You’ll notice, however, that the name of your image is different. Compose names images using the convention “projectname_composecontainername”, and by default the project name is the parent folder’s name. This may be inconvenient.

This naming standard is pretty important. It will be what your containers are named on a “production” Dockerhost. Make sure your parent folder is named something reasonable, or use the “-p” override to get consistent image names, as shown below. You can even use “-p” to differentiate between prod and dev environments—I leave that for you to explore!
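
For example, passing an explicit project name gives you predictable image names no matter what your parent folder is called (the project name “jenkins” here is just an illustration):

docker-compose -p jenkins build

With that override, the jenkinsdata entry builds as “jenkins_jenkinsdata” rather than “parentfoldername_jenkinsdata”.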

So let’s get to adding the remaining images!

Step 3: Jenkins Master Image

Resume editing your docker-compose.yml file and add the following:

jenkinsmaster:
  build: jenkins-master
  volumes_from:
    - jenkinsdata
  ports:
    - "50000:50000"

Like the Jenkins data entry, we have an entry that names our container as well as a build directive indicating which directory the Dockerfile lives in. We’ve also added a “volumes_from” clause that is equivalent to the “--volumes-from=” command line argument for Docker run. Note how it uses the Compose name for the container, not the actual container name. This is a handy feature of Compose that lets us reference the names we give things for better readability; Compose is smart enough to put it all together when it builds the containers.

The other advantage is that Compose knows implicitly that jenkinsmaster now depends on jenkinsdata, and it starts them in the correct order. You can list them in whatever order you want in the Compose file. This is a nice advantage over having to memorize that order, or use a makefile/shell script that preserves it.

Lastly, we have the “ports” directive to handle the port mappings we want. In this case we want to make sure the Jenkins master container maps port 50000 for the JNLP slave connections.

Step 4: NGINX Image

The final piece. Add the following:

jenkinsnginx:
  build: jenkins-nginx
  ports:
     - "80:80"
  links:
     - jenkinsmaster:jenkins-master

Like the other two entries, it has a name (jenkinsnginx) and a build directory. Here the “ports” directive maps port 80, just like the “-p” option for Docker run. We also have a “links” directive that behaves like the “--link” command line option. Again, note that the link goes from the Compose name (jenkinsmaster) to the domain name we want inside our container. I left this internal name as “jenkins-master” so I didn’t have to update the NGINX configuration file that identifies the proxy host at all. I leave it as an exercise for the reader to make the naming consistent if it’s important enough to fix.

Step 5: Putting it all together

For reference the entire docker-compose.yml file should look like this:

jenkinsdata:
 build: jenkins-data
jenkinsmaster:
 build: jenkins-master
 volumes_from:
  - jenkinsdata
 ports:
  - "50000:50000"
jenkinsnginx:
 build: jenkins-nginx
 ports:
  - "80:80"
 links:
  - jenkinsmaster:jenkins-master

Now we just need to build the whole thing. First let’s make sure there are no traces of the former containers used in previous posts. If you’ve already cleaned up you can skip this set of steps.

At a command line:

docker stop jenkins-nginx
docker rm jenkins-nginx
docker stop jenkins-master
docker rm jenkins-master
docker rm jenkins-data

Note: we have to lose our data container to move to the new model. That kind of sucks. In future posts I’ll talk about how to back up this data, but if you really need to keep it you can use the “docker cp” command I talked about in post #3 to back it up first.
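
As a rough sketch, backing up with “docker cp” before running the “docker rm jenkins-data” above might look like this (assuming your Jenkins home lives at /var/jenkins_home in the data container, as set up in the earlier posts):

docker cp jenkins-data:/var/jenkins_home ./jenkins_home_backup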

Now let’s build and run things with Compose!

  1. docker-compose build
  2. docker-compose up -d

That’s it! Note the “-d”, which has Docker-Compose run the containers in detached mode, just like the “-d” option for Docker run. You’ll note the output indicates the start order and names. If you want to see what's running, Docker-Compose has a handy feature for that too.

  • docker-compose ps

This gives a nicely formatted list of all the application’s containers. Better yet, it’s filtered to only the containers from your app, even if other things are running on the host. It’s also smart enough to show you the data container, which is technically a “stopped” container; normally you’d need “docker ps -a” to see that. Very handy.

Step 6: Maintenance using Compose

Compose is smart enough to know about your data volume and preserve it. Try this:

  • Create a test job in your jenkins instance (http://yourdockermachineiphere)
  • docker-compose stop
  • Make a simple edit to your Jenkins master Dockerfile, like changing your MAINTAINER name, and save it.
  • docker-compose build
  • docker-compose up -d
  • Go back to your Jenkins instance and note your test job is still there.

Docker-Compose just reports "Starting" for your data container, but it recreates your NGINX and master containers because it knows it has to. Smart tools are awesome!

Compose also comes with a simple way to cleanup everything:

docker-compose rm

Note that it even asks you for confirmation. Please be aware that this will delete your data container too (it tells you this). So what if you don’t want to clean up your data container? Easy.

docker-compose rm jenkinsmaster jenkinsnginx

If you try this, it will only remove the containers if they’re stopped first (it won’t stop them for you). Note that Compose can take a list of container names as referenced in the YAML file. This works for other Compose commands too, like “build”, “stop”, and “start”, as shown below.
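
For instance, these commands target only the named entries from the docker-compose.yml file:

docker-compose stop jenkinsnginx
docker-compose build jenkinsmaster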

Conclusions

As always, all updates discussed in this post can be found at my git repository here: https://github.com/maxfields2000/dockerjenkins_tutorial/tree/master/tutorial_05

We learned that Compose can simplify our command management for starting, stopping, and building a multi-image application—all for the low price of one more configuration file. That file comes with the benefit of being self-documenting, defining the relationships between the containers we’re running.

Compose is a great tool, but it’s clearly opinionated. I’d add to its potential list of drawbacks that it bases built image names and running container names on the parent directory you’re in, unless you always specify a consistent project name. In return, Compose gives you a handy set of subcommands like “ps” and “rm” to make life easier.

We also learned that Compose doesn’t work on Windows yet, which is the first time we’ve run into a client tool in the ecosystem that isn’t quite ready for Windows. 

Whether or not you use Compose will depend on how much you need Windows support and whether you like the Compose naming standards. I personally like the self-documenting nature of the docker-compose.yml file and the simplification of the command structure. You’ll note I still provide a makefile with a simplified set of commands, as I also like a system that doesn’t even require me to remember the names of all the containers. That is entirely a personal choice.

At this point my basic tutorials are done! The next posts will be considerably more advanced.

What’s Next

We still have three big areas to cover in more advanced topics: backups, build slaves, and total ownership of your Docker images. I’m going to explore totally owning your Jenkins images to remove dependencies on public repositories first, mainly because this can be a big deal for dependency management, and perhaps you’re not a fan of Debian-based containers and would rather use Ubuntu or CentOS. It’s also a good primer for making your own Dockerfiles from scratch, something we’ll be doing when we get to build slaves as containers. The upcoming order will be:

  1. Taking ownership of your entire Image set
  2. Taking backups of this Jenkins/Docker setup easily
  3. Docker Containers as Build Slaves

See you next time!

Posted by Maxfield F Stewart