If you have ever had to start on a new team, you know the feeling of finding yourself a bit lost amidst dozens of tutorials and all the documentation required to set up your development environment. Odds are, you may have even skipped a step or two and found yourself in a mess. Even if not, there's a good chance the guide is at least somewhat outdated, or doesn't cover your exact scenario. Either way, there's bound to be some misunderstanding.
Thankfully, there's an excellent solution for your problem. By leveraging containers to create an isolated and automated environment, you can simplify matters for newcomers to the project, and help to build a reliable way to deliver your applications.
This post will explain:
- How to create a Dockerfile to customize your environment;
- How to compose the dependencies;
- Sample commands to run on a CI server; and
- How to deploy your configuration in a staging/production environment.
First things first. You will need to complete the following:
- Install Docker.
- If you are behind a proxy, you will need to configure it (see the sketch after this list).
- Two tips for Windows users:
- Install a bash terminal (or try the Ubuntu bash on Windows);
- Do not let Git convert line endings to the Windows format, because everything will run on Docker, which is Linux-based:
git config --global core.autocrlf input
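For the proxy case mentioned above, a minimal sketch for a systemd-based Linux host (the proxy address is a placeholder) is to give the Docker daemon the proxy environment variables and restart it:
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /dev/null <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker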
1. Development Environment
Let's keep our machines clean: Docker creates a container that isolates the environment, so if we need to delete it, there will be no orphan files left behind. Let's run some Java code with Gradle as an example:
$ cd /path/to/your/project
# --rm removes the container after it stops
# -v origin:destination maps a local directory into the container
# -w target sets the working directory
# -p host:container maps the host port to the container port
# -d runs in detached mode (try this later)
# docker run <options> <docker image> <command...>
$ docker run --rm -v "$PWD":/app -w /app -p 8080:8080 gradle:alpine gradle bootRun
# To access, use http://localhost:8080
The command will create a Docker container for you. You can do the same to run other languages and tools. Below is an example that creates a Rails application from scratch using a Docker container, showing that you don't need Ruby on your machine:
docker run --rm -v "$PWD":/app -w /app rails:latest rails new --skip-bundle webapp
docker run --rm -d -v "$PWD/webapp":/app -w /app -p 9292:9292 rails:latest /bin/bash -c "bundle install; rake db:migrate; rails s -p 9292 -b 0.0.0.0"
# To access, use http://localhost:9292
To bring it down, you simply need to exit (Ctrl+C). If it is running in detached mode (-d), list the containers to find its ID and stop it:
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
3f3b9bca6569        gradle:alpine       "gradle bootRun"         43 minutes ago      Up 43 minutes       0.0.0.0:8080->8080/tcp   naughty_hawking
c90f601669c2        rails:latest        "/bin/bash -c 'bun..."   6 minutes ago       Up 6 minutes        0.0.0.0:9292->9292/tcp   keen_torvalds
$ docker stop 3f3b9bca6569
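You can also stop containers by the name Docker assigned to them, which is easier to remember than the ID:
$ docker stop naughty_hawking keen_torvalds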
To improve this even further, we can move that command line into a Dockerfile, making the basic configuration part of the default environment, along with any customizations needed to prepare it:
/path/to/your/project/Dockerfile
# base docker image
FROM gradle:alpine
# optional: your name
MAINTAINER Alan Voiski <avoiski@avenuecode.com>
USER root
# Install any dependencies here if needed, for example:
# RUN apt-get install -y nodejs
WORKDIR /user/src/app
# Docker creates a layer for each instruction and reuses unchanged layers
# when you rebuild the image. It is good practice to snapshot your
# dependencies first, because they change less frequently than the code.
ENV GRADLE_USER_HOME /user/src/.gradle
ADD build.gradle /user/src/app
RUN gradle --refresh-dependencies compileJava compileTestJava
# Likely rebuild point: the content added below will change more often than
# the build.gradle file
ADD . /user/src/app
CMD gradle bootRun
Then, to build and run it:
# Build it first
$ docker build . -t gradle_example
# On the dev environment, to keep the local code in sync
$ docker run --rm -v "$PWD":/user/src/app -p 8080:8080 gradle_example
# On the staging environment
$ docker run --rm -d -p 8080 gradle_example
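Since the staging command runs detached and publishes container port 8080 on a random host port, a few standard commands help you find the container, its port mapping, and its logs (the container ID below is a placeholder):
# List containers created from our image
$ docker ps --filter ancestor=gradle_example
# Discover which host port was mapped to the container's 8080
$ docker port <container-id> 8080
# Follow the application logs
$ docker logs -f <container-id>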
To recap what we achieved:
- Self-documentation: the code describes what your environment needs to have.
- Automation: with one command you can create a new environment.
- Sharing: you will be easily able to share this with your team.
- Consistency: everyone, including production environments, runs the same environment. No more settling for "works on my machine".
- Version control: you can track the file changes, which also tells a story about the container changes.
Practically speaking, this also eliminates the need for a long setup tutorial, which was never generic enough to cover every machine, user, problem, or need anyway, not to mention how quickly it became outdated.
2. Service Dependencies
Having an isolated environment is good practice because it prevents one developer's work from interfering with another's, which could produce false positives or break the preconditions of a test scenario. We can use a Docker container to run our database, for example, and then keep working even offline:
docker run -v mongodb_data:/data/db -p 27017:27017 mongo:latest
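To check that the database is up, you can open the Mongo shell inside the running container from another terminal (a quick sketch; the container name is a placeholder, and newer images ship mongosh instead of mongo):
$ docker exec -it <mongo-container> mongo --eval 'db.stats()'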
Of course, it's best if we can document this dependency. docker-compose lets us describe everything in a single manifest, docker-compose.yml, at the project root.
We will need Docker volumes to preserve the data in case you need to recreate the container.
# docker-compose version
version: '2'

# Services list
services:

  # Mongo database
  mongo:
    image: mongo:latest
    volumes:
      # This is an external (named) volume that keeps the data even if you
      # destroy and recreate the container
      - mongodb_data:/data/db

  # Back-end Spring Boot server
  back:
    image: spring-boot-example
    build:
      dockerfile: Dockerfile
      context: .
    volumes:
      # Keeps the current path in sync with the container,
      # for development purposes
      - .:/user/src/app
      # Like the mongo service, this keeps the Gradle cache in a named volume
      - gradle_cache:/user/src/.gradle
    links:
      # Creates a link to the other container: inside this container the
      # hostname "mongodb" resolves to the IP of the mongo service
      - mongo:mongodb
    environment:
      # Environment variable that tells the application where the mongo db is
      - DB_HOST=mongodb:27017
    ports:
      # Exposing the port
      - 8080:8080
    # Restart the server automatically if it goes down
    restart: always

  # Let's say we also have a front-end server; we create a container for it too
  front:
    image: acdc2-dev-front
    build:
      context: client/
      # The Dockerfile path is relative to the build context
      dockerfile: Dockerfile
    command: ng s --host 0.0.0.0 --proxy-config proxy.conf.json
    volumes:
      - ./client:/app
    links:
      - back
    environment:
      - NODE_ENV=development
      - BACK_END=http://back:8080
    ports:
      - 4200:4200
    restart: always

# We need to declare both named volumes
volumes:
  mongodb_data:
  gradle_cache:
Now we just need to run this command:
docker-compose up
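A few other standard docker-compose subcommands cover the day-to-day workflow:
# Validate and print the resolved composition
docker-compose config
# Start everything in the background
docker-compose up -d
# Follow the logs of a single service
docker-compose logs -f back
# Stop and remove the containers (add -v to also drop the named volumes)
docker-compose down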
Once again, here's what we achieved:
- Isolation: we are the owner of the services, and we will not be impacted or affected by another environment.
- Pre-conditions: we have everything we need.
- Offline: we are not dependent upon external services.
- Stability: we will not suffer from external services' downtime or slowness.
- Cleaner: we can reset the environment at any moment.
- Self-documentation: again, the manifest describes all the dependencies.
3. Working with Pipelines
So, we want to automate the delivery process and have better tools in hand? That's it. Forget complex scripts and crazy cookbooks. You can simply run the compose command on any server, and you're done.
But we can do even more than that. Let's say I want:
- To build:
docker-compose build
- To check the code:
# Back-end
docker-compose run --rm back gradle sonarqube
# Front-end
docker-compose run --rm front ng lint
- To run the tests:
# Back-end
docker-compose run --rm back gradle test
# Front-end
docker-compose run --rm front ng test
- To deploy in the staging environment:
(Note: this will deploy to another machine. You can skip the security if you want.)
# IP/host of the staging machine
export DOCKER_HOST=tcp://docker.yourdomain.com:2376
# Configure the security; you will need a certificate from the target machine
# to allow your remote deploy: https://docs.docker.com/engine/security/https/
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/home/someuser/.docker/docker.yourdomain.com
# Run docker-compose in detached mode
docker-compose up -d
- To execute the acceptance tests:
# Back-end
docker-compose run --rm back gradle acceptanceTest
# E2E, another composition using Cucumber
docker-compose -f docker-compose.cucumber.yml run --rm cucumber
Moreover, we can trigger the pipeline whenever new code lands on master, for example using Jenkins with a GitHub webhook.
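As a sketch of what such a job could run (assuming a Jenkins job with an "Execute shell" build step, triggered by a GitHub webhook on pushes to master), the whole pipeline boils down to the same compose commands shown above:
#!/bin/sh -e
# Build the images
docker-compose build
# Static analysis
docker-compose run --rm back gradle sonarqube
docker-compose run --rm front ng lint
# Unit tests
docker-compose run --rm back gradle test
docker-compose run --rm front ng test
# Deploy to staging (with the DOCKER_HOST/TLS variables exported as shown above)
docker-compose up -d
# Acceptance and end-to-end tests
docker-compose run --rm back gradle acceptanceTest
docker-compose -f docker-compose.cucumber.yml run --rm cucumber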
One last time, here's what we achieved:
- Same code: you will use the same Docker manifest to make sure that all the constraints are green.
- Ease of automation: no need to maintain huge amounts of code to point to different service dependencies.
- Cleaner: no need to mess with the CI tool environment.
- Faster feedback: you can easily test the MR or the master push to make sure that it is stable.
CONCLUSION:
So, the main point here is that you will be able to document your environment as code. Your team doesn't need to know how everything runs under the hood, but they can check the code if needed. Moreover, it will always be up to date =)
The next step is to create a robust pipeline to keep your code on track. With the same provided environment, any issue can be easily reproduced. In the next post on this topic, we'll talk in greater detail about how to create a reliable pipeline for continuous delivery.
Until then, if you're ready to go a little deeper, the official Docker documentation is a good place to start.