All DevOps related posts:

    DOCKER HOME LAB | Affect container with Dockerfile

    In the last post we learned how to use Dockerfiles to automate the process of building container images.

    Now we're going to see how we can use Dockerfiles to affect containers beyond the file system.


    DOCKER HOME LAB | Where to Start ?

    DOCKER HOME LAB | Managing Docker Containers

    DOCKER HOME LAB | Commit changes to a container

    DOCKER HOME LAB | Using / Finding / Sharing in Docker Public index

    DOCKER HOME LAB | Building Containers with Dockerfile

    DOCKER HOME LAB | Affect container with Dockerfile

    Affect container defaults with Dockerfile

    Docker containers are about running processes, but there are many properties of the environment that affect those processes, including which user they run as, environment variables, and their access to the network.

    We can use Dockerfiles to specify this extra information about how a container's processes should be run.

    First, there are two instructions that affect what is run. When you run a container you can specify the command to run, but you don't have to if there is a default command on the container.

    The CMD instruction lets us specify this default.

    Back in our sshd server container project directory, we can open our Dockerfile:

    vi Dockerfile

    And here we will add the CMD instruction to run the sshd daemon by default
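    Assuming the sshd setup from the earlier posts (the openssh-server package installed and its runtime directory created), the instruction might look like this; the -D flag keeps sshd in the foreground, which is what a container process needs:

```dockerfile
# Run sshd by default when no command is given to `docker run`
CMD ["/usr/sbin/sshd", "-D"]
```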


    Now after we have built this Dockerfile we will be able to run an sshd container with no command

    First, we build the image

    docker build

    And now we run the container based on the new image

    docker run

    I will stop and remove the container for now so we can continue adding more options

    docker rm

    The other instruction affecting what process is run in a container is ENTRYPOINT.


    ENTRYPOINT can be used to achieve the same effect as CMD, but with ENTRYPOINT you can't specify a different command without using a special flag on docker run. Instead, the command passed to docker run is appended to the ENTRYPOINT command, making the container a little more locked into a particular command. Normally you should use CMD instead of ENTRYPOINT.
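    For comparison, here is a sketch of the same default expressed as an ENTRYPOINT; anything passed to docker run would be appended as arguments to sshd rather than replacing it:

```dockerfile
# Arguments given to `docker run` are appended to this command;
# replacing it requires the --entrypoint flag
ENTRYPOINT ["/usr/sbin/sshd", "-D"]
```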

    Now, while we are in the Dockerfile, let's add some instructions that affect the environment of the processes.

    First, I'm going to use the USER instruction to set which user the command runs as. This is significant because, by default, container processes run as root.

    Here we will run as the nobody user

    set user nobody

    Next we'll set the working directory to /tmp with the WORKDIR instruction

    set working directory

    Lastly, we are going to set an environment variable with the ENV instruction, just to see how it's done

    set environment variables
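    Put together, the three environment-related instructions could look like this (FOOBAR=Hello is the value we check later in this post):

```dockerfile
# Run processes as the unprivileged nobody user instead of root
USER nobody
# Start commands in /tmp
WORKDIR /tmp
# Set an example environment variable
ENV FOOBAR Hello
```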

    Now let’s run a few commands in this container to see the effect of our new instructions

    Let’s rebuild the image and review the details.

    docker build

    And let’s run a container with this command:

    docker run --rm -it galvezjavier/sshd id

    First, we can see the user ID is not root; it's the nobody user

    show user id

    Run docker run --rm -it galvezjavier/sshd pwd to show the current working directory

    Running the command, we can see it is /tmp

    show working directory


    Lastly, if we show the environment variables inside the container, we can see our FOOBAR=Hello value

    environment variables

    There is one more instruction to cover here, which is the EXPOSE instruction.

    EXPOSE tells Docker that this container is going to be running a daemon listening on a particular port. Since containers run on an isolated network, Docker must forward a port for a container process to accept connections. We have been doing this with the -p argument on the docker run command. We still always have to do this, but it is useful to know which port the container will be listening on, so we can provide that metadata using EXPOSE.

    We will EXPOSE 2222, since that is the port set in the sshd config file we were using previously.
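    The instruction itself is a single line:

```dockerfile
# Document that the container listens on TCP port 2222
EXPOSE 2222
```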


    And rebuild the image one more time

    docker build

    The internal port is not that important, since we only ever access it via the port specified when using docker run.

    If we run the container now, even without using -p to open a port, we can see with docker ps that this container is exposing TCP port 2222.

    docker ps

    To summarize, we can use Dockerfile instructions to affect how users ultimately use the container, including the command they run and how it runs.

    In the next post we will take a look at ways in which we can build upon and reuse existing Docker containers.


    DOCKER HOME LAB | Building containers with Dockerfile

    In the previous post we looked at managing, finding and working with containers.

    In this post we will learn how to programmatically build container images using a Dockerfile.




    Manually or interactively making Docker containers can be a pain, especially if you need to do it often or in multiple places. Docker addresses this with the Dockerfile, a file that lets you define how to build a Docker container image. Docker can then build the container from the instructions in that file in the same way, no matter what the environment.

    Let’s try this with our sshd server container from the previous example.

    When we make a Dockerfile it's best to create it inside a directory, often the project directory for the container. It's important to do this because of various conventions in the Docker ecosystem.

    So to start, let's create a directory for our new Docker image project.


    Inside, we will make a file called Dockerfile (you can name it anything, but it makes more sense to name it Dockerfile so people know what it is, and to follow the conventions used by other tooling).

    create Dockerfile

    We will edit the Dockerfile with vim. The format of the Dockerfile is simple.

    It's simply a list of instructions, like a script. Each instruction has a keyword in all capital letters. The first we will use is FROM.

    FROM always goes at the top of the file, and we use it to specify the base image we want to use for this container. In our example we will use Ubuntu, as we have been.

    Dockerfile FROM

    Under FROM, it's best practice to specify an author or owner of the container using the MAINTAINER instruction, which takes a name and e-mail address in standard email format.

    Dockerfile MAINTAINER

    Now for the interesting instructions. The main one is RUN. This is the equivalent of loading bash and interactively running a command. Here I'm running two commands in one; I do this to group significant operations into the same layer, since each instruction also creates a new layer in the image.

    RUN apt-get update && apt-get install -y openssh-server

    Dockerfile RUN
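    Putting the instructions so far together, the whole file looks something like this (the maintainer name and email are placeholders; use your own):

```dockerfile
FROM ubuntu
MAINTAINER Javier Galvez <javier@example.com>
# Group the update and install into a single layer
RUN apt-get update && apt-get install -y openssh-server
```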

    There are many other instructions but we’ll explore them in subsequent posts.

    Now we can build this Dockerfile using the docker build command. It takes the path or URL of the project directory containing a Dockerfile; hence the importance of naming the file Dockerfile and keeping it in its own directory.

    docker build

    When docker build finishes, it will have produced an image, but the image won't have a name, just an ID.

    docker build

    As you may have noticed, for this run I set the -t option on docker build. This flag specifies an image repository name to use for the resulting image, followed by the path to the Dockerfile project, in this case the current directory.

    docker build -t galvezjavier/javier-ssh .

    As you can see, we now have an image with the sshd server package installed, all built from a Dockerfile.

    inspect the new container

    Dockerfiles make an already quick process even quicker, and introduce the concept of source files for containers; we will expand on this concept in later posts.

    Now we will see how to use the ADD instruction to add files to our container image.

    When making container images you may want to include configuration files. Since it is a best practice to version these files, we want to keep them in the source for our container project. But how do we get them into the container? We simply use the ADD instruction in the Dockerfile.

    So let's go back to our sshd server container project directory, where we have our Dockerfile.

    We are going to keep the sshd configuration file in this project and add it to the container we built, for some customization.

    get file from container

    Let's get a default configuration file to start. We can get it from our container by simply running cat and piping the output into a file:

    docker run --rm -it galvezjavier/javier-ssh cat /etc/ssh/sshd_config > sshd_config

    cat sshd_config

    As you can see, we now have a copy of the default sshd configuration file in our project directory.

    edit the config file with custom parameter

    For this demo, let's just edit it and change the port from 22 to 2222.

    Dockerfile ADD

    Now we have a custom configuration file that we want to include in our container image. We will use the ADD instruction in the Dockerfile to add the new file.

    The first argument of ADD is the source path, which is relative to the project directory at build time.

    The second argument is the full path to the destination. It's sort of like a copy command, except it has some interesting characteristics; for example, the source can be a URL, which will be downloaded and placed at the destination. Check the docs for advanced semantics, but often you will use it in this simple way.
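    In our case the instruction would look something like this, copying the customized config from the project directory over the container's default:

```dockerfile
# Source path is relative to the build context (the project directory);
# destination is an absolute path inside the image
ADD sshd_config /etc/ssh/sshd_config
```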

    That's all. Now we can rebuild the image using docker build, as we did last time. The build will use the Dockerfile with the ADD instruction, create a new image, and copy the sshd_config file into the new container.

    docker build

    Then we will just go into the new container and inspect the sshd configuration file.

    verify the custom parameter in the container

    There is our port change. So this is the file from our project directory!

    As you can see, it is very easy to include files from your project directory in your container image. This is perfect for configuration files or other artifacts. Remember you can add an entire directory, or even download files over HTTP.

    In the next post we will see how we can use Dockerfile instructions to affect how users run the container.


    DOCKER HOME LAB | Using Finding Sharing Docker containers

    In the previous post we saw how to create our own container images. In this post I will show you how to share them with the world, which also makes it convenient to access them from other hosts.



    These are the new commands we will be using in this post:

    docker images

    docker login

    docker push

    docker pull

    docker rmi

    Docker registries are services that store Docker images in a way that is layer-aware and efficient.

    There is a free public registry run by the company behind Docker, called Docker Hub. We can upload and download images from this index as an easy way to move containers of our own.

    Docker website

    We need to register an account on the Docker website before we can upload anything to the hub. Visit it, click on the signup link, and log in.

    Docker Website Login

    Now we can authenticate on the command line with docker login, providing the credentials we just set up.

    docker login command line

    Remember, we can run the docker images command to see what images we have on this host. We previously made an sshd container image named galvezjavier/sshd.

    docker image

    So let’s put that online

    For our example, my image is named galvezjavier/sshd, which begins with my Docker user name; this is the convention you need to follow to push images to the Docker index.

    Now we can push the image using the command docker push

    docker push

    As you can see, it is only uploading layers that the registry doesn't know about. Since we built from an Ubuntu image from this repository, it is not uploading any of that, just the changes we made.

    Now if we visit the Docker index website, we can see the uploaded container image.

    Docker Hub

    We should test this image to make sure it works. First we will have to delete the image from our own host

    We can do this with the docker rmi command, not to be confused with docker rm; this new command is for removing images. Make sure you don't have any containers running from this image first.

    docker rmi

    Now let’s pull this image from the index

    Normally we don't need to run docker pull, because if we try to run a container from an image that is not on our host, it will automatically be pulled from the index. However, it can be useful to do this step up front.

    docker pull galvezjavier/sshd

    Now we have our image back from the index. Let's test it.

    We still have to specify which port we want to use to reach the internal port 22, but note that we don't need to specify the command, since we configured it as the default.

    docker run
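    A sketch of that run, assuming we map host port 2222 to the container's internal port 22 as in the earlier posts:

```shell
# No command needed: the image's default (sshd) is used.
# -p maps host port 2222 to the container's internal port 22.
docker run -d -p 2222:22 galvezjavier/sshd
```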

    The image works fine: we can pull the image and run it again. Now the image is public and anybody else can use it!

    There are other ways to move containers of your own if you don't want to make them public, but we will save that for another post.

    Docker index: Find and use third-party containers.

    Now let’s see how we can use the Docker index to find and use third-party containers.

    Part of the beauty of Docker is that it makes shipping software very easy.

    A Docker container has all its system dependencies inside it, and you can run it on any host system with Docker; this makes it ideal for sharing preconfigured open source software.

    I will walk through finding and using existing Dockerized software; in this case we will use a Redis database.

    We want a redis container to run, so where do we start?

    There is actually a docker search command that will search against the public index.

    docker search

    However, I find this is not a great tool for finding something new to start with: if I picked any of the search results, I wouldn't know how to use them, because there are no docs. That's why I always start with the Docker Hub website.

    Docker Hub Index search

    Here we get what looks like similar, if not exactly the same, results, but we can learn much more about them. Unfortunately, many have no descriptions or useful information. The ones that do, however, are marked as official; these are images built automatically from a linked GitHub repo, which ensures you can see the source of the container image, and that source often includes documentation.

    The first in the list is an official build, and we can see that it has a good description and instructions.

    Official images example

    Image with instructions

    What's more, we can click through to the GitHub repository!

    git hub link from image

    Now that we know what we want to pull from the hub, we can just pull it down.

    instructions to pull image

    docker pull redis

    After some quick reading of the instructions, it looks like the container will run the redis server by default and listen on port 6379. So let's run a container, giving it the friendly name jav-redis.

    docker run --name jav-redis -d redis

    Now let's look at the logs to see if everything is fine

    docker logs

    It looks like we can also use this container image to run the Redis client, so we don't even need to install one to test our Redis server.

    This command uses some advanced docker run features, including linking and entrypoint overriding.

    I will explore these in a future post. For now, just run it exactly as I do here.

    docker run --rm -i -t --entrypoint="bash" --link jav-redis:redis redis -c 'redis-cli -h $REDIS_PORT_6379_TCP_ADDR'

    docker run | redis client

    We are in a Redis shell; we can set a value and get it back to see if it is actually talking to the server.

    test and verification of redis container


    Although you may need to do some digging to make sure you know what you are getting, finding Dockerized open source software is quite fun. It is also valuable for seeing how others containerized various projects, which we will be doing in the next post. That's it for now. Stay tuned!


    Docker Home Lab | Commit changes to a Docker container

    Commit changes to Docker containers

    In our previous post we learned how to start and manage containers. You may have noticed that every time we run a container it starts from the same point, or image; none of our changes to the file system stayed with that image.



    In this post we are going to use these new commands to learn how to make changes to a container image.

    docker images

    docker run

    docker diff

    docker commit

    Often you may want to work with containers that are already set up the way you want, instead of setting them up from scratch every time. Perhaps you want them configured a certain way, or certain applications installed.

    In this post we are going to create an image made to run an ssh server. To do this we will make a custom container image based on the stock Ubuntu image.

    Let’s start by seeing what images we have with docker images

    docker images

    Whenever you try to run a container based on an image you don't have, it's downloaded automatically.

    You may see a few; since we have been using the Ubuntu image, you should see Ubuntu listed here.

    To make a new image, we start by running a container to make the changes we want; using bash is the easiest way to make changes interactively.

    Let’s start a container running bash

    docker run -i -t ubuntu bash

    Let's check to ensure that the ssh server hasn't been installed yet.


    Before we install anything, let's update our apt package index; this will ensure we are working with the latest packages.

    apt-get update

    Let's install the OpenSSH server package with apt-get install

    apt-get install openssh-server

    We used the full path because sshd is particular about how it is started.

    full path

    Looks like it needs a directory created. When we ran it in our previous post, we had to create this directory in the same compound command because the container didn't have it. Making it now will ensure it is there in the image we make.


    If you recall, we also have to set a password for the root user, since we will be connecting as root and ssh will ask for a password; let's set that here.

    set config

    We also need to update the sshd config file to allow root to log in, plus another small change to allow authentication to work.

    set login config root
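    The interactive changes described above amount to something like the following, run inside the container. The sed edit is an assumed sketch; the exact sshd_config defaults vary by Ubuntu version, so adjust to what you find in the file:

```shell
# Directory sshd expects at startup
mkdir -p /var/run/sshd
# Set a root password so we can log in over ssh (prompts interactively)
passwd root
# Allow root login with a password (assumed edit; check your config first)
sed -i 's/^PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
```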

    That's all the changes we will make, so let's exit.


    The container is now stopped, but it's still around with its changes. We can inspect the changes to the file system using docker diff; this shows us exactly which files were changed after installing openssh-server and running those commands inside the container. Let's run it through less so we can look through it more easily.

    docker diff

    You can see it installed a number of files all over the place; this is normally what happens when you install software, and it's nice that Docker lets us see it so easily.

    To persist our changes to an image, we use docker commit with the container ID and an image name; the common convention is to use username/image-name.

    docker commit
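    A sketch of the commit, where the ID placeholder stands for your stopped container's ID from docker ps -a:

```shell
# Replace <container-id> with the ID shown by `docker ps -a`
docker commit <container-id> galvezjavier/sshd
```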

    Now let's start sshd in a container from this new image; remember that we need to open port 22 in order to be able to connect to it.

    docker run
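    A sketch of that run, publishing host port 2222 to the container's port 22 and starting sshd in the foreground (the host port choice is arbitrary):

```shell
# -d detaches; -p publishes host port 2222 to container port 22
docker run -d -p 2222:22 galvezjavier/sshd /usr/sbin/sshd -D
```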

    The container is now created; let's connect to it.

    ssh to container


    Lastly, when you commit a container to an image, by default it also remembers the last run parameters you used. If we commit again from this last container, next time we would not need to specify the sshd command or port 22, since those will already have been learned from this container.

    Now we can clean up the container

    docker rm

    As you can see, this workflow is very simple, making it very easy to create new container images; you can do it over and over to customize or fork container images however you like.

    In the next post we will look at ways to share your changed containers with other hosts and the rest of the world.


    Docker Home Lab | Managing Containers

    Managing Docker Containers

    In the previous post we saw a number of ways to create and start containers. Once you have started them, there are various commands to manage them. You were already introduced to a few, but let's look at some more of them.



    In this post I will introduce these commands:

    docker ps

    docker inspect

    docker logs

    docker stop

    docker kill

    docker rm

    Don't be intimidated; most of these are very simple commands.

    Managing containers is mostly a matter of managing processes, but in a way that is container aware.

    Docker provides a suite of utilities to manage container processes in much the same way you would with regular processes.

    We are going to see how you can inspect containers, see their logs, stop them, and ultimately remove them.

    Let's begin by starting a couple of containers that will stick around by doing pointless work. Ping will work well.

    docker run -d ubuntu ping

    docker run -d ubuntu
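    For example, each container can be started like this (the ping target is arbitrary; anything that keeps the process busy works):

```shell
# -d detaches the container; ping keeps it running indefinitely
docker run -d ubuntu ping 127.0.0.1
```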

    We can see all the running containers with docker ps

    docker ps

    This shows relevant information about the containers, the most useful of which is the container ID, which we can use with other commands.

    However you might also notice the names generated for each container

    You can also use this human-friendly name to refer to a container, and you can even set the name yourself using the --name flag with docker run.

    docker ps

    One frequently used flag of docker ps is -a, which shows all containers, including containers that have stopped.

    docker ps -a

    We can use the IDs listed to get more information about any container, running or not.

    For example, we can take any container and inspect it with docker inspect. This gives us a JSON data structure of the state of that container. Take a look and see what kind of information is associated with the container.

    docker inspect

    Again, we can also inspect a container by its name, like this:

    docker inspect

    Let's dig deeper into these processes. We can see the output of any container, running or not, using docker logs.

    If it is a running container, we can follow the logs live, as if we were using tail -f.

    docker logs

    docker logs lets you see process output, but if you want to interact with a detached container process you attach to it using docker attach. This is a more advanced command, so I'll let you explore it on your own; be careful, because if you don't use it correctly you can lock up your terminal.

    Checking and copying the container ID can be a bit annoying; names make it a little easier, but here is a quick trick for working with the last container you made.

    Let's make a detached container

    docker run

    This will return the full container ID, but we don't even need to bother with that, because we can run a variant of docker ps to get the ID of the last container we made.

    Just use the -l flag to show only the last container, and -q to show only the container IDs. We can use this as a subcommand, for example when getting logs.

    docker ps
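    Combined, the trick looks like this:

```shell
# -l: only the last created container, -q: print only its ID
docker logs $(docker ps -l -q)
```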

    When your container processes don't end by themselves, you can stop them with docker stop.

    docker stop

    By default it will try to stop the process with SIGTERM, then kill it with SIGKILL after a timeout.

    I'm usually more forceful, so I often skip docker stop and use docker kill.

    Now if we look at our docker ps we don’t have any running containers


    But if we run docker ps -a, we can see they are still around. They are not running, but their filesystem and metadata still exist.

    docker ps -a

    This can be useful for inspecting logs after you stop the process

    If you don't run commands with the --rm option, or you run a lot of detached processes, you end up with a lot of stopped containers sitting around. You can delete them individually with docker rm.

    docker rm

    But if you ever want to just clear them all out, including running containers, you can use docker ps -aq, which lists all container IDs, with xargs to call docker rm -f for each ID, like this:

    docker ps -aq | xargs docker rm -f

    Now we have no running or stopped containers listed.

    docker ps -a

    To wrap up, we have learned all the basic commands for working with the process aspect of containers.

    We can now start, list, stop, inspect, follow the logs of, and remove our containers.

    In the next post we will continue by exploring how to commit changes to a container image. Stay tuned!


    Infrastructure as Code! or Composable Infrastructure

    Infrastructure as Code!

    In this initial post about composable infrastructure, I will set up the concept of "infrastructure as code", and in a subsequent post we will implement a basic example, revisiting our DevOps sections to create a composable infrastructure using Chef and Docker.

    So... what is composable infrastructure?

    It is all about speed, agility, and time to value.

    Think of a workload-centric approach and the challenge companies face today in delivering services quickly. Companies have mixed workloads, and the cloud has opened the door for many people to do things differently, yet many companies also want to maintain control because of data and security compliance.

    Composable infrastructure enables companies to implement cloud on premises, still in a hybrid environment, with a platform that gives them flexibility. Basically, composable infrastructure is a fluid pool of resources that a developer or a VP of operations can compose to the exact needs of the application, reducing waste and overprovisioning, lowering capex, improving utilization, and staying simple to maintain.

    Composable infrastructure is a total leapfrog in technology, a totally new way of thinking about it.

    Traditional infrastructure is defined as servers, storage, and networking switches. Data centers were built on traditional infrastructure, as that is where the industry started. The challenge for traditional IT infrastructure is that, because compute, storage and network run on different platforms, they create many physical islands of highly underutilized resources. The management tools don't cross those divides, so they also create silos of management, which makes it extremely difficult for server, storage and networking admins to work efficiently.


    Converged infrastructure was nothing more than putting together compute, storage and networking in a prepackaged way, optimized for a specific workload like VDI, and that delivered an evolutionary leap over traditional IT.

    While converged infrastructure does have some benefits, it achieves them at the expense of creating a management silo around it; in this case, though, it's a silo created around workloads rather than hardware. Furthermore, the management of the servers, storage, and networking is often still done largely in islands, even when the equipment physically integrates them.


    The next evolutionary step was hyper-converged systems, which bring compute, storage and networking together into a single, easy-to-consume solution that supports virtual workloads not requiring connectivity to SANs. As a result, hyper-convergence also creates a management silo around these systems.


    While both converged and hyper-converged approaches have merit, they fall short of the ultimate goal: a single platform with a single operational model for all workloads.

    The challenges

    The challenges companies face today are how to bridge from the old style of business to the new, how to modernize the core infrastructure, how to deploy an infrastructure that is cloud-ready, how to make the lives of developers much easier, and ultimately how to take a radical step forward in the way infrastructure is deployed, provisioned and managed.

    This is not evolutionary; this is totally revolutionary!

    Think about it. You have traditional IT, converged, and hyper-converged for specific sets of use cases. Enterprises need to become composable, because ultimately all CIOs have to become service brokers; for that, you have to remove all the complexity from the service broker and give them a control point where they can compose their infrastructure to their specific needs. That is a totally different way to think about infrastructure.

    Composable Infrastructure

    A composable infrastructure supports physical, virtualized and containerized workloads, with fully integrated lifecycle management and fully integrated provisioning; now you have a single pane of glass to control everything. Think of it as infrastructure as code!


    Docker Home Lab!

    Where to start with Docker?

    The idea of this series of posts is to ease the first steps with Docker: from building a Vagrant VM where we will install the Docker server, to running a simple container, and slowly moving on to more advanced topics. Sit back, open your terminal, relax, and enjoy the flight...




    Setting up Docker inside a VM


    For ease of use, I will use Vagrant and VirtualBox to build the environment to explore Docker in our home lab!

    Once both are installed on your laptop, create a directory where the VM files and project will be stored. In my case, I have created a docker folder inside my home directory.

    Once the directory is created, run the command vagrant init inside the new folder as shown here:

vagrant init ubuntu/trusty64 - this gives us an Ubuntu 14.04 image


    Now run the command vagrant up to download the image and start the VM


    This process will take some time depending on your internet connection

We can test the new VM by connecting via ssh with the vagrant ssh command


    Now that we are inside the VM, let’s update the OS


    Update package information with apt-get update


    Ensure that APT works with the https method, and that CA certificates are installed.
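The command for this step was lost from the original post; on Ubuntu 14.04 it is typically:

```shell
# Install the packages that let APT fetch over https and verify certificates
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates
```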


    Add GPG Key
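The command itself is missing here; the Docker documentation of this era added the repository key roughly like this (verify the keyserver and key ID against the official docs before using them):

```shell
# Import the Docker repository signing key (key ID as published at the time)
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
    --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
```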


    Create a file for Docker repository source

    vi /etc/apt/sources.list.d/docker.list

    Set the following source for Ubuntu Trusty 14.04:

    deb ubuntu-trusty main

    Now update the apt package index with apt-get update

    Purge old repo with the following command: sudo apt-get purge lxc-docker

    Verify apt is pulling from the right repository with the following command

    sudo apt-cache policy docker-engine

    and now run apt-get upgrade


    Update package manager with apt-get update and install the recommended package


If AppArmor is not installed, install it with the following command
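The command is missing from the post; it would be something like:

```shell
# Install AppArmor, which Docker expects on Ubuntu
sudo apt-get install -y apparmor
```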


Update the package manager and install the following required packages


After this process is completed, reboot the VM.

A picture is worth a thousand words, but a video says it all. Here is a video with all the steps to prepare our Vagrant VM.


    Now we are ready to install Docker. Please note that I will follow the instructions from the official documentation:

    Connect to the VM, update the package manager and install Docker with the following command:


    Start the Docker daemon


    Verify Docker is installed correctly with the following command: sudo docker run hello-world


    Create a Docker group


Log out and log back in to the VM with vagrant ssh

    Verify you can run Docker without sudo


    To adjust memory and swap accounting edit the following file:

    sudo vi /etc/default/grub


Set the line to: GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"


    Save and close the file and update GRUB
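On Ubuntu this is done with:

```shell
# Regenerate the GRUB configuration so the new kernel options take effect
sudo update-grub
```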


    Reboot the VM

Enable UFW forwarding if UFW is enabled


    Configure DNS server for use by Docker

    sudo vi /etc/default/docker

    Add the following setting for Docker.

    DOCKER_OPTS="--dns --dns"


    Save and close the file and restart Docker daemon


To check the installed version of Docker, run the following command

    docker version


The Docker installation is now complete.

Exit and stop the VM with the vagrant halt command


    To start the VM run vagrant up inside the Docker folder.


    Docker Basic commands

    Now we have the base VM with Docker ready to play.

    Let’s create a bash shell inside a container with the following command:

    docker run -t -i ubuntu /bin/bash


    In less than a second I have a bash shell running inside a container!!

    Docker Objects

Images: Like VM images, these are templates for containers. They consist mostly of a filesystem, but also contain metadata about the container, including how it should operate by default. If you want to move or share a container, you do so by making an image, a snapshot of it. Docker manages images on a host and gives you the tools to transport them across hosts, usually via registries.

Containers: Containers are instances of images, basically copies. This is where your isolated process runs. To the process, it looks like it has its own system based on the filesystem of the image. Docker manages containers as processes, which means you can stop them, start them, run them in the background, and so on.

Dockerfiles: Dockerfiles are build files that instruct Docker how to consistently build a container image.

Registries: Registries serve images that you can pull from and push to with Docker. The Docker index is the main public registry.

How does Docker work?

When you start a container, it is always based on an image; however, the image filesystem is not actually copied. Docker containers use a layered copy-on-write filesystem, which means the files you see in a container when it starts are actually the files in the image filesystem, until a change is made.

Changes made in the container collectively make up a layer and can be committed into an image.

Images are just a collection of layers, or filesystem changes. This allows Docker to be efficient with disk space; it is also why registries are important, since they are aware of layers and can send only the layers you don't have when you pull an image.
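You can see these layers for yourself: assuming the ubuntu image has been pulled, docker history lists the layers the image is built from, along with the size each layer adds.

```shell
# Show the layer stack of the ubuntu image
docker history ubuntu
```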

Containers are not just a filesystem; they always run with a command. This command's process runs in isolation using kernel namespaces and cgroups, which not only sandboxes the process but lets us assign resource limits to it. This is known as Linux containerization, or LXC.
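As a quick sketch of those cgroup resource limits (the -m and -c flags were the memory and CPU-shares options in Docker of this era):

```shell
# Cap the container at 128MB of memory and give it a lower CPU-shares weight
docker run --rm -m 128m -c 512 ubuntu ls /
```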

Docker containers are much more than Linux containers: while Linux containers focus on process isolation, Docker builds on this and provides a higher-level tool for a more portable container.

    VMs vs Docker Containers

    Docker commands and workflows

    Docker run

    Let’s start with some basic examples. Log back to the VM that we previously prepared with Docker and run the following command:

    docker run ubuntu ls


With this command you get a listing of the directory inside the container. Please also note that the container is stopped once the process has ended. We can see the container is still around using the docker ps -a command


To remove the container you can run docker rm plus the uid of the container


With the docker run --rm option, the container will be automatically deleted after the process has ended.
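For example, a trivial run-and-delete:

```shell
# The container is removed as soon as ls exits
docker run --rm ubuntu ls /
```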

Let's run another example with run, starting the vim editor


    As you can see, it can’t find vim inside this container. We will need to install it using apt-get install inside the container


In this case the command is aborted because it is waiting for confirmation, and we were not running the command interactively.

Interactive commands are generally run with two options: -i, which makes the command interactive, and -t, which emulates a text terminal. This way the command operates more like when we normally work at the terminal.

Let's continue with the example, adding the interactive options to the same command to install vim in a container: docker run --rm -i -t ubuntu apt-get install vim


But as you may realize, this command is somewhat pointless if the container is removed after it finishes installing vim. I will address this later.

    Let’s continue with a more complex command to install vim and run it.

    docker run --rm -i -t ubuntu /bin/bash -c "apt-get install -y vim && vim"



    After the command is completed the container is removed


Now let's explore a command that runs in the background. This means we will not be attaching to the process; this is frequently done for daemon processes, such as running a server.

In this example, I will install, set up, and run an OpenSSH server. I will also be using the port mapping option, since we want to connect to the ssh server. Unlike when running in the foreground, running in detached mode returns the container id. You can then use this id to interact with the container.

    docker run -d -p 2222:22 ubuntu /bin/bash -c "apt-get install -y openssh-server && mkdir /var/run/sshd && echo 'root:demo' | chpasswd && sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/g' /etc/ssh/sshd_config && sed -ri 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config && /usr/sbin/sshd -D"


    Now let’s ssh into the new container with ssh root@localhost -p 2222


We just SSH'd into the container!

All commands executed here will be subprocesses of our containerized command and will also be isolated in the container. Since our command is running indefinitely in detached mode, we have to stop it explicitly. I'm going to use docker kill to do this, and manually clean up with docker rm, as shown here:
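The commands would look like this, where <container-id> is a placeholder for the id that the detached docker run returned:

```shell
# Stop the detached container, then remove it
docker kill <container-id>
docker rm <container-id>
```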


In this post we learned how to run commands using docker run, as well as some supporting commands. We saw how to run basic foreground commands, interactive commands with a pseudo-terminal, and even how to run a daemon process in the background. These workflows are the foundations of using Docker. We will continue to explore more advanced Docker commands and options in the next post. Click here to continue to the next post!!


Click here to see the initial post

    6 Node GlusterFS cluster and dd

Quick and dirty geek post today! Just for Benjamin, who was eager to see how I killed my laptop! 🙂

In the aforementioned laptop disaster I had built 6 CentOS VMs with 2GB RAM each in VirtualBox. The laptop itself sports a modest 8GB of main memory. While I built the VMs and set up GlusterFS, all was fine. When I ran dd to create a 2GB file full of zeroes, I had expected some serious swapping but nothing more. I was in for a surprise when I was greeted with a black screen, an error message and, shortly after that, a reboot!

    Either way, The All-In-Onster to the rescue!!! 😀

Apologies for the slight blur; this is my first screen capture. I had forgotten to enable Do-Not-Disturb and got various notifications, so I had to do some editing trickery! Not a master of said art, thus the rather low video quality! 🙂

Click here to see the initial post

    Homelab - The All-in-Onster Part IV - Docker and KVM


    Today's post will be short and snappy, lots of code, little boring text. I will just be installing KVM and Docker, marry both with our ZFS installation and run a quick test on each just to make sure everything works as expected.

Click here to see the initial post

    DevOps in 30 min. Ansible


    Ansible is simple, has a very low learning curve, and is just pleasant and intuitive to use. You don't have to install a client or anything special on the children nodes (assuming they have Python 2.6 or greater installed - which most popular Linux distros do), so you can be up and running nearly instantly. This is particularly useful if you already have legacy systems in place - you don't have to mess with installing and managing a special child-client service on your existing servers.




    Execution Order

    Ansible executes the directives in the order you'd expect: sequentially as they're written in your directives script.



Ansible's directives are written in YAML and its templates in Jinja2. Both are very simple and easy to learn. This makes Ansible very accessible for developers of all languages.


    Remote Execution

    Remote execution functionality is built-in and available immediately - again, without even having to install anything on the children nodes.



    Directives = Tasks

    Directives Script = Playbook

    Master Node = Control Machine

    Children Nodes = Hosts


    Node Setup

    Make sure you first set up your servers according to the instructions in the previous post.


    Install Ansible on master node

    Let's get started with the walk-through by setting up the master node:

    root@master:~# apt-get update

    root@master:~# apt-get install python-software-properties -y

    root@master:~# add-apt-repository ppa:rquillo/ansible -y

    root@master:~# apt-get update

    root@master:~# apt-get install ansible -y


    We run the apt-get update twice since we need to get the updated package lists first in order to install python-software-properties and then again to update the package lists for the rquillo/ansible repository we added. The installation documentation is at:


    Tell the master node about the children

    Ansible requires a hosts inventory file to be set. This file specifies the children nodes to connect with. Optionally, you can also specify connection options for the servers and group them logically if needed.

    Grouping the servers allows you to target specific groups of servers as you'll soon see. The file is in the INI format.

    So, for this project let's do a 'Dschinghis' group and a 'Khan' group to allow us to target them easily.

    Create /root/inventory.ini with this content:
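The file contents are missing from the post; assuming the Dschinghis and Khan hostnames from the lab-setup post, a minimal version would look like:

```ini
; Two groups, one host each, matching the lab VMs
[Dschinghis]
Dschinghis

[Khan]
Khan
```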



    The inventory documentation is at:


    Setup connectivity to the children


Usually you will already have access to your servers via a method like ssh keys, so you often won't even need this step. If you're working on legacy systems this is especially great, since you can be up and running without having to install anything on the children nodes.


    Now, before we start working with the children nodes, we want to be able to connect to them without having to enter in a password every time. So let's use ssh keys to connect to the children nodes.


    Generate the ssh key on the master node:

    root@master:~# ssh-keygen -t rsa -C "" -N "" -f /root/.ssh/id_rsa


Now let's view the content of our new public key ( /root/.ssh/id_rsa.pub ) so we can put it on the children nodes:

root@master:~# cat /root/.ssh/id_rsa.pub

Copy the contents of the public key from the master server and then paste it into /root/.ssh/authorized_keys on both the Dschinghis and Khan servers.

Note: Unfortunately, ssh-keygen doesn't have verbose flags (like --long-flag-name ), so here's a key to the flags we used:

-t specifies the key type to use.

    -C specifies the comment, which is typically your email address.

    -N specifies the passphrase to use. Never use an empty passphrase for production keys.

    -f specifies the location to save the generated keys. This also generates the public key in the same directory.


    Now you can test the connectivity:

    root@master:~# ansible all --module-name ping --inventory-file=/root/inventory.ini | success >> {

    "changed": false,

    "ping": "pong"

    } | success >> {

    "changed": false,

    "ping": "pong"



    Let's look at the parts of that command in more detail:

ansible all runs Ansible against "all" of the children nodes (as opposed to a subgroup of them).

--module-name ping runs the ping module on the children nodes for a quick ping/pong connectivity check. Often --module-name is abbreviated to just -m.

--inventory-file=/root/inventory.ini specifies the file where the list of children nodes is defined.


    If you don't set up ssh keys, but still try to connect to the children, you'll get an error like:

    root@master:~# ansible all --module-name ping --inventory-file=/root/inventory.ini | FAILED => SSH encountered an unknown error during the connection. We

    recommend you re-run the command using -vvvv, which will enable SSH debugging output

    to help diagnose the issue



We don't want to have to specify the inventory file every time, so let's add that as a setting in Ansible's configuration.

Create /root/ansible.cfg (INI formatted) with the setting to define the location of the inventory file:

[defaults]
hostfile = /root/inventory.ini

    Now we can save a little typing:

    root@master:~# ansible all -m ping | success >> {

    "changed": false,

    "ping": "pong"

    } | success >> {

    "changed": false,

    "ping": "pong"


The SSH error shown earlier occurs because (without ssh keys) you need to install the sshpass package and then add the --ask-pass flag to the ansible command:

root@master:~# apt-get install sshpass -y

root@master:~# ansible all --module-name ping --inventory-file=/root/inventory.ini --ask-pass


If you still have problems, then follow the error message's advice and add -vvvv to the end of the command to get the verbose connection debugging output.


    The configuration documentation is at:


    Remote execution

    Ansible gives you remote execution capabilities right out of the box.

    Here's a quick example:

    root@master:~# ansible all -m command -a "date" | success | rc=0 >>

    Sat Nov 10 23:13:19 EDT 2015 | success | rc=0 >>

    Sat Nov 10 23:13:18 EDT 2015


Ansible has default locations where it automatically looks for the inventory and configuration files. Had we just put the inventory file in the default /etc/ansible/hosts location, then we never would have needed to specify --inventory-file=/root/inventory.ini. However, we set it in a custom location so I could show you how to set it in the configuration file.

    Ansible automatically picks up the configuration data in this order:

    ANSIBLE_CONFIG (an environment variable)

    ansible.cfg (in the current directory)

    .ansible.cfg (in the home directory)



If you wanted to run the command on just the 'Dschinghis' group of servers, you'd do:

    root@master:~# ansible Dschinghis -m command -a "date" | success | rc=0 >>

    Sat Nov 10 23:14:06 EDT 2015


Targeting server groups comes in handy for real-life scenarios, since you'll often want to group and target your servers by their function (webserver, db, cache, etc):
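The example inventory is missing from the post; a function-based grouping might look like this (hostnames are hypothetical):

```ini
; Group hosts by role so plays can target "webservers" or "dbservers"
[webservers]
web1
web2

[dbservers]
db1
```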




    Then you can target them by group:

    root@master:~# ansible webservers -m command -a "date"


    Another great feature is the documentation via the command line:

    root@master:~# ansible-doc --list

    accelerate Enable accelerated mode on remote node

    acl Sets and retrieves file ACL information.

    add_host add a host (and alternatively a group) to the ansible-playbo

    airbrake_deployment Notify airbrake about app deployments

    apt Manages apt-packages

    apt_key Add or remove an apt key

    ...output truncated...

    root@master:~# ansible-doc apt

    > APT

    Manages `apt' packages (such as for Debian/Ubuntu).

    Options (= is mandatory):

    - cache_valid_time

    If `update_cache' is specified and the last run is less or

    equal than `cache_valid_time' seconds ago, the `update_cache'

    gets skipped.

    ...output truncated...


    Setting up the directives

    Now we're ready to set up the directives to install everything.

    You'll notice that a directive is just a module that is passed the

    parameters you specify.

    The documentation for Ansible directive modules is at:


    nginx package

We know we'll need to put our image and html files in the nginx web root directory, so let's install nginx first.

First, we'll create the directives script called taste.yml in /root and add the nginx directive:


- hosts: all
  tasks:
    - name: ensure nginx is installed
      apt: pkg=nginx state=present update_cache=yes

We want nginx to be installed on all the children nodes, so we set hosts to all. (The earlier grouping we did in the inventory file could be used here as well if we wanted to target a specific group of servers.)

The tasks section is where we put our directives for this set of hosts. The name can be any text that is helpful for you to remember what the directive does.

The apt line is the actual directive (module + parameters) that will be run.

You can see that we're just using the apt package manager module to ensure nginx is installed. We add the update_cache=yes parameter so that an apt-get update is performed before nginx is installed.


    First run

    Let's run this against the children nodes now:

    root@master:~# ansible-playbook taste.yml

    PLAY [all] ********************************************************************

    GATHERING FACTS ***************************************************************

    ok: []

    ok: []

    TASK: [ensure nginx is installed] *********************************************

    changed: []

    changed: []

    PLAY RECAP ******************************************************************** : ok=2 changed=1 unreachable=0 failed=0 : ok=2 changed=1 unreachable=0 failed=0

    Nice - it installed smoothly.


    Image files

    Now, let's set up the image files.

    Download the image files to the master node in /root

    root@master:~# wget

    root@master:~# wget


    Now, update the taste.yml file to include the image directives:


- hosts: all
  tasks:
    - name: ensure nginx is installed
      apt: pkg=nginx state=present update_cache=yes

- hosts: Dschinghis
  tasks:
    - name: ensure Dschinghis.jpg is present
      copy: src=/root/Dschinghis.jpg dest=/usr/share/nginx/www/Dschinghis.jpg

- hosts: Khan
  tasks:
    - name: ensure Khan.jpg is present
      copy: src=/root/Khan.jpg dest=/usr/share/nginx/www/Khan.jpg

You can see now that we're using hosts to target which servers the directives get run on. You'll recall we added a Dschinghis and a Khan group in the /root/inventory.ini file earlier, which allows us to do this.

    Let's run the new directives:

    root@master:~# ansible-playbook taste.yml

    PLAY [all] ********************************************************************

    GATHERING FACTS ***************************************************************

    ok: []

    ok: []


    Instead of downloading the images and using the copy directive, we could have used the

    get_url directive.


    TASK: [ensure nginx is installed] *********************************************

    ok: []

    ok: []

    PLAY [Dschinghis] ******************************************************************

    TASK: [ensure Dschinghis.jpg is present] *******************************************

    changed: []

    PLAY [Khan] ******************************************************************

    TASK: [ensure Khan.jpg is present] *******************************************

    changed: []

    PLAY RECAP ******************************************************************** : ok=3 changed=1 unreachable=0 failed=0 : ok=3 changed=1 unreachable=0 failed=0

You can see from the output that Ansible put the images on the correct servers.



    User / Group and ownerships

Now, we need to create the Dschinghis/Khan groups and users so we can update the image file ownerships.

Remember that Ansible runs the directives sequentially, so we'll have to put the directives in this order: create group > create user > change file ownership.


- hosts: all
  tasks:
    - name: ensure nginx is installed
      apt: pkg=nginx state=present update_cache=yes

- hosts: Dschinghis
  tasks:
    - name: ensure Dschinghis group is present
      group: name=Dschinghis state=present
    - name: ensure Dschinghis user is present
      user: name=Dschinghis state=present group=Dschinghis
    - name: ensure Dschinghis.jpg is present
      copy: src=/root/Dschinghis.jpg dest=/usr/share/nginx/www/Dschinghis.jpg owner=Dschinghis group=Dschinghis mode=664

- hosts: Khan
  tasks:
    - name: ensure Khan group is present
      group: name=Khan state=present
    - name: ensure Khan user is present
      user: name=Khan state=present group=Khan
    - name: ensure Khan.jpg is present
      copy: src=/root/Khan.jpg dest=/usr/share/nginx/www/Khan.jpg owner=Khan group=Khan mode=664

    We can specify the file ownership with our existing copy directive, so

    we've just used that.

    Now run the new directives:

    root@master:~# ansible-playbook taste.yml

    ...output omitted...


    HTML template

    Now, we'll make the html template with the Jinja2 templating language.

    Create the html template as index.j2 in /root and add these contents:


<html>
  <body bgcolor="gray">
    <img src="/{{bebe}}.jpg">
  </body>
</html>





You'll notice the 'bebe' variable that I set in the Jinja2 syntax with double curly brackets. That variable will be parsed when the template is rendered.

    We'll declare that variable in our directives script like this:

- hosts: Dschinghis
  vars:
    bebe: Dschinghis



    Now let's add the directive for the template.

    The full taste.yml now looks like:


- hosts: all
  tasks:
    - name: ensure nginx is installed
      apt: pkg=nginx state=present update_cache=yes

- hosts: Dschinghis
  vars:
    bebe: Dschinghis
  tasks:
    - name: ensure Dschinghis group is present
      group: name=Dschinghis state=present
    - name: ensure Dschinghis user is present
      user: name=Dschinghis state=present group=Dschinghis
    - name: ensure Dschinghis.jpg is present
      copy: src=/root/Dschinghis.jpg dest=/usr/share/nginx/www/Dschinghis.jpg owner=Dschinghis group=Dschinghis mode=664
    - name: ensure index.html template is installed
      template: src=/root/index.j2

- hosts: Khan
  vars:
    bebe: Khan
  tasks:
    - name: ensure Khan group is present
      group: name=Khan state=present
    - name: ensure Khan user is present
      user: name=Khan state=present group=Khan
    - name: ensure Khan.jpg is present
      copy: src=/root/Khan.jpg dest=/usr/share/nginx/www/Khan.jpg owner=Khan group=Khan mode=664
    - name: ensure index.html template is installed
      template: src=/root/index.j2 dest=/usr/share/nginx/www/index.html

    Then run the new directives:

    root@master:~# ansible-playbook taste.yml

    ...output omitted...


    Run nginx

    The last thing we need to do is ensure nginx is running so we can browse to our Dschinghis/Khan sites. Update this part of taste.yml :

- hosts: all
  tasks:
    - name: ensure nginx is installed
      apt: pkg=nginx state=present update_cache=yes
    - name: ensure nginx is running
      service: name=nginx state=started

    Run the new directive:

    root@master:~# ansible-playbook taste.yml

    ...output omitted...


    Now we can browse to our Dschinghis/Khan sites!



Ansible has the lowest learning curve of all the CM tools, so if you found this post at all challenging, you should use Ansible and not even consider the other tools.

For convenience, here's the full final taste.yml with some added whitespace and comments for clarity:


# Directives for all children nodes
- hosts: all
  tasks:
    - name: Ensure nginx is installed.
      apt: pkg=nginx state=present update_cache=yes
    - name: Ensure nginx is running.
      service: name=nginx state=started

# Directives for Dschinghis node
- hosts: Dschinghis
  vars:
    bebe: Dschinghis
  tasks:
    - name: Ensure Dschinghis group is present.
      group: name=Dschinghis state=present
    - name: Ensure Dschinghis user is present.
      user: name=Dschinghis state=present group=Dschinghis
    - name: Ensure Dschinghis.jpg is present.
      copy: src=/root/Dschinghis.jpg dest=/usr/share/nginx/www/Dschinghis.jpg owner=Dschinghis group=Dschinghis mode=664
    - name: Ensure index.html template is installed.
      template: src=/root/index.j2

# Directives for Khan node
- hosts: Khan
  vars:
    bebe: Khan
  tasks:
    - name: Ensure Khan group is present.
      group: name=Khan state=present
    - name: Ensure Khan user is present.
      user: name=Khan state=present group=Khan
    - name: Ensure Khan.jpg is present.
      copy: src=/root/Khan.jpg dest=/usr/share/nginx/www/Khan.jpg owner=Khan group=Khan mode=664
    - name: Ensure index.html template is installed.
      template: src=/root/index.j2 dest=/usr/share/nginx/www/index.html

Click here to see the initial post

    Container Orchestration using CoreOS and Kubernetes

A quick post on a not-to-be-missed workshop on modern container orchestration using CoreOS and Kubernetes.

    This meetup event was organized by Brian Redbeard from CoreOS:

    "The last couple of years have seen a fundamental shift in application automation built on top of Linux containers. As more organizations take containers into production, the demand for container orchestration has grown tremendously. The number of tools to manage Linux containers has exploded and luckily today there are a variety of ways modern orchestration can be accomplished.

    Enter Kubernetes, the container cluster manager from Google, and CoreOS, the Linux OS designed for running containers at scale. When combined Kubernetes and CoreOS deliver an ideal platform for running containers and offers a deployment workflow that puts the focus on applications and not individual machines.

    This hands-on workshop will teach modern practices for container orchestration, and show examples of how components work together to manage a cluster of Linux containers. With its ability to power infrastructure in the cloud or on bare-metal, the session will use Kubernetes with CoreOS as an example showing attendees how to deploy and manage a multi-tier web application."

    Container Orchestration using CoreOS and Kubernetes, Part 1/3 workshop

    Container Orchestration using CoreOS and Kubernetes, Part 2/3 workshop

    Container Orchestration using CoreOS and Kubernetes, Part 3/3 workshop

And this is it for now. Stay tuned!

Click here to see the initial post

    DevOps in 30 min. The Lab Setup

You can build your lab environment using VirtualBox, Vagrant, or in a public cloud using AWS, Azure, or even Ravello Systems. The lab for this DevOps series is independent of the platform you prefer to use.

    I'll be launching 3-4 fresh VMs on VMware Workstation and they will all use Ubuntu 14.04 x64:

    • master - server
    • Dschinghis - node
    • Khan - node

Chef is an exception, since it also requires a machine to interact with the master. We'll be setting up an additional VM as a 'workstation' for that purpose with Chef. Also, for most of the VMs, you can use 512MB of RAM. However, for the Chef master VM, you'll need at least 1GB of RAM. To simplify the setup I will set all the VMs to 1024MB of RAM.



    To keep the setup as minimal and simple as possible, we'll just use /etc/hosts to set the DNS on each of the VMs, like so:
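The example entries are missing from the post; based on the 192.168.71.* addresses mentioned below, each VM's /etc/hosts would contain something like this (the IPs here are placeholders):

```
192.168.71.10 master
192.168.71.11 Dschinghis
192.168.71.12 Khan
```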


I suggest also putting the same entries in your local /etc/hosts for convenience. Of course, replace the example 192.168.71.* IPs with your VMs' actual IP addresses.

    Then you'll be able to connect to your VMs via:

    > ssh

    > ssh

    > ssh

    > ssh


Remember that if you use the same hostnames as I do across the different DevOps tool scenarios, you'll want to delete the VM entries from your ~/.ssh/known_hosts file so you don't get warnings when trying to log into the VMs.

    Each DevOps tool requires certain ports to be open. I've already verified that all ports are open on the Ubuntu VMs with:

    > iptables --list --numeric


Use the --numeric flag since you just want the IP addresses and don't want to do hostname resolution for the IPs.

    Chain INPUT (policy ACCEPT) target prot opt source destination

    Chain FORWARD (policy ACCEPT) target prot opt source destination

    Chain OUTPUT (policy ACCEPT) target prot opt source destination

    That output shows that all the ports are open.

    Reference about iptables here


    sudo / root

Because these are throw-away VMs, we'll just be running everything as root. When we come across instructions in the DevOps tool docs that suggest using sudo, we'll simply drop the sudo from the commands we run. Naturally, in production you should use a more secure setup (like sudo with limited-privilege users).


    Important reminder

    Now the lab is ready to start!

Just remember that when you destroy and rebuild your VMs in order to run the different DevOps tools, you'll need to:

    1. Clear the relevant entries in your ~/.ssh/known_hosts file
    2. Update your local /etc/hosts file with the new IP addresses
    3. Update the /etc/hosts file on each server with the new IP addresses

If you only do a snapshot-and-restore on your VM rather than a destroy-and-rebuild, then the IP addresses will be the same and you can skip steps #2 and #3.

A very simple and easy-to-manage virtual lab. And this is it for now. Stay tuned!

Click here to see the initial post

    DevOps: Ansible, Chef, Puppet, SaltStack, Docker in 30 min.


    The aim of this new section of the blog is to provide you with a quick tour of what it's like to work with the most popular Configuration Management tools like Puppet, Chef, Salt, and Ansible.

Using one of these tools is a huge win for your productivity, speed, and sanity.

    Exactly why is using a CM tool so powerful? Well, without one, your systems are destined for chaos. Chaos in your systems will leak slowness and misery throughout your company. It will start to feel like you've been cursed because everything becomes so difficult and bad luck seems to pervade everything. You would think you had built your company on some ancient evil burial ground.

    Sample Project

    I'll walk you through an identical sample project - a Dschinghis Khan website - with each of the four tools. The idea is that you should be able to set up each tool with the sample project in under 30 minutes.
    I only used the official documentation and whatever I could find via Google.


    This isn't a deep exploration of these tools. Instead, I aim to give you a great head start by saving you the weeks of research you might have spent trying out the tools in order to choose one. If you can quickly choose a tool, you can get on with the business of making your systems more awesome.

    This is not a bar brawl where we pit the tools against each other - it's more like a wine tasting.
    I've found the people who built these tools to be universally excellent, generous, kind people. They've helped create a wonderful DevOps community that is good-humored and generally free of the negativity you might see in other tech communities.

    That's not to say that they aren't true competitors. Each tool has a venture-backed company behind it. They absolutely are competing - but it's very much friends competing with friends rather than some kind of bitter war.

    Keeping a great vibe in the DevOps community is essential to attracting people from the biggest competitor: Using No CM Tool At All.

    Fundamental Differences

    Before we get started, there are a few differences that deserve covering here.

    Directive Ordering

    In Ansible, Chef, and Salt the directives run in the exact order you put them in.
    However, Puppet doesn't run the directives in the order you would expect. Instead, it compiles all your directives into its own internal ordering. That means you need to explicitly define dependencies between the directives in order for them to run in the correct order.
    There are academic arguments on why this is a feature, though in practice it can be a real pain when dealing with systems that have much complexity. In the Puppet post I will discuss this more and give some examples.
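    As a quick taste of what an explicit dependency looks like, here's a tiny hypothetical manifest; without the `require`, Puppet is free to try managing the config file before the package that provides its directory is installed:

```puppet
package { 'nginx':
  ensure => installed,
}

file { '/etc/nginx/nginx.conf':
  ensure  => file,
  content => "worker_processes 1;\n",
  require => Package['nginx'],  # explicit dependency: install nginx first
}
```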

    Directives Language

    Ansible and Salt use the standard YAML format and the simple Jinja2 templating language (which really is super simple, despite the ominous-sounding name). Both are easy to learn. They make Ansible and Salt highly accessible for developers of all languages. Chef uses Ruby with an extended DSL (domain specific language). While this is very powerful and convenient for Rubyists, it makes things a little more challenging for non-Ruby developers. Fortunately, Ruby is a simple elegant language that is easy to learn. Puppet uses its own custom configuration language. It's not difficult, but does add to the learning curve.
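    For a flavor of the YAML + Jinja2 style, here's a hypothetical Ansible task (Salt states look very similar); `site_name` is an invented variable:

```yaml
- name: Put the site's index page in place
  template:
    src: index.html.j2
    dest: "/var/www/{{ site_name }}/index.html"
```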

    Master / Children nodes setup


    Ansible has the simplest setup and uses SSH to connect the children nodes. You only install Ansible on your master node (which can just be your laptop since Ansible just uses SSH to push the directive commands out to the children). There's no special client that needs to be installed on the children nodes.
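    To give a sense of how light that setup is, here's a sketch (hostnames are invented): the inventory is just a text file, and a single command confirms SSH connectivity to every child node.

```shell
# inventory.ini -- a plain-text list of children nodes (names are placeholders):
#   [lab]
#   web1.local
#   web2.local

# Nothing to install on the children; this just SSHes in and reports back:
ansible all -i inventory.ini -m ping
```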


    By default, Salt uses a standard Master / Children nodes setup. This requires installing a special client on each child node. Each child node pulls the directives from the master node and then runs the directive commands.
    Salt has recently added a version of an SSH push mode similar to Ansible's, so you now have the option of running Salt without having to install a special client on the children nodes.
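    That SSH mode is driven by a roster file instead of installed clients - a sketch with invented names and addresses:

```shell
# /etc/salt/roster -- maps node IDs to SSH details (values are placeholders):
#   web1:
#     host: 192.168.33.10
#     user: root

# No salt-minion needed on the child node:
salt-ssh 'web1' test.ping
```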


    Chef uses a fairly standard Master / Children nodes setup, but also adds the concept of a workstation node which interacts with the Master node. The workstation node is generally your local machine like your laptop or desktop.

    Chef has the most challenging setup of all the tools. Chef Software, Inc. sells a hosted master server solution which is free for 5 children nodes or fewer. Then the price is $120 for 20 children, $300 for 50 children, and $600 for 100 children (as of Oct 2015).

    I fully support Chef Software, Inc. making money on this, but their documentation is geared almost entirely toward their hosted solution, which makes it unnecessarily difficult to set up your own Chef master node. Due to security concerns, many companies will not want to use a hosted master node service, so good self-hosting documentation is essential, but it's lacking. The Chef master node also requires a good deal of RAM: of all the CM tools, only Chef ran out of memory when I installed the master node on a 512MB RAM server. Once I increased the RAM to 1GB, the installation scripts had enough memory to install successfully.

    As mentioned, Chef also requires a workstation node. Setting up the workstation is non-trivial and can be a pain, especially if you have to set it up for multiple engineers. Since you're not running commands from the master node directly, it adds extra steps to your regular workflow when running directive scripts against your children nodes.
    Because of these extra challenges in setting up Chef, many companies choose to bypass using a master and workstation node and just distribute their directive scripts to the children via some other means (Capistrano, git, etc.). They then run Chef in isolation on each child node. This is generally referred to as a "chef solo" setup.
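    For reference, a chef solo run needs only two local files - a sketch with hypothetical paths and recipe name:

```shell
# solo.rb tells chef-solo where the cookbooks live:
#   cookbook_path "/var/chef/cookbooks"

# node.json names the run list, e.g.:
#   {"run_list": ["recipe[mywebsite]"]}

# Runs entirely on the child node -- no master, no workstation:
chef-solo -c solo.rb -j node.json
```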


    Puppet has a standard Master / Children nodes setup. This requires installing a special client on each child node. Each child node pulls the directives from the master node and then runs the directive commands. Puppet ships by default with a web server for the master node that doesn't scale well, so they suggest installing a more robust web server (Apache, Nginx, etc.) to replace the default. It's yet another step you have to consider when using Puppet.

    Remote execution

    Remote execution is the ability to run commands against your children nodes. For example, if you wanted to find out the date/time setting for each child node, you would want a way to execute a command like date on all of your children nodes and receive the output without logging into each of them individually. Tools for this include Fabric, Capistrano, and Func.
    Ansible and Salt have robust, easy-to-use remote execution built-in and immediately available after installation.
    Chef has a tool called 'knife' that is used for many purposes including remote execution, but it can be challenging to configure and feels clunky compared to Ansible/Salt.

    Puppet doesn't have an included tool for this, but suggests using mcollective which can be difficult to install, configure, and learn.
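    To make the contrast concrete, here's the same ad-hoc `date` check in each tool's remote-execution syntax (assuming each tool is already installed and configured; the inventory file and node patterns are placeholders):

```shell
ansible all -i inventory.ini -a "date"  # Ansible: built in, works over plain SSH
salt '*' cmd.run date                   # Salt: built in, runs from the master
knife ssh 'name:*' date                 # Chef: knife's ssh subcommand
# Puppet has no built-in equivalent; mcollective fills this role once set up.
```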

    Up next

    We'll go through a super simple multi-server project that allows me to demonstrate some of the key features of the tools.
    Since we set up an identical system with each tool, it will give you a good taste of how each tool handles the job.
    I'll take you step-by-step through the exact commands and directives to implement the project. That way you can follow along and get a sense of how each tool works.

    The posts are ordered by learning curve, with the easiest tool first. While Chef was the most difficult tool to set up, I put it before Puppet because managing dependencies in Puppet substantially increases its learning curve.
    This section of the blog is a quick short-cut to experiencing these tools. Rather than digging through documentation and going down false paths, I've already done all that grunt work for you. I'll show you the easy way through and will warn you about the rough spots so you can avoid them.

    And this is it for now. In the next post I will show the pre-setup of the DevOps lab. Stay tuned!

    Click here to see the initial post
