We @ Spritle came across this wonderful new tool recently. Although it took some time to get a taste of it (at least to some extent), I soon realized the awesomeness of Docker. So I wanted to write a blog on what Docker is and a dummy's guide to using it (no offence intended to the readers ;). So what is Docker? Here's what Docker.io says,
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
– Docker.io
So basically, Docker is a tool to create and work with containers. Those who are familiar with the Linux kernel will know what a container is. For those who aren't, here's Wikipedia's definition of Linux Containers:
LXC (LinuX Containers) is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host.
– Wikipedia
Yes, LXC is a way to run mini operating systems inside your Linux host operating system. But don't we already have virtual machines for that? Most of you will be familiar with names like VirtualBox, VMware, etc. that allow you to run one operating system inside another. So what's different about containers? If you read that wiki article on LXC, you'll already have an idea. Containers are extremely lightweight compared to virtual machines. VMs run full-fledged operating systems on your host OS: if you want to run five VMs and each VM requires about 2 GB of disk space, you lose 10 GB of disk space. Containers, on the other hand, share the host operating system and just have isolated process spaces. And the best thing about containers is that they use a layered file system called AuFS, which means containers can share the common base layers between them. All this makes them much lighter than virtual machines. I've only given you a basic overview of the differences between LXC and VMs; if you're interested, a Google search will turn up loads of material.
Okay, now we know what containers are and what Docker helps us do. There's one more question on your mind: what's the use of Docker and containers? Here are three:
Throwable Sandboxes
Containers allow you to create simple, lightweight, safe and throwable sandboxes. If you aren't sure of something and are paranoid, you can simply create a container and try it out there safely. Once you are satisfied, you can just tear it down! There is no need to install VirtualBox, download an OS image and wait several minutes for it to boot. Neat, right?
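Just as a minimal sketch of that sandbox workflow (assuming you have already pulled the base ubuntu image, as shown later in this post):

[sourcecode]
# Start a throwaway interactive shell inside a fresh Ubuntu container
docker run -i -t ubuntu /bin/bash
# ...experiment freely inside, then type 'exit' to leave.
# List stopped containers to find the sandbox's ID
docker ps -a
# Tear the sandbox down for good (use your container's ID)
docker rm <container-id>
[/sourcecode]

Nothing you do inside the container touches your host system.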
Elegant Application Delivery
Containers allow you to package just about any application. You can add the application's dependencies into the container itself, saving your customers or users the time and frustration involved in finding and installing them. You can open ports on the container to let the application communicate with applications on the host. All this lets you streamline application delivery. For example, you could ship a container for MySQL: instead of installing MySQL on their systems, people can use your container, and their applications can connect to the MySQL running inside it. Of course, most people already have MySQL installed; this is just an illustration of how you could package an application in a container (see the sketch below).
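Here is a rough sketch of that MySQL example. The image name tutorial/mysql is hypothetical (you would build such an image yourself or pull one from the Docker Index), and the -p host:container port mapping assumes a Docker version that supports it:

[sourcecode]
# Run the (hypothetical) MySQL image in the background,
# mapping the container's port 3306 to port 3306 on the host
docker run -d -p 3306:3306 tutorial/mysql
# Applications on the host can now connect to 127.0.0.1:3306
[/sourcecode]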
Uniform Development and Production Environments
Containers allow super elegant deployment of your applications, with uniform environments across your development, test and production systems and across all your teammates' machines. Let's say you and your colleague are developing a Rails project. You have Ruby 2.0 and they have Ruby 1.9.3. You happen to use `%i(foo bar)` to declare a symbol array. Now when your colleague runs the project, they get a SyntaxError! But you know the syntax isn't wrong. After a few minutes you find that `%i` was introduced only in Ruby 2.0. Now you have to decide whether to change your code or make your colleague upgrade Ruby! 😀 Several precious minutes are wasted in the process. Docker comes to the rescue here: you just create a container with the environment for your new Rails project and let all your colleagues use it, so all your environments are exactly the same (a small sketch of sharing such an environment follows below). And moving over to the production side of things, you can simply load your Docker container onto your server and run the application; no need to install everything again and set up the environment on your production server. More and more cloud platforms like Rackspace, DigitalOcean, Linode, etc. are also adding support for deploying Docker containers. So it's a win-win for all!
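As a rough sketch of that shared environment (the image name tutorial/rails-env is hypothetical; you would build and share your own), every team member could work against the exact same Ruby:

[sourcecode]
# Pull the team's shared environment image from the index
docker pull tutorial/rails-env
# Run a command inside the shared environment,
# e.g. confirm that everyone is on the same Ruby version
docker run -t tutorial/rails-env ruby -v
[/sourcecode]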
Well, these are just some of the use cases. I'd like to allude to the popular indie game Minecraft! If you have played it, you would know that there is no single defined goal or objective in the game; the possibilities of what you can do in Minecraft are endless. Similarly, the things you can do with Docker and Linux containers are endless. Every day people are building amazing things with Docker. I'd like to share a project by Jeff Lindsay called Dokku. He has used the awesome and elegant Docker to create a mini Heroku! (For those who do not know what Heroku is, read this and then this.) Those who have used Heroku will know that building something like it is no joke. But Docker's API does much of the Herculean heavy lifting and lets you do all this really cool stuff.
Alright, it's all great! But these are actually the uses of Linux containers. What does Docker itself provide you? Well, I'd like to requote the definition at Docker.io, if you don't mind.
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
– Docker.io
So now it all makes very good sense. Let's look at how Docker works. Docker uses a client/server architecture: it runs a daemon on your system, and this process is responsible for creating and managing the containers. You interact with the daemon through the Docker CLI (command line interface), which provides a simple and comprehensive set of commands for creating and managing containers. But Docker is not only about creating and running containers on your own system; it also provides a way to share the containers you have created with the rest of the world. To facilitate this, the people behind Docker have set up a public index (the Docker Index) where you can push your containers and pull containers built by others. Docker makes containers reusable. Great, isn't it?
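One quick way to see the client/server split for yourself is the version command, which reports the version of the CLI client as well as that of the daemon it talks to (the exact output format depends on your Docker version):

[sourcecode]
# The client prints its own version, then asks the daemon for its version
docker version
[/sourcecode]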
Just to illustrate the elegance and ease of using Docker, let me show you a few of the commands Docker provides.
Let's say you want to pull an image from the Docker Index. The command you would use is docker pull. For example, to pull the base Ubuntu image you would use,
[sourcecode]docker pull ubuntu[/sourcecode]
In order to run a process or command in a container, you use the docker run command. The usage syntax for the run command is,
[sourcecode]docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG…][/sourcecode]
Let's say you want to install the cURL package inside your base ubuntu image. You would use,
[sourcecode]docker run -t -i ubuntu apt-get install curl[/sourcecode]
The -t option tells the run command to allocate a pseudo-terminal so you can see the output, and the -i option lets us interact with the command if needed. We are using the -i (interactive) option because apt-get install asks us for confirmation. If you want a non-interactive or unattended install instead, you could add the -y option to apt-get (see below). Great, now we have cURL installed in our ubuntu base image.
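For reference, the unattended variant mentioned above would look something like this; apt-get's -y flag answers yes to its prompts, so the -i option is no longer needed:

[sourcecode]
# Non-interactive install: apt-get -y assumes 'yes' to all prompts
docker run -t ubuntu apt-get install -y curl
[/sourcecode]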
Let's use cURL now, say, to fetch www.google.com. We use docker to run the curl command as follows.
[sourcecode]docker run -t ubuntu curl www.google.com[/sourcecode]
And you get 'Unable to locate curl'. We just installed cURL on ubuntu. Why isn't it working then? The answer lies in the way Docker saves changes. Whenever you make changes, they are not stored in the base image. Instead the change (only the change!) is stored in a new layer, and each of these is given an ID. So what you have to do is run the curl command on the layer where cURL was installed. How do you find the ID of the one that stored your cURL installation? For that we use the ps command. Run this.
[sourcecode]docker ps -a[/sourcecode]
This will give you a list of containers, their IDs and the commands that were run in them. At the top of your list you will probably see the container with the command `curl www.google.com`. This is the one created when you unsuccessfully ran curl on ubuntu (hence the exit status 127 [command not found]; a successful run would have exit status 0). The one you need is the container where you installed cURL; its command will read 'apt-get -y install c' or something very similar. In order to use it for subsequent commands, we need to save it, or in other words, commit it. You use the docker commit command for that, passing the ID and a name for the resulting image as arguments. You need not copy the entire ID string; the first 3 to 4 characters are enough. As for the name, the format is <top level name>/image-name, e.g. tutorial/curl. Let's go ahead and commit our work.
[sourcecode]docker commit 5c51 tutorial/curl[/sourcecode]
(Use your ID in place of 5c51.)
Great! You've committed the image with the name tutorial/curl (or whatever name you gave). When you run docker images, you will see your image in the list of available repositories. I've used the word repositories here: a repository is a collection of images. So in our repository we have the base ubuntu image and then the image we got by installing cURL on top of it. Thus we have a layered system. Now let us run the curl command on our new image.
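If your Docker version ships the history command, you can actually see this layering; it lists the layers an image is built from (output format varies by version):

[sourcecode]
# Show the layer history of our committed image
docker history tutorial/curl
[/sourcecode]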
[sourcecode]docker run -t tutorial/curl curl www.google.com[/sourcecode]
Did you get the google.com HTML response with the 301 message? Great, it works! By the way, you don't get the actual google.com HTML because Google redirects you to your regional Google website. A browser would automatically interpret a 301 redirect and follow it, but since we are using cURL we don't have that luxury.
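If you do want to follow the redirect, cURL's -L flag tells it to do so:

[sourcecode]
# -L makes cURL follow the 301 redirect to the regional page
docker run -t tutorial/curl curl -L www.google.com
[/sourcecode]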
I think you now have a good idea of what Docker is and how to use it. Now is a good time to read this awesome page at Docker.io. Docker.io also provides simple tutorials on how to use Docker to create and manage containers and to push them to the Docker Index, which is a very good place to start learning. They also have an interactive tutorial. Go ahead, start using Docker!
Hope this post kindled your interest in Docker. In a subsequent post we’ll see how to setup a container with a solid Rails development environment on Ubuntu 12.04 using something called a Dockerfile.