A good idea often has many parents, and so it is with Docker.
Docker looks like a trendy, shiny new piece of technology and a required part of a software developer's vocabulary, to be used liberally.
But the idea behind Docker - containerisation - is not new, and the same concept is used even in proprietary software. Docker does appear to have the most accessible implementation to date, though.
Their logo, a whale loaded up with containers, is a perfect image for the company and its main product. Those of us who have used Solaris may recognise these 'containers' as 'zones',
which have been available in Solaris for some time. Mainframe people may even see a glimmer of IBM MVS.
Let me describe the main idea behind Docker.
For maybe ten years, people have been using virtual machines (VMs). Each VM contains a complete copy of an operating system (OS). This is flexible, in that each VM can run a different
OS and be completely isolated from all other VMs running on the same server. The drawback is that, on the host, we carry the overhead of a whole OS for each VM.
The Docker concept is not to repeat the OS for each container. Containers share the host OS in an isolated way via the Docker layer. This results in significantly less overhead, to the extent that
we can install a single service in each container, architecting the application as a collection of micro-services.
Using VMs, this would not be practical. If we had a service per VM on a single server, the overhead of running a guest OS for each VM would make inefficient use of server resources.
In a VM architecture we would use a system-container style, with all the services of an application running in a single VM. In a Docker architecture, we can afford to have a service per container.
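As a rough sketch of how that looks in practice (the image names below are hypothetical, apart from the official redis image), each service of the application gets its own lightweight container on one host:

    docker run -d --name web    mycompany/web:1.0      # front-end service
    docker run -d --name worker mycompany/worker:1.0   # background job processor
    docker run -d --name cache  redis:2.8              # official Redis image
    docker ps                                          # three containers, one shared host kernel

Each container starts in a fraction of a second and carries only the service and its dependencies, not a full guest OS.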
Docker appears to be a godsend for application development companies that are moving to the cloud. In the 'old' days (a couple of years ago), the best we could do was to use scripting tools such as Puppet to
configure servers (VMs) to meet our requirements, whether in the cloud or in a real data centre. With Docker, we can specify exactly the contents of a container, down to the exact versions of the required packages in each container.
This can be important in open source projects, where we can easily have services that depend on different versions of OS packages. When using VMs, it is tricky to provide different package
versions to each service in the same VM. In the Docker case, all the containers share the same kernel, so if a service required a specific kernel patch, all containers would have to be happy with that change or be moved to a different host.
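That shared kernel is easy to see for yourself: containers built from different distributions still report the host's kernel version. A quick check using public base images:

    docker run --rm ubuntu:14.04 uname -r   # prints the host kernel version
    docker run --rm centos:7 uname -r       # same kernel, different userland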
Docker makes deploying solutions from development to production - especially in the cloud - a lot more reliable. The container build instructions are scripted in a Dockerfile, so every aspect is defined in a stable and auditable description.
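As an illustration, a Dockerfile for a single service might look something like the sketch below; the package version and paths are hypothetical, but the point is that they are pinned, versioned and reviewable like any other source file:

    # Build an image for one micro-service with a pinned package version
    FROM ubuntu:14.04
    RUN apt-get update && \
        apt-get install -y python=2.7.5-5ubuntu3   # hypothetical pinned version
    COPY app/ /opt/app/
    CMD ["python", "/opt/app/service.py"]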
Other tools also apply this motif to make better use of limited resources. An example is Oracle's 12c multi-tenant option, where an Oracle container database instance is able to host multiple pluggable Oracle databases.
Previously, each of the databases would require its own instance, with hundreds of processes and a full complement of memory for caching.
This made it difficult to run several databases on one server. With the introduction of multi-tenant, we can share one instance between what appear to be multiple isolated Oracle databases.
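As a rough illustration (the database name, password and file paths here are made up), a new pluggable database can be created from the seed inside an existing container database:

    -- Create and open a new pluggable database inside the container database
    CREATE PLUGGABLE DATABASE salespdb
      ADMIN USER pdbadmin IDENTIFIED BY a_password
      FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                           '/u01/oradata/cdb1/salespdb/');
    ALTER PLUGGABLE DATABASE salespdb OPEN;

All the pluggable databases share the background processes and memory of the one container instance, which is the same resource-sharing idea Docker applies to the OS.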
At my company we leverage these two exciting technologies to provide increased reliability, speed of deployment and performance. We are able to apply these tools in the cloud and in traditional customer-hosted data centres.