Docker
Virtualization to Containerization
If you haven’t heard about Docker already, it’s only a matter of time. Docker and Kubernetes are the modern-day container systems for packaging, running, and managing your own applications, or more importantly a team’s or company’s applications. But first, how did we arrive at Docker?
History!
In the beginning…
Back in the day, large corporations would buy up physical servers and store all their data on good old-fashioned hardware. While necessary, this setup was inefficient. For starters, each operating system needed its own physical server, each application would typically get assigned its own server with extra headroom to compensate for growth, and scaling meant building vertically (adding more power to the same machine). This process often forced companies to over-allocate costly servers just to ensure they could scale over time.

Along comes VMware, a tech company with the promise of a new solution: virtualization, an innovative idea that involved “virtualizing” the hardware. Instead of buying another physical server for each operating system, VMware’s hypervisor sat on top of the hardware and carved up a single server’s resources to run different OSes on the same machine. These are referred to as virtual machines. I’ve made an infographic below to help visualize this shift.

Virtualization was a great idea. It allowed companies to run smaller applications alongside each other on the same server, in isolation. However, it was still expensive. For starters, each operating system still needs its own kernel (the program that facilitates interactions between hardware and software). RAM allocation for each virtual machine was also considerable.
Enter Docker…
In 2013, Docker was released to the world. Where a hypervisor was previously installed to manage multiple operating systems on the hardware, Docker is much less invasive. Quickly installed on top of your host operating system, Docker is lightweight and simply runs your different applications in containers (hence the whale-and-cargo logo). Don’t let the word “containers” fool you, though. These are still powerful, incredibly fast environments for your applications to run in. They also come with the added benefit that you don’t need to worry whether your application will run on someone else’s computer, because Docker deploys portable images. More on that below.

Let’s get familiar with the Docker lingo.
Images - the name is misleading, but I like to think of these as a captured instance of your application. Images are basically your application’s bare bones, or blueprint, bundled up into a ready-to-ship package. They must contain all the dependencies necessary to run your application. We call these dependencies and parts of the image “layers,” because they are layered on top of one another and then shipped out as a completed image that can be pulled down from a registry. Just to clarify, the image is not your entire full-fledged application, but rather the minimal instructions and files it needs to run. These images can then easily be shipped out to multiple servers to run the same configuration. People often reuse common layer stacks that work well together, like the “LAMP stack” (Linux, Apache, MySQL and PHP), to start up a container.
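To make layers a bit more concrete, here’s a minimal sketch of a Dockerfile, the file Docker reads to build an image. The Node.js base and the file names here are hypothetical examples, not something from a specific project:

```dockerfile
# Each instruction below adds one layer to the finished image.
FROM node:20-alpine           # base layer: a tiny Linux plus the Node.js runtime
WORKDIR /app                  # set the working directory inside the image
COPY package*.json ./         # copy the dependency manifests first (better layer caching)
RUN npm install               # install the app's dependencies as their own layer
COPY . .                      # copy in the application source code
CMD ["node", "server.js"]     # what to run when a container starts from this image
```

Building it into an image is then one command, for example `docker build -t myname/my-app:1.0 .` (the image name is made up).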
Containers - are running instances of those images: the actual application, produced from the image and the dependencies it pulled in. You can think of the image as the directions for scaffolding, or the recipe and ingredients for baking a cake; the container is the actual cake.
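As a rough sketch, reusing the hypothetical image name from above, baking the cake looks like this:

```sh
# Create and start a container from the image; the image itself is untouched
docker run -d --name my-app -p 3000:3000 myname/my-app:1.0

# See the containers that are currently running
docker ps

# Stop and remove the container; the image stays on disk, ready to run again
docker stop my-app
docker rm my-app
```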

Registry - is a private or public place you can pull images down from and run them, similar to the npm registry for JavaScript packages. Inside these registries you will find repositories.
Repositories - house all the versions of a particular image, stored there and identified by tags.
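For example, Docker Hub is the default public registry, and nginx is one of its public repositories; each version of the image lives there under a tag:

```sh
# Pull a specific tagged version of the nginx image from its repository
docker pull nginx:1.25

# Pull whatever the repository currently marks as "latest"
docker pull nginx:latest

# List the versions you now have locally
docker images nginx
```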
Docker distributions - there are several base images you can run to get common operating systems. You may hear someone say they are running “Alpine,” which is a common, lightweight Linux distribution.
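If you want to poke around one of these yourself, this runs a throwaway Alpine container and drops you into its shell:

```sh
# Start an interactive Alpine Linux container and delete it when you exit
docker run --rm -it alpine:3 sh
# Inside, try: cat /etc/os-release
```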
Hopefully this was a helpful overview to prep you for Docker and get you into the containerization mindset. As always, let me know if you have any questions.