Docker is a computer program that performs operating-system-level virtualization, also known as "containerization". It was first released in 2013 and is developed by Docker, Inc.
Docker is used to run software packages called "containers". In a typical example use case, one container runs a web server and web application, while a second container runs a database server that is used by the web application. Containers are isolated from each other and bundle their own tools, libraries and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and are thus more lightweight than virtual machines. Containers are created from "images" that specify their precise contents. Images are often created by combining and modifying standard images downloaded from repositories.
Source: Wikipedia
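To make the web-application-plus-database example concrete, the two containers could be started from the Docker command line roughly like this (the image name "my-web-app" and the password are placeholders):

    # create a network so the containers can reach each other by name
    docker network create appnet

    # database container, from the official mysql image on Docker Hub
    docker run -d --name db --network appnet -e MYSQL_ROOT_PASSWORD=secret mysql:8

    # web application container ("my-web-app" is a hypothetical image)
    docker run -d --name web --network appnet -p 80:80 my-web-app

Each container does one job, and they talk to each other over the well-defined channel of the shared network.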
An image contains several parts: the application itself, the libraries and tools it depends on, and its configuration files. This is a major difference from a standard virtual machine: the whole application is ready to be used. No setup is required anymore, just run it.
When you have an application that needs a complete stack, like the LAMP stack, you can combine images into one application. Each image should have only one task. This allows you, for example, to upgrade the database without touching the web server.
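One common way to combine such single-task images is a Compose file. A minimal sketch for a LAMP-style stack, with illustrative names and a placeholder password:

    services:
      web:
        image: php:8-apache
        ports:
          - "80:80"
        depends_on:
          - db
      db:
        image: mariadb:11
        environment:
          MARIADB_ROOT_PASSWORD: secret   # placeholder
        volumes:
          - dbdata:/var/lib/mysql
    volumes:
      dbdata:

Upgrading the database then comes down to changing the mariadb tag and recreating only that service; the web service is untouched.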
Each image comes with its own version number, called a "tag". You can either deploy a specific version, or deploy the "latest" version. When you have deployed "latest", pulling the image again and recreating the container will start the newest version instead of the one you just stopped. If you have a lot of servers running the same software, containers take away a major part of the upgrading process.
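In command-line terms, the difference between pinning a version and following "latest" looks roughly like this ("myapp" and its tags are hypothetical):

    # deploy a specific, pinned version
    docker pull myapp:2.4.1
    docker run -d --name app myapp:2.4.1

    # or follow "latest": pull again and recreate the container
    # to pick up the newest build
    docker pull myapp:latest
    docker stop app && docker rm app
    docker run -d --name app myapp:latest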
Using volumes, you can keep the data a container works with on the host, outside the container itself. An image does not have a pre-allocated amount of disk space like, for example, a VMware image. A standard image can be as small as 1 GB, operating system, libraries, applications, ... all included. For downloading, this is a breeze.
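A minimal sketch of both flavours of volumes, with illustrative paths and names:

    # named volume managed by Docker; the data survives container removal
    docker volume create dbdata
    docker run -d --name db -v dbdata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:8

    # or bind-mount a host directory straight into the container
    docker run -d --name web -p 80:80 -v /srv/site:/usr/share/nginx/html:ro nginx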
An image is created using scripting. This means that you can test and validate the image. The next container will be exactly the same, as it is based on the same image.
When you need a new version of the image, you take the original script, make your modifications and build it again. No human interaction is needed when creating the image.
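That script is the Dockerfile. A minimal, hypothetical example for a static site, with the matching build command:

    # Dockerfile: start from an official base image
    FROM nginx:1.25
    # add the site's files (the ./site path is illustrative)
    COPY ./site /usr/share/nginx/html

    # build it, and rebuild it after every modification, with:
    #   docker build -t my-site:1.0 .

The same Dockerfile always produces the same kind of image, so nothing depends on a human clicking through an installer.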
An image is typically layered. For example: a first layer with the base operating system, a second layer with the runtime (such as a JVM), and a third layer with the application itself. Docker is intelligent enough to detect where the changes are in these layers, so when you deploy a new version of your application, Docker only needs to download the third layer. This optimizes downloads and deployment time even more.
This website runs in a Docker container, of course. The container uses several layers, one of them being the image with Oracle's JVM. In the future, all images built on the image with Oracle's JVM only need to download the layers that follow it.
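This layering falls directly out of the Dockerfile: each instruction adds a layer, and images that share the same FROM line share those base layers. A hypothetical sketch, using the official openjdk image as a stand-in for a JVM layer:

    # base layers: operating system + JVM
    FROM openjdk:17
    # application layer: only this layer changes between releases
    COPY build/app.jar /opt/app.jar
    CMD ["java", "-jar", "/opt/app.jar"]

You can inspect the layers of any image with "docker history <image>".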
When you're running an application, most likely you will have some failover solution for when one server goes down. Kubernetes offers a way of scaling your application over multiple instances of an image.
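A minimal sketch of what that looks like in Kubernetes: a Deployment that keeps three replicas of the same image running (names and the image are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3            # three instances of the same image
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: my-web-app:1.0   # hypothetical image
            ports:
            - containerPort: 80

If one instance, or the server underneath it, goes down, Kubernetes starts a replacement to get back to three.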
At this moment, Asynchrone is not yet offering services on Kubernetes. Asynchrone only offers services when there is enough in-house expertise, and currently this is not yet the case with Kubernetes.
Creating an image is coding: it is basically writing a source file that describes what the image will be. Coding is what Asynchrone does best.
Often the "official" images created by the vendors provide the basic solution. For example: one webserver with one database for WordPress. However, in a real production environment you will most likely want this solution with a load-balancer, multiple webservers and a clustered database with monitoring using Elastic and Kibana. Asynchrone can help you build this kind of production solutions.
Asynchrone owns the domain name dockered.com and the username "dockeredcom" on both GitHub and Docker Hub. In the future, Asynchrone will publish its open-sourced images on GitHub (sources) and Docker Hub (images). The website https://www.dockered.com will be used for the commercialized images.