I recently decided to finally learn Docker. This post is a summary of the three online courses I took.
Why Docker? The answer is apps. In 2008, Apple launched its App Store with 552 apps. Today, there are around 5 million apps to download between Google's and Apple's stores. Apps have changed how business is done and digitalized our lives. We expect apps to be always running and responsive. That's where Docker excels: application containerization and deployment.
According to the official website, Docker is an open platform for developing, shipping, and running applications in loosely isolated environments called containers. Before Docker, app deployment relied on virtual machine technology, which virtualizes the server hardware. Since each virtual machine carries a full copy of a guest operating system (OS), VMs incur a lot of overhead beyond what your application logic actually consumes. Docker, by contrast, virtualizes at the OS level: it runs several containers side by side that share the kernel of the host machine, which makes containers much more lightweight and faster to start than VMs.
Docker Architecture Design
Docker uses a client-server architecture. As the image below shows, the client sends instructions to the Docker daemon through a REST API. The daemon is part of the Docker Engine, which does the heavy lifting of building, running, and distributing your containers. The daemon can run on your local computer or on a remote machine in the cloud. The instructions for assembling an application are specified in an image, which becomes a container at run time. If an image is not available locally, Docker will search for it and pull it from a registry such as Docker Hub.
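You can see this client-server split for yourself with the `docker version` command, which reports the client and the daemon (server) separately. A quick sketch, assuming Docker is installed and the daemon is running (`remote-host` is a placeholder, not a real machine):

```shell
# Show the client and the server (daemon) versions separately;
# the two components can even live on different machines.
docker version

# Point the same client at a remote daemon instead of the local one
# (remote-host is a hypothetical example; 2375 is the conventional
# unencrypted daemon port).
docker -H tcp://remote-host:2375 version
```

Because the client only talks to the daemon over the API, every command in the rest of this post works the same whether the daemon is local or remote.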
What’s an image?
An image consists of layers of read-only files and relies on a JSON file called the image manifest to record the order of those layers. When you modify a file, Docker copies it and creates a new layer on top of the old ones, leaving the previous layers untouched. When rebuilding an image, Docker reuses the cached layers, which keeps the whole process fast and lightweight. At run time, an image becomes a container.
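To make the layering concrete, here is a minimal hypothetical Dockerfile (the file names are only examples). Each instruction produces its own read-only layer, so if you edit only the application code, Docker rebuilds just the layers from that point on and reuses the cached ones above:

```dockerfile
# Base image layer
FROM python:3.11-slim

# Dependency layers: cached as long as requirements.txt is unchanged
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application layers: editing app code invalidates only these layers,
# so rebuilds skip the (slow) dependency install above
COPY . .
CMD ["python", "app.py"]
```

This is why Dockerfiles conventionally copy dependency manifests and install dependencies before copying the application code.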
Key concepts of containers
- Each container is isolated from the rest. When you stop a container and start a new one from the same image, the changes you made are gone. To persist data or share files between containers, you need to use volume mounting.
- Each container has its own network. If not specified, the default bridge network (backed by the docker0 interface) is used. To access the application inside a container, you need to publish the container port the service listens on and map it to a port on the host.
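A sketch of both ideas, using the official nginx image as a stand-in for any app (the volume name, paths, and ports are only examples):

```shell
# Volume mounting: data written to /data inside the container
# lives in the named volume and survives container restarts.
docker volume create mydata
docker run -d --rm --name web -v mydata:/data -p 8080:80 nginx

# nginx listens on container port 80; -p 8080:80 maps it to
# host port 8080, so the app is reachable at localhost:8080.
curl localhost:8080

docker stop web
```

Without the `-p` flag, the container would still run, but nothing on the host could reach port 80 inside it.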
Ready for some practice?
Here are the steps to deploy a simple web application on your local computer. To install Docker, please follow the instructions on the Docker Get Started page.
- First, open your terminal and run `git clone https://github.com/rafaelymz/streamlit-k8s.git`.
- Then move into the cloned directory and type `docker build -t st .`. The `-t` flag stands for tag: here we are building an image and tagging it `st`. The trailing `.` tells Docker to use the current directory as the build context.
- Now you can type `docker run -d --rm -p 8501:8501 st`. The `-d` flag means detached mode (run the container in the background), `--rm` removes the container once it stops, and `-p 8501:8501` publishes port 8501 inside the container on host port 8501.
- Now open your browser and go to `localhost:8501`. You should see the app running, as the picture below shows.
- Finally, run `docker stop $(docker ps -q)` to stop the running container and free up resources.
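Put together, the steps above fit in a short shell script. This is just a sketch and assumes Docker is installed and the daemon is running:

```shell
#!/bin/sh
set -e

# 1. Get the example app
git clone https://github.com/rafaelymz/streamlit-k8s.git
cd streamlit-k8s

# 2. Build the image from the current directory and tag it "st"
docker build -t st .

# 3. Run detached, remove the container on stop, publish port 8501
docker run -d --rm -p 8501:8501 st

# 4. Open localhost:8501 in your browser; when you are done, stop
#    the running container(s):
# docker stop $(docker ps -q)
```

Note that `docker stop $(docker ps -q)` stops every running container on the machine, not just this one; on a shared host you would pass a specific container name or ID instead.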