Docker I - The introduction for beginners | ~/sysdogs | Skillshare


Docker I - The introduction for beginners

Teacher: ~/sysdogs



Lessons in This Class

72 Lessons (2h 24m)
    • 1. I. Introduction

    • 2. II. Agenda & requirements

    • 3. II. Agenda & requirements

    • 4. III. Basics

    • 5. III. Basics - Definitions

    • 6. $ docker desktop

    • 7. $ docker daemon - installation

    • 8. $ docker info

    • 9. $ docker run

    • 10. $ docker run (5x)

    • 11. $ docker run -d hello-world

    • 12. $ docker run -d nginx

    • 13. $ docker run -it nginx bash

    • 14. $ docker run --rm

    • 15. $ docker exec

    • 16. $ docker pull

    • 17. $ docker start / stop

    • 18. $ docker start / stop

    • 19. $ docker start / stop

    • 20. $ docker diff

    • 21. $ docker rm

    • 22. $ docker cp

    • 23. $ docker container prune

    • 24. $ docker inspect

    • 25. III. Basics - The problem

    • 26. IV. Buildtime

    • 27. $ docker tag

    • 28. $ docker rmi

    • 29. $ docker build .

    • 30. $ docker build -t centos-unzip .

    • 31. $ Dockerfile - EXPOSE

    • 32. $ Dockerfile - LABEL

    • 33. $ Dockerfile - RUN

    • 34. $ Dockerfile - \

    • 35. $ Dockerfile - ARG

    • 36. $ Dockerfile - COPY

    • 37. $ Dockerfile - ENTRYPOINT & CMD

    • 38. $ docker load -

    • 39. $ docker history

    • 40. $ docker history

    • 41. $ Dockerfile - Python

    • 42. V. Runtime - Isolation

    • 43. $ docker logs -f

    • 44. $ docker run --cap-drop

    • 45. $ docker run --memory

    • 46. V. Runtime - State

    • 47. $ docker volume create

    • 48. $ docker volume usage

    • 49. $ docker volume directory

    • 50. V. Runtime - Config and secrets

    • 51. $ secrets.txt

    • 52. $ docker run -e MYSQL_EMPTY_ROOT

    • 53. $ docker run -e MYSQL_GENERATED_ROOT_PASSWORD

    • 54. $ docker run -e MYSQL_USER

    • 55. VI. Networking

    • 56. $ docker network ls

    • 57. $ docker network none

    • 58. $ docker network host

    • 59. $ docker network create bridge

    • 60. $ docker network create host

    • 61. $ docker network create null

    • 62. VII. Compose

    • 63. $ docker-compose up

    • 64. $ docker-compose.yaml

    • 65. $ docker-compose volume

    • 66. $ docker-compose flask

    • 67. $ docker-compose down / up

    • 68. $ docker-compose mysql

    • 69. $ docker-compose network

    • 70. $ docker-compose locust

    • 71. VIII. FAQ

    • 72. $ docker host security






About This Class

This training is an affordable way to get not only acquainted, but also comfortable, with Docker - the most popular modern container solution on the market.

Attendees will:

  • Learn the fundamental theory related to containers.
  • Learn all of the necessary definitions.
  • Understand the basic characteristics of containers.
  • Learn how and when to apply containers in their work.

A hands-on approach is the key here - each course chapter provides precisely tailored practice laboratories, which allow participants to personally put their newly acquired knowledge to the test, gather precious experience, and properly understand and further solidify their new skills.

After this course, you will have fundamental knowledge of containers, and you will know how to use them in your projects immediately.




1. I. Introduction: Welcome to Docker I - The introduction to proper container workflow. My name is Kamil Zabielski, I'm the CEO of sysdogs - a company with a huge focus on high-quality DevOps and DevSecOps services. And I'm honoured to be your host. This training is a very comprehensive guide to the containerized world, dedicated to beginners. We will take you through all of the intricacies of containers, taking you from the ground up and giving you very solid container fundamentals. We will cover security, scalability, container characteristics, ephemerality - which means that containers tend to disappear - and much more. This training is one hundred percent pure practice, laboratories, and knowledge, so do not try to take everything in at once. After this course, you will be fluent with Docker containers. You will understand the terms stateless and stateful applications. You will know the responsibilities of a registry, a container image, and a volume, and understand the use cases for containers from the developer and operator perspectives. Even though this training is for beginners, there are a few requirements you have to meet before joining this course to get the most value out of it. Let us discuss the agenda and the training prerequisites. 2. II. Agenda & requirements: Although we said we will transform you from a person with zero knowledge into someone with a deep understanding, knowing basic Linux system terms and being fluent in the terminal is extremely important. We will not cover basic definitions native to Linux systems, such as what a service is, and we will not say anything about basic commands like apt or yum. We will also not explain basic networking terms, like the differences between the TCP and UDP protocols, or VLANs. Knowing all of these will give you the most value from this specific training, and we highly recommend you acquire this knowledge before taking part. 3. II.
Agenda & requirements: In the basics, we will discuss the fundamentals of the containerized world, covering the most important definitions: the Docker daemon, the Docker client, the container image, the container registry, the volume, and the container per se. We will point out the most important differences between virtual machines and containers, and we will explain the problems that container technology is meant to solve. We will dive into practice straightaway, starting from the first chapter. In Buildtime, we will talk a lot about the container image. We will ask the question: What is an image? Both from a technical and a practical perspective. Focusing on the Dockerfile and its practices, we will discuss the possibilities for building an image, with examples in Python, Go, and Java. We will also build an image with a completely different tool, like Packer. Next is Runtime. Container runtime characteristics are extremely important, fundamental knowledge. We will cover ephemerality, the difference between stateless and stateful applications, and explain isolation layers, control groups, namespaces, and capabilities. Then we will dive into configuration management and injecting secrets into your application. In the networking chapter, we will discuss each of the basic network drivers: bridge, none, host, and overlay. We will show the differences between them, and we will show you how to interact with them and use them in your projects. In Compose, we will show you how to containerize whole projects. We will build a stack of dependencies based on a real container-based, tiered web application. We will cover the Docker Compose tool and its file, point out the most common problems with it, explore the real problem of dependency hell, and discuss possible ways to avoid it. In the FAQ chapter, we will try to answer the most frequently asked questions, such as: "Why does running sshd or crond mostly not make any sense?"
Or: "How can I SSH into the container?" We will show you sample use cases for Docker and how to use Docker in your project. We'll also give a very short introduction to the security of the host. 4. III. Basics: In the basics, you will gain a primary understanding of the basic definitions. We will present the problems that containers are meant to solve, go through the whole necessary installation process, and guide you through the very first steps. Let's dig into the biggest advantage of containers - operating system support. It is often said that the biggest advantage of containers is that Docker can be run on literally any operating system: from macOS and Windows, using the Docker Desktop for Mac and Windows solutions, to its native environment, Linux. There are many differences between the internals of these workflows, and throughout this training we will build on and focus on the Linux-based solution. You can still replicate almost all of the laboratories on the operating system of your choice if you wish. Why Linux? Because understanding the installation process and the internals results in a far more efficient workflow. In the end, we want you to dig into practice and into the internals. Let us explain the difference between virtual machines and containers. Understanding this part is extremely important, even crucial, for the next lessons. In a standard environment, the lowest layer is always the physical infrastructure. On top of the physical infrastructure, you will find the hypervisor operating system. The hypervisor consumes resources only for its operational work. The responsibility of the hypervisor is to run VMs. These machines run their own kernel and consume their own resources - CPU and RAM - just for their operational work. On top of them, you have the dependencies, the libraries that are necessary to run the very specific application.
And then, last, we have the business value - the application. It is extremely easy to see that the guest operating system is just overhead on top of the business value. If we need more and more instances of the application, we need more and more virtual machines; therefore the overhead grows and, as a natural consequence, we waste more money and more resources. Moreover, the configuration of the virtual machines is, of course, another pain: one virtual machine has a completely different set of libraries and dependencies than another. The solution for this overhead is... containers. We no longer need a guest operating system - our application just runs on top of the Docker engine. As we are able to put more and more containers onto one machine, we have to find a way to scale the resources of the VM per se. And then we have the hybrid environment. We can still use virtual machines as the abstraction layer, and the way to scale resources is just to scale the VM. There are a few things you have to remember. As you can see, app1 and app2 run on top of the Docker engine. With containers, we should not care which VM they run on top of; we should not care whether an application runs on VM1 or VM2. The next point is that app5 and app4 run on top of the Docker engine and, naturally, the same kernel. And the last one is that containers, which represent applications, may just disappear - and this is natural. Let's summarize our thoughts. From the container perspective, every application runs on the same host and uses the same kernel; VMs do not - they have their own kernel. Containers are ready to be removed; VMs mostly are not. Containers, as they do not provision their own kernel, scale and run faster than VMs. 5. III. Basics - Definitions: It's time to learn all of the basic definitions we are going to use throughout this course before we dive in deeply. The most important part is the Docker daemon. The Docker daemon is a service that manages containers, images, and volumes.
Technically, the Docker daemon is nothing else than an HTTP server - a RESTful API - that manages all of that on the host. From the perspective of the Linux system, the Docker daemon is just another service, like sshd or crond: a very specific service that manages the data under /var/lib/docker and runs the processes and containers. But it's still just an HTTP API. When talking about the host, we mean either a virtual machine or any other operating system with the Docker daemon service installed and running. As you already know that the daemon is nothing else than an HTTP service, it would be nice to have an easy way to connect to and use this service. The Docker client is a command-line utility that implements the daemon's API and uses it. Image. As we said, the daemon manages images. What is an image? From the technical perspective, an image is just an archive with all of the binaries, dependencies, and everything else that is necessary to run your application. From the abstraction-layer standpoint, I would think about the image as the binary version of the application that is ready to receive configuration parameters. An image has a name and a tag. We may have many images with the same name, but locally we cannot have many images with the same name and the same tag. Naturally, you may have many applications in many different versions. A container image is just a binary version of the application, and so we need a server - a kind of datastore - with lists of available images and their versions, or rather tags. Here the container registry comes into action. A container registry is just an HTTP server with a list of images we can push and pull. The main responsibility of a Docker registry service is to give you the possibility to upload (push) images from the host, or download (pull) them to it. One of the most popular registries is, of course, Docker Hub. In practice, there are many registries.
Starting from self-hosted solutions like Portus (an authentication and authorization frontend), Harbor, and GitLab Registry, to cloud-based solutions like ECR or DigitalOcean Container Registry. Of course, the responsibilities, the functionality, and the possible ways to integrate these services differ significantly from one another. Some allow automated security scans, others allow you to match an environment with a deployed version for easy rollback, and some only give you an authentication layer, or the possibility to digitally sign or scan images. We already said that an image is just a binary version of the application. So now we have to ask: What is the container? If someone asked me to compare it to a similar relation in object-oriented programming, I would say that a container is an object created from a class - the image. A container is a derivative of an image. Naturally, we may have many, many objects from the same single class. Those objects do not have to be the same; they do not have to be equal. They may vary from each other depending on the runtime parameters. But the most important part is that the source of all of these containers is the same image. Speaking about the container lifecycle, we will use two terms: buildtime and runtime. Personally, I really like these names; they exactly define the responsibilities within the workflow. Buildtime focuses on building an image, whereas runtime focuses on running the container. Now that you know the definitions and the theoretical terms, let's do some laboratories to understand the topic better. 6. $ docker desktop: The installation process for Docker Desktop for Mac and Docker Desktop for Windows is extremely easy. If we speak about installing Docker Desktop for Mac, you just copy it to Applications, run it, and provide the proper privileges. In the end, we should get a Docker icon, and as a result we are able to open our CLI - our terminal - and type `docker` or `docker ps` as the client.
If we speak about Docker Desktop for Windows, it's the same: in the end we get an icon, and the installation process is extremely easy. It automatically connects the client and the daemon. 7. $ docker daemon - installation: As I said, it is extremely easy to install Docker for Windows and Docker for Mac. It might be a little more complex if we speak about the Docker installation on a standard Ubuntu distribution. Let me guide you through that. If we take a look at the installation process for Ubuntu, we first uninstall the old versions, because the default repositories contain pretty old versions of Docker. Then we update the packages and install the dependencies and prerequisites that are going to be used in the next steps. Later we add the repository and install the package. So let me guide you through all of that pretty quickly. First, let's update all the packages. The next step is to install ca-certificates and the other apt prerequisites, et cetera. Take a look: in my case, everything installed without any problem. The next step is to add the key: we download Docker's GPG key, and we can verify that this GPG key is present by checking its fingerprint. The next step is to add the repository: we add it and update the packages one more time. And the last step is to install the engine and the client locally - docker-ce and docker-ce-cli. But this is not the end. Even after the installation, we may see some problems related to the configuration. So let me guide you through that as well. As I said, the Docker daemon is nothing else than an HTTP server. So let me escalate my privileges. To escalate the privileges, you have to have sudo configured properly. And the first thing you have to verify is that you have a group called docker, and that your user is a member of the group called docker.
This group exists, and if our user is part of this group, the Docker daemon will create its socket with root:docker ownership. If we take a look at /var/run/docker.sock, we can see that the socket is owned by root and the docker group. It's important, because if we are members of the docker group, we will be able to connect to the socket. If we type `systemctl stop docker`, the docker socket unit stays around and will be activated as soon as someone tries to reach docker.sock. So if we then type `docker ps`, systemd automatically starts the service, and of course we are able to access it. It's important to verify that you have a group called docker and that you are a member of this group; otherwise you would need to run everything with root permissions, which does not make much sense. Of course, being able to connect to the socket from the CLI is effectively the same as being root - it is not really more secure, but it looks better. The last verification is that you can type `docker info`: you should get information about both the server and the client. 8. $ docker info: If you want to troubleshoot the `docker daemon` and `docker client` connection, you type `docker info`. Within the Docker daemon info you can see a lot of important information, like: what the client's context is; what plugins are installed and in what versions; what the buildx / buildkit versions are. From the server side you can see the server version, the storage drivers that are available, and what kinds of plugins there are - volume, networking, and logging plugins, so the drivers that could be configured in the Docker daemon - et cetera. So, to troubleshoot the connection between the client and the server, and to see what possibilities the server gives you, type `docker info`. 9. $ docker run: It's time to run your first container.
If you want to run a container, you type `docker run`, and the argument for `docker run` is an image. As you can see from the usage, you put options and then the image; as we said, a container comes from an image. So we would like to run a container that comes from the image `hello-world`. As you can see, Docker was unable to find the image `hello-world` locally, so it was pulled from `library/hello-world` - and as we know what a registry is: it was pulled from Docker Hub, the default registry for our daemon - and it was successfully downloaded. The only purpose of this image is to `echo` this information: "Hello from Docker! This message shows that your installation appears to be working correctly." So, we were able to connect the client to the daemon and pull the `hello-world` image; the daemon created a new container, and this container printed that information to our terminal. 10. $ docker run (5x): Naturally, we can run many containers from the same image. We can run the same `hello-world` command many times: one, two, three, and four. And if we type `docker ps`, we can see that none of these containers are visible here, even though we ran them. They are not shown because they are not running. If we type `docker ps -a`, we can see that we have five containers that were created and exited, respectively, 13, 14, 15, 16 seconds and one minute ago. So, taking the image `hello-world`, we were able to create five different containers, but they all come from the `hello-world` image. 11. $ docker run -d hello-world: We ran a `hello-world` container, but we can also run a container in detached mode - in the background. If we type `docker run hello-world`, we are running this container in the foreground. It is the same as running a process like `ls` and waiting for its output.
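The foreground-versus-detached behaviour just described can be sketched as a short shell session (a sketch only - it assumes a host with a running Docker daemon, and the container IDs will differ on your machine):

```shell
# Run hello-world in the foreground: the client waits,
# prints the image's output, and the container then exits.
docker run hello-world

# Run the same image detached (-d): the only output is the
# new container's ID, and control returns immediately.
docker run -d hello-world

# Running containers only: hello-world exits right away,
# so it will not appear here...
docker ps

# ...but it does appear in the list of ALL containers.
docker ps -a
```

Both invocations create a new container from the same image; `-d` only changes whether the client stays attached to it.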
We are able to run containers in detached mode, so we can run the same container - `hello-world`, or rather another container from `hello-world` - but detached. And now, the only output we get is an ID. If we type `docker ps`, we can no longer see it, because this container, like the containers in the foreground, just exited after showing its information. But if we type `docker ps -a`, we can see `5e48`, the long ID of this container. So if we detach the container, it runs in the background; it is as if you ran the container in the background without waiting for it. 12. $ docker run -d nginx: A much better example of running a container in the background is `nginx`. If we type `docker run nginx`, we can see that the image `nginx:latest` cannot be found locally, so we are pulling `nginx` from the registry - Docker Hub. As you can see, we get output that is completely different from `hello-world`, because they are two different images. But it didn't exit! We do not see any prompt; we do not see any information about what's happening. If we open the next terminal and type `docker ps`, we can see that this container is up for 25 seconds. So we created a container `d2d` from the `nginx` image; it was created 27 seconds ago and it's up for 25 seconds. If we press `Ctrl+C` - what we would normally do in the Linux terminal to kill a process - we exit the process; we send a signal to it. If we then type `docker ps`, we can see that this container is no longer up and running. What happened is that we ran `nginx` - the daemon - in the foreground, and this is not the common way. Normally we put `nginx` in the background: we want `nginx` to be up and running.
We do not want to wait for its output(!) - we want it running in the background. This is what `-d` does. If we type `docker ps`, we can see that it was created 14 seconds ago and it's up for 13 seconds. 13. $ docker run -it nginx bash: Now that we know how to run containers, it is good to say that it is not necessary to run the command that is put into the image as the default. Take a look. If we run `docker run nginx`, we would expect the `nginx` container - or rather the `nginx` process - to be spawned. But, at the same time, we can run just `bash`. What happened? Because we did not ask for an interactive session and did not allocate any tty, we did not attach to this container; we were unable to see anything, so the container exited. If you type `docker ps`, you can see that it exited 20 seconds ago. But if we run `docker run -it` - so put the container into interactive mode and allocate a tty - with `nginx` and `bash`, we run a container taken from the `nginx` image, but instead of running the `nginx` process, we run `bash`. If we type `docker ps -a`, we can see the following container, created 15 seconds ago. If we inspect this container, we can see that the command was `bash`, not `nginx`. 14. $ docker run --rm: Containers can be removed automatically as well. If we run `docker run hello-world`, we can see the `hello-world` container run like in the previous lab. If we type `docker ps -a`, we can see that this `hello-world` container is the last one and that it exited nine seconds ago. But if we type `docker run --rm hello-world`, we can see that this container, of course, started and showed the same output; yet if we type `docker ps -a`, we no longer see a new container here. This container was automatically removed after it did its job. 15. $ docker exec: How to enter the container?
Of course, as we said, we can run a container with `bash`, but that will not cause us to enter the container; it will just create another container and run `bash` in it. What we want is to take a running container like this one - `316` - and enter it. The useful command is `docker exec`. `docker exec` runs a command in any running container. What we want to do is execute a command in the running container in interactive mode, allocating a pseudo-TTY, and we want to do all of that in the container `316`. Then we put a shell: we can use `bash`, we can use the simple shell. I like `bash`, so I run `bash`. Of course, this shell has to be available in the container. For example, if we tried to run `fish`, the executable would not be found, because the corresponding file does not exist in the image. But normally either the `sh` shell or the `bash` shell is available. 16. $ docker pull: Previously, we were pulling images because we were running a container - we wanted to run the container. But it is not necessary to run a container in order to pull an image, to download an image from the registry. If we type `docker pull centos`, we will pull the `latest` tag of `centos` - so the `centos` image with its `latest` tag. And if we type `docker images`, we can see that we have the `nginx` image, created three weeks ago; `centos`, eight weeks ago; and `hello-world`, created thirteen months ago. At the same time, if we are allowed to push an image, we can just use the `push` command. So `pull` and `push` are your friends for pulling and pushing images from and to the registry. 17. $ docker start / stop: Containers can be started and stopped. The previous containers, as we can see, are actually all Exited, but this one is up for... 34 seconds. If we type `docker ps`... we can see that this container is up for 46 seconds.
We would type `docker stop` and either the ID or the name - the name is unique on the host, and the ID is unique on the host. Normally we would want the ID to be unique across a cluster, but here they are both unique on the host. If we type `docker stop`, we will stop the container. If we then type `docker ps`, we can see that no containers are up and running. If we type `docker ps -a`, we can see that this container exited with exit code `0` six seconds ago. So that was clean. And we can start it once again: we type `docker start` and the ID of the container, look at `docker ps`, and we can see that this container is started again and is up for three seconds. 18. $ docker start / stop: Containers do not share any information between each other. Let me show you that. We have two running `nginx` containers: one was started 37 seconds ago and is up for 36, and the other is up for 22 minutes. Let's enter the last one. So let's enter the container `642` with `bash`, update the packages, and install `procps`. `procps` is the package that delivers `ps`, the standard tool to inspect processes. Typing `ps aux` in the container `642`, we are able to list processes. Let's enter the second container, `316`. If we type `ps aux`: command not found. This command does not exist here. Even though both of these containers were created from the same image, they do not share information. Something that happened at runtime in one container is not available in the other. They are completely separate processes, completely separate things. 19. $ docker start / stop: The changes that happen in a container are persistent for this container - but only for this container(!). Take a look. If we type `docker ps`, we can see that the container `642` is up for two minutes - like it was created two minutes ago. And in `642`, you remember?
We added `ps`. Let's stop this container. If we type `docker ps`, we can see that this container is stopped - exited `6 seconds ago`. And if we start this container and type `docker ps`, it started two seconds ago. If we `exec` into this container, we can see that `ps` is still there. You have to remember that this persistence will be kept only as long as the container is available on the host. It is not recommended to keep anything in a container, because containers tend to disappear - and it is very natural that they will. 20. $ docker diff: It is extremely easy to see the difference between the image and the container. If we type `docker ps`, we can see the running containers. And if we type `docker diff` and the name of the container - so `642`; remember `trusting_goldbar`? This is the container where we installed `ps` - we can see that there are a lot of differences! Take a look. `C` means changed, `A` means added, and `D` means deleted. We have a lot of new binaries added, a lot of changes in directories, a lot of new files and lists. These files are, naturally, related to `dpkg`, the binary that manages packages in Debian or Ubuntu. But if we type the exact same command for the other container, `docker diff 316`, we can see only differences related to `bash` - because I executed some processes there and that is saved in `bash_history` - and the rest is completely related to `nginx`: changes in directories related to `nginx`, or cached files created by `nginx`. 21. $ docker rm: Containers can also be removed. We can remove containers like we would normally remove applications. If we type `docker ps`, we can see the one running container. But if we type `docker ps -a`, we can see the containers that exited 5, 5, 7, 9, and 9 seconds ago. So all of the containers we created.
If we want to remove a container, we type `docker rm` and the ID of the container. As we can see, we removed container `5e48` - we always reference either the name or the ID. Having referenced the ID, we can just as well reference the name: we will remove `objective_ricci`, and that works too. If we type `docker ps`... we can see that the one container is still up - we didn't do anything to it - but we no longer see `objective_ricci` and `5e4`. 22. $ docker cp: Copying files from and to containers. We can check `docker cp --help`: `docker cp` is a command that copies files between a container and the local filesystem. So let's look at `nginx`. If we type `docker ps`, we can see the `nginx` container; it's up for seven minutes; it's `29c`. Let's enter it. As we can see, there is `/etc/nginx/conf.d/default.conf`. Assume I would like to copy this to my host. I can just type `docker cp`. As you can see, the arguments are the source and the destination paths, where I have to put the container ID or name before the container-side path. My container ID is `29c`, so: `docker cp 29c:/etc/nginx/conf.d/default.conf .` - I want to copy it to dot, the current directory. And if I type `ls -la`, I can see the `default.conf` file. 23. $ docker container prune: As you can see, we have a lot of containers that are stopped, that exited, and there is an extremely easy way to clean them up. If we type `docker container --help`, we can see how to manage containers. Of course there is `ls`, which does exactly the same job as `docker ps`. But there is one useful command called `prune`. If we type `docker container prune`, it will remove all stopped containers. "Are you sure you want to continue?" Yes, we want to continue. And we have removed all stopped containers. If we type `docker ps -a`, we can see only the running `nginx`. 24.
$ docker inspect: As we speak about the runtime, if you want to troubleshoot the container and its environment variables, or see the volumes it has attached, you type `docker inspect`. And as always, we reference either the name or the ID; we referenced the ID. What we can see is that this container carries a lot of information: besides what it was created from, so what the image is, we can see a lot more. Its state, whether it was out-of-memory killed by the kernel or not, when it started and when it finished. What the read-only paths are, so the paths in the container that are in read-only mode. What cgroups are configured, so the period, the quota, the realtime settings, the `swappiness`; information related to the namespaces, et cetera, et cetera. We can see the network configuration, like what networks this container is connected to, or the environment variables that are passed to this container. What command was executed in this container, what entrypoint was executed within this container. All of this information is useful for troubleshooting. 25. III. Basics - The problem: As we know all of the required definitions, let's discuss the problems that containers are meant to solve. In the Linux operating system, we have a lot of processes running continuously... Starting with the SSHD and CROND services, moving through monitoring and logging agents, ending with HTTP servers and interpreter software. All of these processes have access to all of the mount points, share the same hostname and many other operating system parameters, which they do not necessarily need access to. If we could somehow isolate a very specific process with some isolation layers, limit the visible interfaces, change the hostname only for this process, limit resources such as CPU and RAM... we would undoubtedly increase the system security. And that is the security improvement achieved through containers.
With containers, we can isolate a very specific process into a subset of the resources it sees. In the standard development lifecycle, we have many environments. We have a local development environment where every developer works. Then we have development environments where the very early stages of the software happen to be tested. (O'rly?) Then you have a staging environment where the business probably accepts the changes. (O'rly?) And in the end we have a production environment where the real customers are able to access the product. At some point in time, we would like to promote the staging environment to production, or the development environment to production. This change has to be as smooth as possible. Of course, the staging environment may have different configuration parameters, like sandbox API keys or a different database... But there is exactly one thing that has to be the same: the application binary version. We have to be sure that the deployment, so the libraries, dependencies and the source code, is exactly the same between these two systems. Through container images, we can easily promote a very specific version of the application between the environments without any risk of messing up the binary version of the app. In general, every IT system needs resources. The more resources we need, the more we have to pay. And we pay for two kinds of resources: the resources we actually use, and the resources we are going to be using. In the worst-case scenario, we are going to pay for resources we will not use at all. As we know from the difference between VMs and containers, we have less overhead from the guest operating system, so we are using money more efficiently, because a single node can keep a lot of applications with very different versions. And that is the problem of resource fulfillment.
Let's assume our business has to run 24/7 each month, and we normally have 100 customers. But we predict that in June, July, and August we will have to manage 3K customers a day. (The graph presents the real values in comparison to the business predictions.) Or let us assume the same, but we have to scale from midnight to 00:05 - we have to scale ten times in five minutes. That's the problem of application scalability. Containers give faster provisioning, and the better resource fulfillment gives you the possibility to scale within minutes. A natural consequence of scaling up is startup time reduction. Containers start faster than VMs, so we are able to make our application ready to receive traffic faster. And of course, we are able to make application delivery faster, which is useful especially for hotfixes. Continuous integration and continuous deployment systems were created for repeatable and predictable application releases. One of the most important factors of building an application is build unification. We would like to unify the process of building a release so that it is exactly the same on the developer machine and on the continuous integration / continuous deployment system. With that, we can always build the release and run the tests before the changes go to the origin in the version control system. Let's do some exercises to understand this topic better and dig into laboratories. 26. IV. Buildtime: In this chapter we will discuss the buildtime. We will dive into layers and the consequences of layering. We will build images from a Dockerfile and with completely different solutions like Packer, and we will discuss commands and examples for Python, Go and Java. Buildtime layers. The most important part of an image, besides the fact that an image is just an archive, is that images have a layered structure. A natural thing that comes to mind when we speak about layers is the Dockerfile.
Practically, a Dockerfile is a text file with a list of commands that should be executed one by one, in a very specific order, to get the application ready and baked. During the build, the commands are executed one by one, creating an intermediate container between the steps. You can think of a Dockerfile as a recipe for a Docker image... (not for the container!). There are many commands you can use: `RUN`, `FROM`, `ENV`, `ARG`. All of these have different responsibilities. But the most important part is that each command in a Dockerfile creates a layer. You can match a command to a layer of the image. Let's do some labs to understand the topic better. 27. $ docker tag: Can we rename an image, or add many names to an image? Of course we can. If we type `docker tag`, we can see that `tag` takes exactly two arguments: the source image and the target tag. It creates a tag of the target image that refers to the source. Let's try. Take a look at `docker images`. We have an image called `hello-world` with its tag `latest`, and it points to `bf716`. Let's tag `bf716` as `our-app:2.0`: `docker tag [...] our-app:2.0`. If we type `docker images`, we can see that now we have many images: we have `hello-world:latest`, and we have the `our-app:2.0` tag. 28. $ docker rmi: Can we remove an image? Of course we can. Take a look. Let's list all stopped containers - we have four stopped containers. Let's go to the images and try to remove the image `our-app:2.0`. Would it be possible? It's important to notice that containers `c65`, `a6c`, [...] `a1a8`, `5b`, et cetera, all use `hello-world` as their source image. Let's try to remove `our-app:2.0`. It's untagged. Nothing was removed. Why?
Because only the tag was removed - there is no container that is using `our-app` as its source, and the underlying image is still there. Let's retag it, so add this tag once again, and let's run a container from `our-app` with version `2.0`. Of course, `our-app` is just `hello-world`. Let's type `docker ps -a`, and we see that it was created five seconds ago and exited four seconds ago. Let's go to the images and try to remove `our-app:2.0` again. It untagged. Why? Because if we take a look at `docker ps`, the container does not reference the tag `our-app:2.0` - its source is the image ID `bf756`, and `bf756` is still available. Let's remove all of these containers: `docker rm -f` with all four container IDs, and `docker ps`... As you can see, only the image `hello-world` is left. Let's try to remove the image `hello-world:latest`: `docker rmi hello-world:latest`. This time it is not just untagging, because `hello-world:latest` is the only remaining reference to `bf756`. So `docker rmi` does not necessarily remove the image. It will remove the image if we point at the ID; if we point at a tag that is duplicated - when there are many tags pointing to the same image - it will just untag it. 29. $ docker build .: If we want to build an image, the first thing that comes to mind is the Dockerfile. As we said, a Dockerfile is a recipe for our image, so let's try to build the first one. A Dockerfile is a list of commands, and the first command you would probably put here is `FROM`. `FROM` is a command that says: "The base of our image is the following image."
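As a minimal sketch, a one-command Dockerfile of the kind discussed here looks like this:

```dockerfile
# The whole Dockerfile - the base (and only content) of our image is CentOS 8
FROM centos:8
```

Building it produces an image that is byte-for-byte the same as `centos:8`, which is exactly what the lab below demonstrates.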
So if you're a PHP developer, you would probably put `php` here. If you're a Python developer, you would probably put `python` here. If you are a Go developer or whoever, you put your image here as your base. This is not mandatory - you can put `centos:8` as the base of your image as well, if you would like to create sandbox tooling or whatever. So let's try to build the first image, where the only command is `FROM centos:8`. To build an image we type `docker build`, and the argument for `docker build` is either a `PATH`, a `URL` or standard input. The standard procedure says: okay, the context of the `build` is `.` - the current directory. If you change the placement of the `Dockerfile`, because you want to separate the Docker logic from your application logic... you still want to have the current directory as the context... but you would have to specify the file name, so the name of the Dockerfile and where it is, because by default Docker looks for a file called `Dockerfile` in the context. So: the context here, and a Dockerfile at this context. Let's try to build an image. Take a look: "Sending build context to Docker daemon", `FROM centos:8`, `300e`. And successfully built `300e`. If we type `docker images`, we can see that `300e` is just CentOS. What happened? Where is our image? Our image has actually been created. Because the only thing we specified here is `FROM centos:8`, **our image is identical to CentOS**, and they point to the same image ID. 30. $ docker build -t centos-unzip .: Of course, it is not natural to keep it like that; we would like to have a meaningful name for our image. The same way we tagged images in the previous lab, we can tag a built image. So we type `docker build` with the `-t` parameter, and we say `centos-unzip` with version `8`, and we specify the context as the current working directory. Enter.
We can see that we have successfully built the same layer, but it was successfully tagged with `centos-unzip:8`. And if we type `docker images`, nothing has changed - we just have a name, a tag, for the image `227c5`. 31. $ Dockerfile - EXPOSE: Another useful command that you would probably want to use is `EXPOSE`. If we take a look at this Dockerfile, it's just `nginx`. If we build an image from this Dockerfile, we have successfully built `f6d`. If we take a look at `docker images`, we can see that `f6d` is nothing else than just `nginx:latest`. Let's change this Dockerfile and add some expose information. So, `EXPOSE`. Let's assume we want to expose `1212/tcp` and `1212/udp`. We can specify as many ports as we wish. The structure is the following: either a port not followed by the protocol, which means that by default it will take the `tcp` protocol, or the port with the protocol, which means we specify the protocol explicitly. I like specifying both! If we type `docker build .`, we have successfully built image `0011`. If we run a container from this image, we can see that we have successfully run `f5e`. If we type `docker ps | grep f5e`, we can see that this container is running and it exposed `80/tcp`, `1212/tcp`, `1212/udp`. Does this exposure actually do anything? Does `nginx` now listen on something? Absolutely not. If we enter and type `curl localhost:1212`, we can see that the connection was refused. `EXPOSE` is just information between the person that builds the image and the one that is going to operate it... what kind of ports... what the list of ports is that this specific container is going to listen on. 32. $ Dockerfile - LABEL: Another useful piece of metadata you should probably add to your Dockerfile is `LABEL`. Of course, `LABEL` does exactly what it says: it adds a label to the image. But this label is available for the container that is run from this image as well.
The most common label we add is just `maintainer`, and it is, as the name says, the maintainer of the Dockerfile. But for large organizations I can imagine many other labels, like the owner of the project, the project name, the git URL, et cetera, et cetera - something that would be useful for a break-the-glass procedure, a situation where we need immediate action. Let me prove that the label is available for the image: we will just inspect this image, and as we can see, we have the labels and the `maintainer` is me. And at the same time, the label is available for the containers: if we run this container and inspect the `53d` container, we can see the labels and the `maintainer` is available here as well. 33. $ Dockerfile - RUN: The next command you'll probably use is `RUN`. Let's take a look at our Dockerfile. Our base is `centos:8`. If we add `RUN`, we will create another layer with the commands that will be executed on top of this base. So if we add `yum -y update` and `yum -y install unzip`, our `centos` image will be improved (!) with the `unzip` tool. Let's build this image. Take a look: the first step of the build is `FROM centos:8`; the second step is to update the packages, or rather update the repositories, and install the `unzip` package. If we run it a second time, we will see that `FROM centos:8` works fine, and `RUN yum -y update && yum -y install unzip` is using the cache with the layer `227c5`. And we have successfully built `227c5`. If we run an interactive container, including a tty, from plain `centos:8`, enter `bash` and type `unzip`, we will see that the command was not found. If we type `docker images`, we will see that we successfully created the image `227c5` 38 seconds ago. So let's run `227c5`... and `unzip` is successfully installed here. 34. $ Dockerfile - \: As we said, each line of the Dockerfile is a separate step.
And probably we want to have many dependencies. Assume that we need `unrar`, `unzip`, `gpg`, `nginx` and many others. This list could be really, really large, and we need a simple way to split the line. If we want to split the line, we just add a `backslash`. And now we still have just one `RUN` command (!), but split into two lines. 35. $ Dockerfile - ARG: Another useful command you will probably want to use is `ARG`. Let's assume that we would like an easy way to change the CentOS version without changing the content of the Dockerfile. We can just type `ARG` - `ARG` is the only command that can be placed earlier than `FROM` - and then we specify the CentOS version. Let's set the default value to `8`, so the default value for the `ARG` `CENTOS_VERSION` is `8`. And then we change the hard-coded CentOS version to this variable. Let's try to build this image once again: `docker build -t centos-unzip:8 .`. As you can see, the first step is `ARG`, the second is `FROM centos:$CENTOS_VERSION`, and of course it is installing `unzip` as earlier. Now we would like to change the version on the fly. We can give this version using the `--build-arg` parameter. So we type `docker build`, change the target to `centos-unzip:7`, and put `--build-arg CENTOS_VERSION=7` and the context dot. Take a look. Nothing has changed from the perspective of the steps, but we are rebuilding the whole image, and as you can see, the list of packages, the list of dependencies, is slightly different from the packages that were installed in the previous laboratories. Let's wait for the installation. As you can see, we successfully built `f07`. If we type `docker images`, we can see that we have `centos-unzip:7` and `centos-unzip:8`. One was built three minutes ago, and the other nine minutes ago.
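Putting the pieces of this lab together, the parameterized Dockerfile looks roughly like this:

```dockerfile
# ARG is the only instruction that may appear before FROM;
# the default CentOS version is 8
ARG CENTOS_VERSION=8
FROM centos:$CENTOS_VERSION

# A single RUN instruction (one layer), split over two lines with a backslash
RUN yum -y update && \
    yum -y install unzip
```

Built with `docker build -t centos-unzip:8 .` it uses the default; built with `docker build -t centos-unzip:7 --build-arg CENTOS_VERSION=7 .` it switches the base image without touching the file.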
Let's run a container from `centos-unzip:7`: `docker run -it ... bash`. If we type `type unzip`, we can see that `unzip` is present as `/usr/bin/unzip`, and if we `cat /etc/centos-release`, we can see that the CentOS release is `7.9`. Let's enter a container from the second image and do the same: `type unzip` - we have `unzip`; `cat /etc/centos-release` - and we can see the CentOS release is 8.3. 36. $ Dockerfile - COPY: A command you will use very, very often is `COPY`. `COPY` copies data from the context... to the image. Take a look. If we open the Dockerfile, it consists of the dependencies our image, our container, needs to run. But normally you would like to put the source code there as well: the vendors we installed, the dependencies that are related not to the context of the operating system but to the context of our application. (Composer, Pipfile) So let's try to copy `test.txt` to the `/tmp` directory in the image. If we type `docker build .` right now, we can see that the file was not found: `test.txt` was not found, or it was excluded by `.dockerignore`. Let's create `test.txt` and put `testtest` in it. Let's try to build it once again. If we type `docker build .`, we have successfully built image `dcc1`. Let's run a container from this image. If we enter `/tmp` and type `ls`, we can see that we have `test.txt`, and its content is `testtest`. If we change the content of the file `test.txt` here on the host and add `123`, and we run a container once again, we will see that nothing has changed: `testtest`. The image is persistent. If we do not rebuild the image, every container created from `dcc1` will have the content `testtest`. Let's rebuild it. We have rebuilt it, and now we have a new build, `d510`. If we run a container from `dcc1` once again, its content is of course still `testtest`.
But if we run the container from `d510`, type `ls -la /tmp` and `cat` the text file in `/tmp`, we can see that we have `testtest123`. It's important that whenever you change a file in the context, you have to rebuild (!) to make it visible to the image. 37. $ Dockerfile - ENTRYPOINT & CMD: The two most important commands you would probably want to add to your Dockerfile are `ENTRYPOINT` and `CMD`. The general rule of thumb I want you to remember is: `ENTRYPOINT` specifies the first command which should be executed, and `CMD` gives the default parameters for this command. This is the preferred way to work with them. So let's try to change this empty CentOS image into an image that `pings` by default, and it will ping `` by default. Our `ENTRYPOINT` is `ping`, because we want to execute the `ping` command by default, and our default parameter should be ``. So let's change it to ``. Now, let's build it: `docker build .`. As we can see, successfully built `4f6`. Let's run it: `docker run 4f6`. As you see, without giving any information about what command should be executed, we are pinging. What it actually did was overwrite the default information in CentOS: CentOS by default specifies an empty `ENTRYPOINT`, and its default command is `bash`. Let's run it once again, and assume that we would like to ping `` instead of ``. We can do that by giving just the parameter - instead of specifying the whole `ping` execution, the whole ping command with its address, et cetera, we just type the parameter here. If we typed `bash`, like we would normally do - so like `docker run -it centos:8 bash`, where we can access `bash` - and we type the exact same thing here... it will not work: "name or service not known". What it means is that `bash` is passed to `ping` as a hostname.
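In sketch form, the Dockerfile from this lab is the following (the actual ping target is elided in the transcript, so `` below is a hypothetical stand-in):

```dockerfile
FROM centos:8
# ENTRYPOINT: the command that is always executed first
ENTRYPOINT ["ping"]
# CMD: the default parameters for that command, easy to override at run time
CMD [""]
```

Running the image with no arguments pings the default target; any argument given on `docker run` replaces only the `CMD` part - which is exactly why passing `bash` here hands `ping` a hostname instead of starting a shell.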
And there is no such host called `bash`. Of course, we can run the same container, the same image, changing the `ENTRYPOINT` to something empty or to `bash`, and run normal `bash` commands. It exited at first because we did not run the shell interactively; once we enter interactive mode, it works. 38. $ docker load -: Is it possible to save or to import an image from an archive? As we said, an image is just an archive, so of course it is. Let's pull an image called `hello-world`. The only image we have on our host is `hello-world`, created 13 months ago. To save an image, we type `docker save --help`: it saves one or more images to an archive, and by default it streams to standard output, which is not necessarily what we want. So let's save this image to `image.tar.gz`: we save `bf756`. If we run `file image.tar.gz`, we can see that this is a standard POSIX tar file. Let's remove the image and try to import it. If we type `docker images` and take the image ID, we are able to remove this image. If we want to load an image from an archive, or rather to import an image from an archive, we type `docker load`. `docker load` loads an image from a tar archive or standard input, and by default it reads standard input. Let's use `docker load -i`, which takes an input and reads from a tar archive file instead of standard input, and we give it `image.tar.gz`. Our image was loaded; let's take a look: `bf756`, the same image ID, the same checksum, was loaded. If we type `docker images`, we can see that `bf756`, the same image ID, the same layer, was restored. There is no tag and there is no repository, because the save didn't keep them. We can of course `tag` it, the same way we `tagged` before.
So we can tag `bf756` as `hello-world:latest`. And if we type `docker images`, we can see that we have successfully restored an image with the same kilobytes and the same ID. 39. $ docker history: There is a possibility to restore the whole Dockerfile - to restore how the following image was built. Let's pull the image `nginx`. If we would like to get the information about how this image was built, we type `docker history`. Reading from bottom to top, we can see that some files were added, then we changed the `CMD`, configured a `LABEL`, added environment variables, copied some files, configured the `ENTRYPOINT`, configured `EXPOSE` - so the ports that are going to be exposed by this container - configured `STOPSIGNAL` and ran the `CMD`. So if you want to restore the history, the way the following container was built: `docker history` and the name of the container... or rather, the name of the **image**. You can, of course, point at the image ID as well; that would work too. 40. $ docker history: In the theoretical part, I told you that you could literally match a command from the Dockerfile to a layer. And someone could think that this is a one-to-one relationship, that each command creates exactly one layer. This is only partially true... and I want to explain it in the lab. If we pull `nginx`, we can see that we are pulling five layers, 1-2-3-4-5. (Yay!) But if we type `docker history nginx`, we can see that there are more layers than five, because this Dockerfile had way more steps than just five. Reading from bottom to top: adding a file, adding `CMD`, like in the previous labs. In particular, only some of the changes in this Dockerfile actually create a change in the filesystem. If you `ADD` a rootfs or create a `FROM` image, it naturally changes the filesystem, because this is the first step. But the next ones configure the `CMD`, or add a `LABEL`, or add an `ENV`.
Or add `EXPOSE` information - that does not weigh anything; there is no weight for these steps. So when we pull only five layers, we pull only the layers that actually weigh anything, where there is a change in the filesystem. Without digging into diffs, metadata information and the internal structure, it's important to understand that this is not a one-to-one relationship. 41. $ Dockerfile - Python: Let me show you the whole process of building a Dockerfile for a Python-based application. If we type `tree .`, I can see that I have `src/` - one directory, one file. Let me present how I would build a stack for a Python-based application. This `` does nothing else - it is just a Flask application that returns `Hello World`, nothing else. So let's jump into it. Let's create a `Dockerfile`, and let's assume that our application has to run on `python:2`. Of course, deprecated. `docker build .` - I do not have the `python:2` image, so I have to pull it from the Docker registry (Docker Hub). Well, let's wait for the `pull`. Awesome. We successfully built image `68e`. Does this image do anything related to the logic? Absolutely not. If we type `docker run -it 68e`, we have a Python 2 interpreter. This is not what we intended. So let's jump into the `Dockerfile` and `COPY` the source of the application to `/var/www`. Let's rebuild it. Successfully built. Let's run this container. If we type `docker ps`... Impossible here. Sorry. Here we have ``. This is the `` file we have prepared. Awesome. Now we want the container image to run this application by default; I do not want to specify it at any point of the runtime. So let's do it: let's specify the `ENTRYPOINT` as `python`, and let's specify the default parameter for Python as `/var/www/`. Perfect. We successfully built `664`. `docker run 664` - "No module named Flask." The Python 2 environment within this container does not have Flask installed.
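At this stage, the Dockerfile is roughly the following (the application file name is elided in the transcript, so `` here is a hypothetical stand-in):

```dockerfile
FROM python:2
# Put the application source into the image
COPY src/ /var/www
# Run the app by default: ENTRYPOINT is the command, CMD its default argument
ENTRYPOINT ["python"]
CMD ["/var/www/"]
```

It runs the interpreter against the application by default - but, as the failed run shows, the dependencies are still missing.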
Let's dive into that. Normally, what Python developers do is save something like `requirements.txt`. This is the file that contains all of the requirements related to `pip`. So let's type `Flask==0.11`. And let's do the following: we will copy this `requirements.txt` file to `/tmp`, and then we will take it as the source for our `pip`. Take a look - I split `requirements.txt` into a separate `COPY`, because if nothing changed from the perspective of dependencies, I do not necessarily have to copy the whole source again. So `COPY requirements.txt /tmp`, and then I run the command `pip install -r /tmp/requirements.txt`. Let's rebuild the whole stack: `docker build .`. Take a look. I got the information that... "Sorry..." (Oooups. You really think everything works as expected the first time?) There is an error: a double `=`. Rebuild. I got a warning that Python 2 reached end of life on January 1st, 2020. Please do not use it. Please do not... And at the same time I got another piece of information: "Please upgrade `pip`." We will do it in a minute. But we have successfully built image `66277`. `docker run 66277` - and we are listening, we're running. This is what we intended: the application is up and running. Awesome. Let's fix some problems with the Python 2 `pip`. Although this application is legacy, so I have to have the possibility to build it with Python 2, I would like to have an easy way to upgrade the Python version. So: `ARG PYTHON_VERSION`, and let's specify the default version `2`. And I will substitute the hard-coded Python version with the argument. Let's rebuild it. Yes, I can see step one, step two - everything is cool, everything is nice and easy, and we have successfully built `66277`. If I run `66277`, it works nicely. And if I would like to change the build argument, so change the Python version, I just type `docker build --build-arg PYTHON_VERSION=3 .`. And now I'm pulling `python:3`, because I do not have `python:3` locally.
And the whole stack will be built based on `python:3`, without changing the content of the Dockerfile. I didn't have to do anything to build a new image: `0f3`. `docker run 0f3` - I can see that it's listening and works like hell. Perfect. Let's jump into the `Dockerfile` once again, and... let's run it once again in detached mode. Let me show you something: `docker exec` into the container `cf0` and enter `bash`. If I type `ps aux`, I can see that the `python` command is running - when I type `python --version`, I obviously see `3.9` - but `python` is running as the first `pid`, as `root`. It is not necessary to run our Python application as the `root` user. Let's jump in and fix this problem. First, we would like to add an application user, so let's do it: `useradd`, `-m` to create the home directory, `-U` to create a group for this application user, `app-user`. So let's create this `app-user`. And now what I can do is de-escalate immediately: I can de-escalate the privileges to `app-user` without any problem. Let's type `docker build .` without a new version - we work on Python 2. And as you can see, I de-escalated the privileges, and all of the necessary dependencies are installed down below. Of course, I'm reinstalling, and as you can see, it installs the requirements at the user level. Yep, awesome. Works. Built. If we enter here, so `docker run ... bash`: "No such file or directory". Why? Because `bash` goes to `python` as a parameter - change the `--entrypoint`. If we type `ls -la`, we can see that right now we are `app-user`; we de-escalated the privileges. If we go to `/home/app-user`, we can see that we have `.local`, or rather `.cache`; we jump into `pip`, we jump into `http`, and we can see the cache for HTTP. And if I run `pip freeze`, I can see that `Flask` is installed without any problem. Awesome.
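Assembled from the steps so far, the Dockerfile looks roughly like this (file names such as `` are hypothetical stand-ins for the elided ones):

```dockerfile
# Default to the legacy version; override with --build-arg PYTHON_VERSION=3
ARG PYTHON_VERSION=2
FROM python:$PYTHON_VERSION

# Dependencies first: this layer stays cached as long as requirements.txt is unchanged
COPY requirements.txt /tmp

# Create an unprivileged user and de-escalate - no more PID 1 running as root
RUN useradd -m -U app-user
USER app-user

# Installed as app-user, so packages land under the user's home directory
RUN pip install -r /tmp/requirements.txt

COPY src/ /var/www

ENTRYPOINT ["python"]
CMD ["/var/www/"]
```

This is a sketch of the state of the lab at this point; the `EXPOSE` information and the `pip` upgrade are added in the next step.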
Perfect. So let's do one more thing. Let's jump into the `Dockerfile`. We have the `app-user`, we have the `requirements`, we have `pip install`. But I didn't inform the operator that it would be useful to `EXPOSE 5000/tcp`. The operator has no idea about my application; I want him to be informed that this application, as it runs, will listen on `5000/tcp` - it will look for traffic there. So let's rebuild it: `docker build .`. As you can see, we still get the `pip` warnings, blah blah blah. I don't like it; of course we should upgrade. So let's add an automatic upgrade of `pip`: `pip install --upgrade pip`, then break the line and install the requirements. So we will first upgrade `pip` and then install the requirements from `/tmp/requirements.txt`. Let's rebuild it. One more thing I would add here: it would be better to install everything at the user level, so add `--user` here and here. Python developers tend to use virtual environments, but fortunately or unfortunately for them, there is no need for a virtual environment here, because every application will be running from its own image. Let's look around here. Perfect, we are listening. If we run it in detached mode and type `docker ps`, we can see that it's up for 1 second and exposes port `5000`. We have this container; let's enter it. We have the application user, with Python running the application. If we type `curl localhost`, it doesn't work; `curl localhost:5000` - `Hello World`. What could be done more and better? Of course, everything - there is nothing that cannot be improved. We could remove dependencies like `curl` - no need for `curl`, no need for `netcat`. We could profile `seccomp`. We could use a distroless image as the base.
That way we will avoid all of the dependencies that come with the Python base image. If we `docker run -it` the `python:3` image — 3, because we are using 3 right now — enter `bash` and type `cat /etc/*release*`, the result tells us the image is based on a full distribution. So the `python:3` image carries a whole distribution base; we probably no longer need most of it. And we should take a look at the possibilities to completely avoid a full distribution: there are plenty of binaries, like `file`, that we no longer need. We just need to run our application. There are many, many things you could improve here: add maintainer labels, a Git repository, et cetera, et cetera. Of course, we could add multi-stage builds, so put everything above into a build stage, and carry over only what is needed — the application user, an environment variable, a configurable path for the requirements — et cetera, et cetera. The possibilities are countless. We cannot cover everything, but this is how you would normally start building an application Dockerfile with Python. 42. V. Runtime - Isolation: In this chapter, we will discuss the isolation layers delivered with Docker. We will focus on container configuration, secrets injection, and possible ways to rotate a secret. We will cover container ephemerality and the topic of stateful and stateless applications. In the end, we will use volumes. Let's dig in. The isolation layers are related to the runtime. As we said, containers are just isolated processes. There are three main isolation layers: namespaces, cgroups and capabilities. There are a few areas where they intersect, where they are related to each other, but for the purpose of this training we will try to give you only a high-level overview and an understanding of their responsibilities. In Linux, we have a superuser, rather called: `root`. The superuser can do almost everything. 
Given that, for some parts of the system we would like to give users the possibility to execute something or to do something, but we would not like to make them `root`. In general, capabilities are just a way of dividing superuser privileges into sub-privileges, subsets. There are many capabilities. We have the possibility to bind a privileged port, mount a block device or configure the system clock. For this part, it is sufficient to understand that we can divide `root` privileges into subsets, and that these divisions are applied per process. If we want to add or drop a capability, we have the corresponding command parameters. It is a good practice to drop every capability and to add only those that are necessary. Cgroups. The next isolation layer is cgroups, rather control groups. In general, their responsibility is to limit the resources that the given process (container) can consume. We can limit CPU and RAM usage. Of course, we can do way more, but for the purpose of this training it is enough to understand this part. We can add limits with very human-readable parameters. Of course there are many of them, but these are the most important. And the last one — rather, we should have presented it as the first one — is namespaces. As we said at the very beginning, processes have access to many system resources other than CPU and RAM. They have access to system interfaces. They share the same hostname. And namespaces are a way of limiting these. For example, we can set a completely different hostname for a container than the one of the host running the Docker daemon. Let's do some exercises to understand the topic better and dig into the laboratories. 43. $ docker logs -f: `logs` is another extremely important topic. If we type `docker run nginx`, as you can see on the standard output... or standard error... we get the following information: `/docker-entrypoint` is not empty, will attempt, etcetera, etcetera. 
In containers, we want our application to be sending log information **not to any log file** — because, as we said in the definition, the container is assumed to be removed — but rather to the standard output or standard error. This is what operators (K8s) manage and send to any other log management service. So it's easy to see the logs if we type `docker run`, because we type `docker run` and we see the logs on the standard output or standard error. But what if we put the container into detached mode, `docker run -d nginx`? We can see that the `nginx` is running, like the previous one, but how can we access these logs? There is one very useful command. We just type `docker logs` and the container ID or the container name — the container we are interested in. And what's important is that it prints everything from top to bottom. But sometimes we want to listen, like we do with `tail -f` or whatever. There is a parameter, `-f`, that will do exactly the same as `tail -f`. So it follows the file... it follows the container... and it waits for logs to appear. Whenever they are present, they will be printed here. 44. $ docker run --cap-drop: Let me present the first isolation layer, so capabilities. I prepared an image whose parent is `nginx`. And the only difference is that I installed `libcap-ng-utils`. This will be useful to show the available capabilities. Let's go, let's enter, let's build this image and run this container. Let's run the standard `nginx` image, detached. Let's run this one. We see: configuration is complete, ready for start up. Let's enter the previously created container, so `ob6`, `docker exec -it ob6`. And we type `pscap -a`. We can see that we have the first PID with `root` and the command `nginx`. And we have `chown`, `dac_override`, `fowner`, `setuid`, `setgid`, `kill`, et cetera, et cetera. 
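The lab image just described can be sketched as follows (the package name as I understood it from the recording; on the Debian-based `nginx` image it is `libcap-ng-utils`, which ships the `pscap` tool):

```Dockerfile
FROM nginx
# pscap (from libcap-ng-utils) lists the capabilities of running processes
RUN apt-get update && apt-get install -y libcap-ng-utils
```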
Let's try to limit the capabilities. So we have to add a parameter to `run`. Let's remove this container and run it once again, but in `-it` and with `--cap-drop=all`. It will drop all capabilities from `nginx-cap`. And as you can see, right now we have an `emerg` error: operation not permitted, because we cannot `chown` `client_temp`. So let's add that capability. Now the container started, but creating a worker process exited with fatal code `2`. Why? Because the master creates a worker, and it creates this worker process with lower permissions. It wants to set the `gid` and `uid`, to make this process run with lower privileges. So let's try to add two more capabilities: `--cap-add=SETGID`, and we can assume, of course, that it will also want to set the `uid`, so `--cap-add=SETUID`. Now it's running. Let's do it in detached mode. So detach it, type `docker ps`, and let's exec into `3314`: `docker exec -it ... bash`, and type `pscap -a`. As you can see, we have only three capabilities: `chown`, `setuid`, `setgid`. Does it mean this container will work properly? Of course not. It may face some other issues. If we try `curl localhost`, it will work here, but for some applications we have to profile the application and see what kind of capabilities are required and which are not. Some applications require `mknod`. Some applications require binding a privileged port. Some applications require doing something else. Providing a proper capability set is a way of hardening the application. It is mostly not interesting for developers; it is interesting for the operators. (And netsec people.) 45. $ docker run --memory: Let's discuss another isolation layer, so `cgroups`. `cgroups` isolate resources like CPU and RAM. So if we type `docker stats`, we can see that we have one running `nginx` container, `missing_chatterje`. The `missing_chatterje` uses 3MB of memory and its limit is 7GB — almost eight GB. 
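Before we run the lab, here is a short sketch of the most common cgroup-related `docker run` flags (the values are examples, not the ones used in this recording):

```shell
docker run -d --memory=200m nginx   # cap RAM at 200MB
docker run -d --cpus=1.5 nginx      # cap CPU at 1.5 cores
docker stats                        # observe usage against the limits
```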
Now let me try to run another `nginx` container, but with the memory limited to 200MB, so `docker run -d --memory 200M nginx`. As you can see: "your kernel does not support swap limit capabilities" (:sad:) — memory will be limited without swap. This is a warning about my kernel. If I type `docker stats`, I can see that my container `mussing_arrybata` has a limit of 200 megabytes, and `missing_chatterje` has seven gigabytes of limit. If we forced these containers to consume resources, the first one would be killed once it consumed 200MB, and the other one would not be killed, because it is limited to seven gigabytes. 46. V. Runtime - State: Let's discuss container ephemerality. We will explain what it means that containers tend to disappear. We will define stateful and stateless applications. And to use the stateful applications, we will use Docker volumes. Let's start with the standard. The 12-factor app is a methodology for building software. A 12-factor app is stateless and shares nothing. Share-nothing means an architecture in which each request is satisfied by a single node, and in general nodes do not share any resources. All of the data is stored in a separate backing service, typically a database. It never assumes that anything is cached in memory to perform a job. Let's assume we have an application. This application requires MySQL/Redis databases... and we require a datastore for images — typically an S3 bucket. At this point in time we have one container, one instance of our application. Of course, at some point we have to scale, for many reasons: we would like to avoid a single point of failure, we would like to serve more customers, and so on. So we have to add another container to our application. This is the same application environment, but it is just served by two containers. 
Now, the first container, during the first request, saves some data on the disk, and later the next request goes to the second container. The second container, naturally, cannot access the data that was saved by the first container. Moreover, if this data is necessary to perform the request properly, that breaks statelessness (!). A container can never assume it has any data on the disk to perform the request properly, because the first container can just disappear — and this is very natural, containers tend to disappear. Of course, stateless does not mean that we cannot have any kind of state. It just means that the state has to be kept in a datastore outside of the application. Some applications will never be stateless. For example, all of the databases cannot be stateless, because they are these backing services from the 12-factor methodology. Even databases with a sharding structure, like Elasticsearch, may not be stateless. They have to save the data, and this data is necessary to perform the request. And to achieve statefulness, we use volumes. We pass the volume to the container. As we said before, we do not care about the host, and it is not important on which host the given container is running. So the volume has to be delivered by a datastore, and this datastore is attached to many hosts. Not specifically the volume, but the datastore is attached to these hosts. Of course, stateful does not mean that the container cannot disappear. But contrary to the stateless applications, they have a volume attached to keep the state, so the container can disappear, but the volume cannot. 47. $ docker volume create: Working with volumes. If you did some labs previously with MySQL at runtime, you may see a lot of volumes here if you type `docker volume ls`. So if we want to remove a volume, we can just easily type `rm`. 
And now we can see that the given volume is in use — probably a running container related to MySQL is using it — so you can no longer do that. So if you type `docker stop` and the ID of the container, the container should be stopped, and then you can retry the removal. It is still not possible: you stopped the container, but the container is still attached to this volume. So if you type `docker rm -f`, so remove the container, and then try to remove the volume, so `docker volume rm`, now you can see that you can remove the volume without any problem. Of course, we can also create a volume. If you type `docker volume create`, you can specify the driver that you want, you can specify a label. So let's create a local volume with the name `blah`. And if we type `docker volume ls`, you can see the volume `blah` is present. If you add a `VOLUME` to your Dockerfile, a volume will be automatically created for the given mountpoint. We will not do that; we will just remove this volume, `rm blah`. And let's dive into the labs: how to work with stateful applications like MySQL, and how we can properly use the state to upgrade something or to work with something. 48. $ docker volume usage: Let me tell you how to properly work with stateful applications. I have downloaded `mysql:5.6.51` and `mysql:5.6.50`. Let's run `mysql` with version `5.6.50`. We have normally started the container `4aaa`, which is up for six seconds. And if we type `docker volume ls`, we can see that we have a volume. If we type `docker volume inspect` on this volume, we can see that it has its mount point and its name, and it was created just now. If we take a look at the data here, we can see that our user cannot access it. This is natural. Let's escalate privileges to root. And now we can see that we have here the standard MySQL data: binlogs, the datasets, `performance_schema`. 
The standard data you would find at `/var/lib/mysql`. Okay, so let me show you how this volume is attached to the container. Type `docker ps` and let's inspect the container, so `docker inspect 4aaa`. We can see that it is attached to the network `bridge`, and we have environment variables configured as above. And we have mount points: a volume with the name `6148...`, whose source is this data directory and whose destination is `/var/lib/mysql`. Awesome. So let's try to upgrade the server — let's try to upgrade this data, which was created with MySQL version `5.6.50`, to `5.6.51`. If we type `docker volume ls`, we can see this volume. Let's enter and create some kind of state, so let me prove that this state will be persistent during the upgrade. Enter `mysql`, create a database — I will call this database `ps`. `show databases;` — we have the `ps` database. Let's remove the container, leaving the volume, and run another one, this time started from `mysql:5.6.51` instead of `5.6.50`. And if I were asked to prove that we are working with MySQL version `5.6.50`: of course, we enter, and when we are logged in, we can see server version `5.6.50`. So type `docker ps`. `docker rm -f 4aa`. So right now I am **removing** the container completely. And now I want to attach the volume. I will run another container, but this time I will attach the existing volume instead of creating a new one. So I will attach the volume `614` at `/var/lib/mysql`. To do that, I'll just copy this and paste it: `docker run -e MYSQL_ALLOW_EMPTY_ROOT_PASSWORD=true`, the volume (`-v`) `614` mounted at `/var/lib/mysql`. So what it does: mount the previously created volume at `/var/lib/mysql`, run it in detached mode, and run MySQL version `5.6.51`. Enter. And if we type `docker ps`, we can see that we have one more container. It's up for three seconds. 
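To recap the upgrade sequence above as a sketch (the container ID and volume name are placeholders for the ones shown on screen):

```shell
# Remove the container, but keep the volume
docker rm -f <old-container-id>

# Start the new server version against the same data
docker run -d \
  -e MYSQL_ALLOW_EMPTY_ROOT_PASSWORD=true \
  -v <volume-name>:/var/lib/mysql \
  mysql:5.6.51
```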
If we type `docker ps -a`, we can no longer see the previously started container. If we type `docker volume ls`, we can see only one volume instead of two. And if we enter the container and type `mysql`, we have successfully accessed the server, version `5.6.51`. And if we type `show databases;`, we can still see the `ps` database. This is how state should be kept. You do not care about the container, because the container can be removed and started from a separate image. You care about the data that is kept in the volume. 49. $ docker volume directory: Of course, we can also just point at a directory where we would like to keep our state, instead of creating a volume. So if we type `docker volume ls`, we can see that we still have one volume, but at the same time we can run another container. Take a look. I will run another MySQL `5.6.51`, but instead of giving it the previous name and the local driver, et cetera, I will point at a directory: `$(pwd)`, the current working directory — I will point at the `$(pwd)/lib-mysql` directory and make it mounted at `/var/lib/mysql`. If I hit Enter, I have created another container. Of course, `docker ps` — I can see my container, up for three seconds. If I type `ls -la`, I can see that `lib-mysql`, owned by `root`, is already here. I can enter `lib-mysql` and I can see the data from `/var/lib/mysql` — of course with different ownership, so I cannot access it as my user. But I didn't have to create a volume (!): if I type `docker volume ls`, I see no new volume with the `local` driver. These bind mounts are useful if you have, for example, sockets that have to be shared between containers, or something else. 50. V. Runtime - Config and secrets: Applications require configuration parameters, to enable or disable something. 
Applications require secrets. Let's discuss possible ways to pass configuration parameters and secrets to the containers. Many applications require sensitive information: database usernames and passwords, application salts and hashes, access tokens for external APIs, and SSL/TLS keys. The naive, insecure method of putting the secrets into the container is to bake them into the application image. We can do that with a Dockerfile command (`COPY`), but this is a completely insecure way. It breaks the container image lifecycle, because it starts to do runtime work at buildtime. Moreover, secrets injected into the image can be easily restored by a threat actor. The secrets can be mounted as a volume, the same as with stateful applications, but that makes the secret difficult to share between two applications or two different hosts, and forces us to keep it in plain text. The most common method is to pass secrets and configuration parameters as environment variables. The most secure and nicest way is to manage the secrets using some kind of secret store: HashiCorp Vault or AWS Secrets Manager. At this point, we still use environment variables as well, but instead of putting the secrets into the variable, we just put an endpoint, a URI — metadata information — and the logic of the application looks up the secret. We can even point at a very specific version of the secret. Let's take a look at the summary. Only the secrets kept in a separate secrets datastore are easily encrypted in transit, easy to share among all of the applications, can be encrypted at rest, and are easy to rotate. Let's do some labs to understand the topic better. 51. $ secrets.txt: Let me quickly prove that it is not a good idea to put a secret in the Dockerfile at all. Let's `cat` the `Dockerfile`. As you can see, we are copying the secret, `secrets.txt`, to the temp directory, and in the next step we are removing the secret. 
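The Dockerfile in question is along these lines (a sketch of the anti-pattern):

```Dockerfile
FROM centos:8
# Anti-pattern: the secret becomes part of an image layer here...
COPY secrets.txt /tmp/secrets.txt
# ...and removing it in a later layer does not remove it from the
# parent layer, so it can still be recovered
RUN rm /tmp/secrets.txt
```

As a side note, newer Docker versions with BuildKit offer `RUN --mount=type=secret,...`, which exposes a secret only for the duration of a single build step, without storing it in any layer.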
So we take `centos`, we copy the secret, and in the next step we remove this secret. Let's `cat` the secret value — the secret value is `718`. Let's list the images one more time: we have just `centos:8` — nothing else. Show all images: still just the `centos:8` image. And let's build this image: `build .`. Normal steps: `FROM`, `COPY`, remove. Let's type `docker images`. We can see that we have an image, `f07`. But if we type `docker images -a`, we can see way more images. We can see an image that was created 12 seconds ago, called `f8b`. Take a look: what is the relationship between `f07` and `f8b`? So `docker inspect f07 | grep f8b`. And as you can see, `f8b` is the parent image of `f07`. Awesome. So let's run a container from the parent image and type `cat secret`. The secret has been automatically "restored". Do not put secrets into your Docker image. 52. $ docker run -e MYSQL_EMPTY_ROOT: As I said, environment variables are the most common way of configuring and injecting secrets into your application. Take a look. For the `mysql` image, we have many environment variables we can configure. This is what the operator, the creator of this image, assumed: they pre-provisioned this image to work with the following environment variables. The most important are `MYSQL_ROOT_PASSWORD`, `MYSQL_ALLOW_EMPTY_ROOT_PASSWORD` and `MYSQL_RANDOM_ROOT_PASSWORD`. One of these is necessary: either we specify the root password immediately, in plain text, or we allow an empty password, or we generate a random password. The other variables are nice too: `MYSQL_USER` and `MYSQL_PASSWORD`. If they are configured, the entrypoint will create the user with the given password. If we additionally specify `MYSQL_DATABASE`, this user will be granted all permissions to this database. 
So this user will have full access to this database. Let's do a lab. If we type `docker run mysql:5.6.50`, we will see the following problem: "Entrypoint for..." As I said, sometimes we need a script to pre-provision our app, to build a cache locally in the container, or to pre-provision the container to be working. In this case, the entrypoint creates the sample data for MySQL — like the root password — and prepares the permissions for the database to be running. We have to specify one of the three: either we specify `MYSQL_ROOT_PASSWORD`, or we allow an empty password, or `MYSQL_RANDOM_ROOT_PASSWORD`. Because this is a laboratory, for me it is completely sufficient to allow an empty password, so we will be able to access `root` without any root password. So, `true`. I will run it in detached mode. Let's check the `logs`. As we can see, the whole startup process is up and running, and the two last lines say: `mysqld` ready for connections, socket, port 3306, MySQL Community Server. If we type `docker ps`, we can see that this container is up for 22 seconds. If we enter this container, we can access MySQL without any problem. It just works. Let's do one more lab, with the user and the password, to show you how it works if we configure the user and the password. 53. $ docker run -e MYSQL_GENERATED_ROOT_PASSWORD: Before we jump to the lab with the user and the password, let me show you what happens if we configure the other environment variable, so `MYSQL_RANDOM_ROOT_PASSWORD`. Let's do that. <Enter> We have another container ID; `docker logs -f` on this container ID. Let's scroll up. Here it is: the entrypoint generated a `root` password. This is the generated `root` password. And let's jump here, type `docker ps`. We can see that we have `wonderful_taussing` — this is the MySQL container created four minutes ago with the empty root password. So let me prove that it actually works as expected. 
In that one, if I type `mysql`, I can access MySQL as root without any problem. Now let's enter the other one, `2931`. If I type `mysql`, I can no longer access it, but if I use the generated root password — so `b1a`... — so `mysql -p` and type the password, I can access it without any problem. 54. $ docker run -e MYSQL_USER: Let's do the last laboratory, with the MySQL user and MySQL password. I can pass many environment variables. Let's set `MYSQL_USER`, call it `user`, and `MYSQL_PASSWORD`, call it `pass`. So I'm generating a random root password and then setting `MYSQL_USER` and `MYSQL_PASSWORD` to `user` and `pass`. Let's add another one — `MYSQL_DATABASE` — and call it `db`. Enter. So I have three containers. The first container was created without `MYSQL_USER`, without `MYSQL_PASSWORD`, without `MYSQL_DATABASE`, and allowing the empty connection. The second one was created with a random root password, without `user`, `password` and `db`. And this one was created with a random root password, but with the user and with the password. So let's enter the last one: `docker exec -it`... `bash`. And if you type `env`, you can see that we have the following environment variables. And let's check the processes, or rather `docker container top`: the processes that run in this container have access to every environment variable. So if I enter and type `env`, I can see all environment variables, including the secret — my secret password — including the database and including the user. So if I type `mysql -u user -p` and type just `pass`, I can access MySQL. This is the downside of environment variables: if you put sensitive information, like a password, into an environment variable, whoever can read the environment — in many ways, like in `php` with `getenv`, or with just `env` in `bash` — will get access to the sensitive information. 55. VI. 
Networking: In this chapter, we will explain all of the networking internals in containers. We will discuss all possible drivers: none, host and bridge. None is just the first driver. It does not create any kind of interface or veth pair, and it prohibits external connections to the container as well as internal, container-to-container communication. It is useful for network isolation, or for containers that do not require any kind of network access but just do workloads, i.e. CPU and RAM usage. Now the bridge: the host has its own physical interface, naturally. With the bridge network, the container gets a veth interface, a virtual ethernet interface. This virtual interface is connected through the bridge (`docker0`) to the physical interface. With proper routing configuration on the host, we are able to pass requests from the host to the container interface. The host network shares the whole network and its interfaces, and binds the container to the namespace of the host. Let's do some tasks and exercises, and practice. 56. $ docker network ls: Let me show the networks that are available by default on the Docker host. If we want to list networks, we type `docker network ls`. And we can see that by default we have a `bridge` network, a `host` network and a `none` network. As we know from the theoretical part, the `bridge` driver creates the bridge, and there is a bridge between the veth interface and the physical interface on the host. The host network is the namespace of the host, and the `none` network isolates the container. Let's try to run containers in all of these and see how they behave. 57. $ docker network none: The easiest driver we can discuss is none. None completely isolates the container. And I have prepared an image to prove that such a container is isolated from internal and external access. Let's take a look at the following Dockerfile. This Dockerfile takes an image from `nginx`. 
So its parent is `nginx`, and it installs `ping` — I want to be able to make ICMP echo requests, just `ping`. Let's build this image and call it `net-test`. Because we do not have the `nginx` image locally, we are pulling it; then we are updating the packages and installing `ping`. Let's run two containers: the first container will be put in the `bridge` network, so the default, and the second will be put in network `none`. Moreover, let's publish the ports: we will publish port `1212` to `80` for the container running in the default `bridge`, and `1213` for the container running in network `none`. To configure the specific network, we have to add the parameter `--network` and then the name of the network we are going to run the container in. So, `1213`, `net-test`. Now let's type `docker ps`. As you can see, we have two containers running. Both are derived from the `net-test` image. The first, which is put into the `bridge`, as we can see, has a port published — port `80` published to `1212` on the host. And the second, which is put in the network `none`, does not have any information on ports. Take a look. Let's jump into the container that is put into the `bridge` network and verify whether we can `ping` anything, access the external network or whatever. Let's jump into `263`, `bash`, and `ping` the standard DNS address. As we can see, we have our response. Let's do the same, but for the container put into network `none`. ... "Network is unreachable." Let's verify why it is locked down. If we inspect both, we can see that `263` is put into network `bridge`, and this network has its `NetworkGateway`, `IpAddresses`... et cetera, et cetera. And if we inspect the container `a700`, we can see that, of course, there is a network this container is attached to — it's called `none` — but there is no IP address, no gateway, no prefix. 
No IPs are delivered for this container. 58. $ docker network host: The other important network driver we can discuss is network `host`. Let's run two containers: the first will be put into the `bridge` and the second into the `host`. And I will show you how the port forwarding works, and how a process is actually bound to the very specific host (and port). So let's run `net-test` as before, publishing `1212` to `80`. Now let's run another `net-test` container, but instead of putting it into the `bridge` network, we will put it into network `host`, and try to publish `1213`. So, `--network=host`. This configures the network for this specific container. And let's apply this network to the other container created from the `net-test` image. "Published ports are discarded when using host network mode." It means that nothing happened with the publishing. If we take a look, we can see that `716` does not have any information on published and exposed ports, and `22de` has the information that `1212` is forwarded to `80` in this container. Let's escalate our privileges and type `netstat`. Let's show all of the information about listening sockets: show processes, show listening sockets, do not translate ports to names, show TCP and UDP. And as you can see, `1212` is listening, and `docker-proxy` is the process that allocated the socket; at the same time, `80` is listening, and the `nginx` master is the process that allocated the socket. If we type `ps` — well, let's show the tree — we can see that we have two `nginx` master processes, but only the first one allocated the socket. Let's enter these containers. If we enter the container that is put into the `bridge` and try to `curl` at `localhost`, it works. `localhost` works because we `curl` our local `nginx` container. 
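The two runs compared in this lab can be sketched as:

```shell
# bridge: port 1212 on the host is forwarded to 80 by docker-proxy
docker run -d -p 1212:80 net-test

# host: -p is ignored ("published ports are discarded"); the container
# binds directly inside the host's network namespace
docker run -d --network=host net-test
```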
Let's jump into the container which is listening on the `host` network. Enter, and `curl localhost`. It's working as well, because this container is listening on `80`. So let's do one more thing: let's `curl`, from the container on the host network, `localhost:1212`. It's working. Why is it working? Because we have attached the host network — and all listening sockets on this host network — to this container. Precisely for that reason, we could also try to `curl` or `ncat` port `22` from this container. As you can see, we were able to connect to the SSH daemon from the host network. Let's try to do the same from the perspective of the bridge. Impossible — connection refused. The host network exposes the whole host network namespace to the container. 59. $ docker network create bridge: Inter-container communication. If we type `docker network ls`, we can see the default `bridge`, `host` and `none`. We discussed these network drivers. So assume we would like to run two containers and connect them to each other, like an application and a MySQL database, or anything else. Let's run two `nginx` containers, `docker run -d nginx`, and let's call them `nginx-10` and `nginx-20`. So `--name=nginx-10`, and `nginx-20` — in reverse order. So we have one container started five seconds ago and the other three seconds ago. They both are members of the `bridge` network. Let's verify it. If I type `docker inspect`, I can see that the first is connected to the `bridge` network, and if I type `docker inspect` on the second one, I can see that it is connected to the `bridge` network as well. Can they reach each other? `docker exec -it`... `bash`. If I type `curl nginx-20`: name is not resolved. `curl nginx-10`: name is not resolved. The question is why? Because the default `bridge` does not provide links to the containers, does not provide DNS resolution by default. You can have that without any problem, but with your own created network, so your own created `bridge` network. 
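The approach with your own `bridge` network can be sketched as follows (names as used in this lab):

```shell
docker network create -d bridge myown
docker run -d --network=myown --name=nginx-10 nginx
docker run -d --network=myown --name=nginx-20 nginx

# Inside a user-defined bridge, container names resolve automatically
docker exec -it nginx-20 curl 
```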
So let me do `docker ps` and remove these containers. Let me create my own `bridge`: `docker network create -d bridge myown`, and run the same commands — but instead of putting the containers into the default `bridge`, let's put them into the network `myown`. Ten is running, twenty is running. If I type `docker ps`, I can see both of them running as members of the network `myown`, with their own aliases and names. So let's verify how it works. Let's enter `nginx-20`. And `curl nginx-10`. Accessible. `curl nginx-20`. Accessible. In your own `bridge`, you can access the containers within the same bridge network by name without any problem. Of course, there are many differences between user-defined networks and the default `bridge`: automatic DNS resolution without linking, and the isolation it provides — containers communicating over a user-defined bridge are guaranteed not to be able to communicate across other bridges, so only the scope of that network works, et cetera, et cetera. I would highly recommend you to just jump in and read about it. Even if you are not an operator, this is interesting (!) if you run Compose, which we will do in minutes. 60. $ docker network create host: While you can create another `bridge` network without any problem, you cannot create another host network. Let us try to create another `host` network called `host2`. And you can see: only one instance of the host network is allowed. 61. $ docker network create null: At the same time, you cannot create another network with the `null` driver. If we type `docker network ls`, we see that we have a `none` network with the `null` driver — this is the isolation network. If we try to create a `dd` network with the `null` driver, we get the information: only one instance of the null network is allowed. 62. VII. Compose: Let's talk about Compose.
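The user-defined bridge demo can be recreated with the commands below (a sketch; a Docker daemon is required for the real thing, so the command list is emitted rather than executed here):

```shell
# User-defined bridges get Docker's embedded DNS, so containers resolve
# each other by --name; the default bridge does not.
cmds='
docker network create -d bridge myown
docker run -d --name nginx-10 --network myown nginx
docker run -d --name nginx-20 --network myown nginx
docker exec nginx-20 curl -s http://nginx-10    # resolves via embedded DNS
'
printf '%s' "$cmds"
```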
A possibility to connect your application into a stack of containers locally. How do you build a fast stack of containers locally? Every application lives in an ecosystem. You need a database. Probably you need a caching service or mocked external APIs. In the end, we would like to start our application behind a web application firewall. Whatever. The solution to build a fast stack of dependencies using containers is docker-compose. Technically, Docker Compose is just a tool that analyzes a text file (`docker-compose.yaml`) written in YAML. This file contains all of the necessary information, starting with the API version and ending with the services and volumes you have to attach. We have many services. Some services rely on each other, and on very specific versions of others. When we have to upgrade one, we have to **remember** to update **the same** components in other stacks. So we have dependency hell. The natural solution to these problems is to build a few main layers and keep these layers up-to-date. As you know the definitions and the theoretical terms, let's do some laboratories to understand the topic better. 63. $ docker-compose up: Well, let's start with something simple. Currently we have an almost empty `docker-compose.yaml` file: we have just specified the version, without any service, without any volume and without any networks. If we type `docker network ls`, we can see that we have only `bridge`, `host` and `none` — I removed the networks we created in previous labs. If we type `docker ps`, there is no container running; probably there are some images left from previous labs. That's it. So let's try something simple. Let's add an `nginx` container to our stack. To add it, we specify an `nginx` service. This `nginx` service comes from the image `nginx`. And that's it — don't do anything else at this point. Take this file and, to bring the whole stack up, type `docker-compose up`.
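The minimal file described above might look like this (a sketch; the exact `version` value depends on your Compose release):

```yaml
version: "3.8"

services:
  nginx:
    image: nginx
```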
And if we type `docker-compose up`, it will say: building natively (native build — doesn't matter), creating network — so it created a network with the default network driver, in our case `bridge` — and creating `docker-compose_nginx_1`, which we attach to, and we can see the very standard output. If we open the next terminal and type `docker ps`, we can see that our `nginx` started 29 seconds ago, with the same ID. As you can see, `nginx_1` has a composite name: first the project name, which comes from the directory name (you can change it), then the service name, and then the index of the container — the first container, the second, et cetera. And its normally exposed port. If we want to run in detached mode, just close it and add `-d`: it will bring the whole stack up, but we will not attach to it. If we type `docker-compose ps`, we can see that we have `docker-compose_nginx_1`, created a minute ago and up a few seconds. 64. $ docker-compose.yaml: As we have Docker Compose installed, let's discuss the `docker-compose.yaml` file. As we said, a `docker-compose` file is nothing else than just another YAML ("Yet Another Markup Language") file. It is a simple file that mostly contains four sections. `version` specifies the version of the docker-compose API — how this file looks. `services`: in this section, we have the list of containers that are connected to each other. In `volumes`, we have the list of volumes that we will be using and attaching to the containers. And `networks` is the list of networks we will be provisioning while building this stack. 65. $ docker-compose volume: As we know from previous laboratories, MySQL creates its own volume by default. So if we type `docker ps`, we can see that our MySQL is up and running.
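The four sections described above fit together like this (a sketch with placeholder names `app`, `app-data` and `app-net`):

```yaml
version: "3.8"      # docker-compose API version

services:           # the containers of the stack
  app:
    image: nginx

volumes:            # named volumes attached to services
  app-data:

networks:           # networks provisioned for the stack
  app-net:
    driver: bridge
```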
It's `docker-compose_mysql_1`. In effect, if we type `docker volume ls`, we can see that there is one volume with a long hashed name. If we `docker inspect` the container, we can see that it has a mount: that anonymous volume, mounted at `/var/lib/mysql` inside the container. I wouldn't recommend leaving it to the tool to manage this volume, because you could remove it, forget about it, et cetera, et cetera. So let's specify the volume — but first, let's put the whole stack down. If we type `docker-compose down --help`, what does it actually do? The following: it stops and removes containers, networks and images created by `up`. But by default it does not remove images, and it does not remove the orphans. The `-v` flag will remove the named volumes declared in the `volumes` section of the compose file, along with anonymous volumes, and `--remove-orphans` will remove all of the containers that were orphaned. So put the whole stack down with `-v`. And now let me create and attach a managed volume. Of course, we need to add a `volumes` section here, and we would like to add a volume called `mysql-data` to go with the service `mysql`. This `mysql-data` has the driver `local`. Then, on the `mysql` service, we specify `volumes` — and here `volumes` is an array. I want to mount `mysql-data` at `/var/lib/mysql`: so `mysql-data` should be mounted at `/var/lib/mysql` in the MySQL container. Let's try it: `docker-compose up -d`. We are recreating everything — the network — and we are creating a volume. If we type `docker volume ls`, we can see that we have `docker-compose_mysql-data`: its name is taken immediately from the project name and the volume name. And if we type `docker-compose ps`, everything is nice and clean — our MySQL container is up. And when we type `docker-compose down -v` — the `-v` is important, because `down` alone only stops the containers, removes them from the host and removes the network — it also removes the data. 66.
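Putting the pieces together, the named-volume setup described above might look like this (a sketch; the environment variable is the one used later in the MySQL lab):

```yaml
version: "3.8"

services:
  mysql:
    image: mysql:5.6.50
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "true"
    volumes:
      - mysql-data:/var/lib/mysql   # named volume instead of an anonymous one

volumes:
  mysql-data:
    driver: local
```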
$ docker-compose flask: The application alone — `nginx` alone, or `mysql` alone — doesn't do much. Let's do something useful. I want to change `nginx-20` into a Flask application. This Flask application will be our Flask app whose `Dockerfile` we built in the Buildtime chapter. So instead of taking an image for Flask, we will use `build`, and we will build with the context `.`. As you can see, I have put a `Dockerfile` in this directory — the `Dockerfile` we created while finishing the Buildtime laboratories — and, as you can see, we also have `requirements.txt` and `src`. So let's run `docker-compose ps`, put the whole stack down with `docker-compose down -v`, and try to build it: `docker-compose up -d`. Take a look: everything that was needed was created natively, and we automatically built our application without typing any `docker build`, without doing anything — all necessary dependencies were built automatically. As you can see, we have a warning that the image for service `flask` was built because it did not already exist. If we type `docker-compose ps`, we can see that we have `flask` with port `5000`, `mysql` with port `3306` and `nginx` with port `80`. If we type `docker ps`, we can see that they are up and running: `docker-compose_flask` ran the normal Python command. If we enter this container with `exec -it` ... `bash` and `curl localhost:5000`, we can access it without any problem. The next step is to create an isolated environment, so that we only expose port `80` on `nginx` — without exposing any ports on Flask directly — and the Flask app is still built automatically, without exporting any other ports. 67. $ docker-compose down / up: Let me prove that everything will work from scratch. To do that, I'll remove everything, and I will remove the images — all images used by the services.
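A service built from a local `Dockerfile` instead of a pulled image can be declared like this (a sketch; the context `.` is the directory containing the `Dockerfile`, `requirements.txt` and `src`):

```yaml
services:
  flask:
    build:
      context: .
    ports:
      - "5000:5000"
```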
I'll remove volumes and remove orphaned containers. Everything is nice and clean — nothing more found, it's clean, nothing has changed. And now what we're going to do is bring it up again: `docker-compose up -d`. We are pulling all necessary images and rebuilding everything. As you can see, we create `nginx`, `flask` and `mysql`; `docker ps` shows `nginx`, `flask` and `mysql`. Our intention is to reach the situation where we expose only port `80` and make `nginx` work as a reverse proxy. Let's start with exposing port `80`: `docker-compose up -d`. Only `nginx` had to be recreated. `docker ps`: we can see it's up with port `80`, and if we `curl localhost:80` from the host, we can see `nginx`. Now we want to make our `nginx` container work as a reverse proxy. To do that, instead of taking the stock image, let's build it, like we did with Flask. But if we left the build as-is, it would take the Flask `Dockerfile` from the context `.`. So do not change the context — keep the context `.` — but specify the Dockerfile: `dockerfile: Dockerfile.nginx`. Rebuild it: `docker-compose up -d` — and remember, if we add `--build`, it will rebuild the images. If we type `docker ps`, we can see it's up: now we have `docker-compose_nginx` built from source instead of the stock `nginx` image. I don't like the service name `nginx-20`, so let me change it and say that this is just `nginx`. `docker-compose up -d --build` failed — the port is already allocated. It cannot start the service because the port is held by the old container. OK — stop the old `nginx`; we can stop only that specific service. And now, as you can see, we have the rebuilt `nginx` exposed, and if we `curl localhost:80`, we are able to access the Flask app. Why are we able to do that? Because we rebuilt the `nginx` container image and we copied our configuration into it: we take `nginx` and `COPY` the config into the `nginx` configuration directory under `/etc/nginx`.
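Such a reverse-proxy build might look roughly like this (a sketch; the filenames `Dockerfile.nginx` and `nginx.conf`, the upstream name `flask` and port `5000` are assumptions matching the services discussed above):

```
# Dockerfile.nginx — replace the default site with our proxy config
# FROM nginx
# COPY nginx.conf /etc/nginx/conf.d/default.conf

# nginx.conf — forward everything to the Flask service by its compose name
server {
    listen 80;
    location / {
        proxy_pass http://flask:5000;
    }
}
```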
If we take a look at our config, we see that we created a `location /`, and this location has a `proxy_pass` to `flask`. So we proxied the connection to Flask: `flask` is the name of our backend service, and this backend is connected to the frontend through the proxy. 68. $ docker-compose mysql: Let's try something a little bit more complex. Let's open the compose file and try to add a MySQL database here. Of course, we will take the image `mysql` — let's take `mysql:5.6.50`, because this is what we had locally — and type `docker-compose up -d`. We know that the `mysql` image will not start without a properly configured environment variable. Take a look: building natively, starting `docker-compose_nginx-20_1` — these are the services we provisioned before — and we are creating `docker-compose_mysql_1`. If we type `docker-compose ps`, we can see that the first two are up, but `mysqld` didn't start. Type `docker ps -a`, and let's look at what happened in this container with `logs -f`. We can see, of course, the same issue we had before: you need to specify one of the following environment variables. Let's specify it. To set an environment variable, you add `environment:`, and then it's either a list or a map. Let me add `MYSQL_ALLOW_EMPTY_PASSWORD` and make it `true`. It's important that the values here should be strings: if you write something that could be treated as a boolean, like a bare `true`, it will fail. Let's bring it up once again and you'll see: `MYSQL_ALLOW_EMPTY_PASSWORD` contains `true`, which is an invalid type — it should be a string, number or null. So we have to make it a string. Do it, bring it up once again, and as you can see, we are creating `docker-compose_mysql_1`. `docker-compose ps`. Important.
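The failing and working variants differ only in quoting — in YAML a bare `true` parses as a boolean, which Compose rejects for environment values (a sketch):

```yaml
services:
  mysql:
    image: mysql:5.6.50
    environment:
      # MYSQL_ALLOW_EMPTY_PASSWORD: true    # boolean -> "invalid type" error
      MYSQL_ALLOW_EMPTY_PASSWORD: "true"    # string  -> accepted
```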
Notice that if we run `docker run -d nginx` — another `nginx` container outside the stack — and then type `docker-compose ps`, we can see only the services related to our docker-compose stack. And as you can see, we have `mysql` running on port `3306`. Success. If we type `docker ps`, it's up and running. If we jump into the logs for `mysql` — `docker logs -f` and the ID — we can see that it is listening and ready for connections. Perfect. 69. $ docker-compose network: Let's create a few more networks. For example, maybe we would like to create `frontend` and `backend` networks, and maybe a separate network for the MySQL database. Let's create three networks: a network `backend` with `driver: bridge`, a network `frontend` with the `bridge` driver, and a network `db`, also `bridge`. And now let me connect the MySQL database to the network `db`: `networks: db`. Next, I would like to connect `nginx` to the network `frontend`. As we type `docker-compose ps`, we can see that nothing is up and there are some networks that are defined but not used by any service — a pretty useful warning. Let me type `docker-compose up -d`. And what's going to happen? Take a look: "Some networks were defined but not used" — it is the `backend`. And what does it do? It creates `frontend` and `db`, but it doesn't create the `backend` network. It creates the volume `mysql-data` with the `local` driver, and then it creates our containers, like `nginx-20_1`, `nginx_1` and `mysql_1`. If we type `docker volume ls`, we can see `mysql-data`. If we type `docker network ls`, we can see `docker-compose_db` and `docker-compose_frontend`. Now let's connect the `backend` network to the second `nginx`, `nginx-20`, and run `docker-compose up -d`. Take a look: now we use the `backend`, so the `backend` was created, there is no warning, and we have recreated `nginx-20`. `docker-compose ps`.
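The three networks and their attachments can be written out like this (a sketch; service names match the lab, and `backend` stays unused until a service references it):

```yaml
services:
  mysql:
    image: mysql:5.6.50
    networks:
      - db
  nginx:
    image: nginx
    networks:
      - frontend

networks:
  frontend:
    driver: bridge
  backend:        # defined but not used -> Compose warns and skips it
    driver: bridge
  db:
    driver: bridge
```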
As you can see, all of the containers are up (`docker-compose ps`), all the containers run (`docker ps`), and `docker network ls` shows `backend`, `db` and `frontend` — but all of these are `bridge` networks. 70. $ docker-compose locust: Although we have shown examples of how to use a Python image or a Go image to build your own environment, this is not necessarily the only use-case. For example, we can use the `docker-compose` file from Elastic and build a whole Elasticsearch cluster that is ready to be used: it has three Elasticsearch nodes, one shard — the whole Elasticsearch stack is up and running, and we can build it with just one command. We can use `locust`, the tool for load-testing, so that we run the master, run the workers and verify our Locustfile tests. We can do load testing, or integration testing, from Locust without any problem, and we can spin up as many workers as we want: they will register themselves with the master, and it just works. So only your imagination limits you. There are other use-cases: you can build an image with all the CI tools that are necessary to test your application. You can build a toolbox for operators. You can build a toolbox for pentesters and auditors. You can build a toolbox for blue-teamers. If you are using any kind of complex environment, you can build a toolbox with Terraform, with Terragrunt, with all of the stuff you are using to build your infrastructure. Many, many use-cases — you can use it whenever you wish to. 71. VIII. FAQ: In this chapter, we'll answer frequently asked questions. We are often asked: what's wrong with SSHD in the container? How can I connect over SSH to the environment running in the containers? This is often a misunderstanding of the containerized world. Mostly, in containers, we do not enter or access the application at runtime.
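A Locust master/worker stack like the one mentioned can be sketched roughly like this (the `locustio/locust` image name, mount path and flags are assumptions based on upstream examples, not part of this course's files):

```yaml
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"              # Locust web UI
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master
  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host master
```

Scaling the workers is then a one-liner, e.g. `docker-compose up -d --scale worker=4`.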
We build our applications in such a way that you are able to troubleshoot them from an APM, logging or monitoring solution. If a problem happens — and they often happen, it's natural — we add reliability to our application. Moreover, we add information to the logs and we add better exception handling. But we do not troubleshoot the application live. In practice we can, of course, exec into the container with `docker exec`, like we did in previous labs, but we do not enter with SSH. At the same time, we do not run `crond` in the container — it mostly does not make any sense. Of course, you can run scheduled tasks and jobs using an abstraction layer, for example a Kubernetes CronJob, but such a Job is executed in its own container. Avoid clever attempts to implement `crond` (legacy!) inside containers. If you have to, it is probably a sign that you should not use containers in your project, or that you misunderstood containers. 72. $ docker host security: Let me quickly prove it to you. If I have access to the socket, or I can run a docker container, I can escalate my privileges to the superuser pretty quickly, in many, many ways. So let's run a container, `docker run -it`, and let's mount the root filesystem at `/rootfs`; let's also mount `/run/docker.sock` to `/run/docker.sock`. Let's run Ubuntu with a Bash shell. If we enter, we can see that we have user ID zero, so we are root. So we can `chroot /rootfs` without any problem. Here we are. I can just `apt install vim` without any problem. Now I can change the password: I can type `passwd` for root in the host system — `haslo123`, and `haslo123`. Pretty secure password. :Yay: Now I can exit this box. Now I will remove all my keys, and now I will try to log in as root, so `[email protected]`, and I type `haslo123`. And I'm in. So whenever I can run such a container, I can just change the password. Here we are.
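The escalation from the lab can be sketched as follows (printed, not executed: it requires a Docker daemon and it rewrites the host's root password):

```shell
# Mount the host root filesystem and the Docker socket into a container,
# then chroot into the host from inside it.
esc='docker run -it -v /:/rootfs -v /run/docker.sock:/run/docker.sock ubuntu bash'
echo "$esc"
# Inside the container (uid 0 by default):
#   chroot /rootfs      # the host filesystem becomes "/"
#   apt install vim     # proves write access to the host
#   passwd              # sets a new root password on the host
```

This is why access to `docker.sock` (or membership in the `docker` group) is equivalent to root on the host.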