
Docker Essentials: From Zero to Mastery

Taught by Olha Al


Lessons in This Class

  1. Intro (1:59)
  2. What is Docker? Key Differences from Virtual Machines and Its Advantages (7:21)
  3. Installing Docker and Writing Your First Dockerfile for a Python Application (5:23)
  4. Writing a Dockerfile for a JavaScript App and Solving Port Conflicts (3:38)
  5. Working with Docker Hub: MongoDB Image Setup. Connecting to a MongoDB Database in a Docker Container (10:53)
  6. Understanding Docker Volumes: Types and Use Cases (7:25)
  7. Exploring Docker Volumes: Anonymous, Named, and Host Volumes in Detail (7:31)
  8. Understanding the Difference Between --mount and -v for Volume Mounting in Docker (3:20)
  9. Introduction to Docker Compose: Creating Your First Compose File for Flask and MongoDB (7:37)
  10. Introduction to Dockge: Easy Docker Compose Management and Understanding Image Layers (4:31)
  11. Advanced Practice: Archiving Docker Images and Restoring Them Across Machines (3:57)
  12. Bonus: Introduction to Pyenv and Virtualenv: Managing Python Versions and Environments


22 Students

About This Class

Learn Docker step by step in this practical and beginner-friendly course. We’ll cover everything you need to get started with Docker, including how to use basic Docker commands, create and customize Dockerfiles, and write your first Docker Compose file for running multi-container applications.

You’ll also learn how to manage Docker containers efficiently using Dockge, handle data persistence with all types of Docker volumes (named, anonymous, and host-mounted), and save Docker images as archives for easy sharing.

In addition, we’ll dive into Docker Hub, where you’ll learn how to upload your own images, download images from the registry, and use them in your projects.

As a bonus, we’ll explore Pyenv and Virtualenv to help you manage Python versions and virtual environments effectively.

By the end of this course, you’ll have the skills to confidently use Docker for development, deployment, and managing containerized applications. Perfect for beginners and developers looking to boost their productivity with Docker.

Meet Your Teacher


Olha Al

Teacher
Level: Beginner



Transcripts

1. Intro: Hi, guys. Welcome to the Docker Essentials course. In this course, we're diving into Docker from the very beginning. Whether you're a complete beginner or just looking to refine your skills, you've come to the right place. We'll start with the basics, exploring what Docker is and why it has become such a powerful tool in modern development. From there, you will learn how to use Docker's core commands, enabling you to manage containers and images with confidence. Next, we will move on to creating and customizing Dockerfiles, where you will gain the skills to build your own Docker images step by step. We will work through two examples: one for Python and one for JavaScript. In addition, we will explore Docker Hub, the popular registry where you can store, share, and download Docker images. You will learn how to upload your own images to Docker Hub and how to download existing images from it, streamlining your workflow and enhancing collaboration. We will look at what Docker Compose is, and we will write our first Docker Compose file. To make managing your containers even easier, I'll introduce you to a handy tool called Dockge, which simplifies container management and boosts your productivity. We'll also cover practical techniques like saving Docker images to an archive, a useful skill for backups or for sharing images between systems. In this course, you will also learn about the different types of volumes Docker offers. And as a bonus, I'll guide you through tools like pyenv and virtualenv, which are essential for managing Python versions and virtual environments. By the end of this course, you'll have a solid understanding of Docker and the confidence to integrate it into your workflow. Let's get started and take your skills to the next level.

2. What is Docker? Key Differences from Virtual Machines and Its Advantages: Hello, guys. Ready to simplify your software development process and get rid of "it works on my machine" problems? Then you're in the right place. Docker is a powerful tool that changes the way developers build, ship, and run applications. It uses containerization technology to package applications and their dependencies into standard units called containers. So why is Docker important to learn? The first thing is consistency: Docker ensures that applications run the same way across different environments, from a developer's laptop to production servers. Next, portability: containers can run on any system that supports Docker, making it easier to move applications between different platforms. Scalability: Docker makes it simple to quickly scale applications up or down, which is perfect for cloud deployments and microservices architectures. Docker images can be versioned, making it easy to roll back to a previous version if needed. Isolation: containers provide a level of isolation between applications, enhancing security and reducing conflicts. Speed: containers can be started and stopped in seconds, speeding up the development and deployment process. Efficiency: Docker containers are lightweight and share the host system's kernel, using fewer resources than traditional virtual machines. By the way, as I said, Docker provides isolation, but speaking of virtual machines, don't be confused: this isolation is somewhat different from what virtual machines provide. The main difference between a virtual machine and Docker lies in the approach to virtualization and resource isolation. A virtual machine emulates a complete computer system, including hardware, operating system, and additional software.
Each virtual machine has its own separate operating system running on the physical host machine. It requires separate resources such as memory, CPU time, and disk space. Launching and managing virtual machines requires significant computational resources and time. I also want to note that virtual machine images are typically much larger, because they include a full operating system along with all necessary libraries. Let's return to Docker. Docker uses the concept of containerization, where applications and their dependencies run in isolated containers. Each container shares the host operating system but remains isolated from other containers. Containers share host resources such as the operating system kernel and CPU cores, enabling efficient resource utilization. Launching and managing containers is a quick and lightweight process, since they use shared operating system resources, as I said before. Containers use base images that contain only the necessary components and can be quickly created and deployed from those images, and containers are lightweight, as they include only essential components and dependencies. So, as advantages of Docker over virtual machines, we can note the following points: faster container startup and shutdown, efficient utilization of system resources, easier scaling and management of containers, application standardization and portability, and convenience in development and deployment. However, virtual machines may be more useful in specific cases that require complete isolation or support for different operating systems. Docker, on the other hand, is commonly used for application deployment, microservice architectures, and rapid development and testing. In today's world, Docker has become an essential tool. Docker eliminates "it works on my machine" problems by providing consistent development environments. Docker makes it easier for a team to share and collaborate on projects, as everyone can work with the same containerized environment, and Docker is ideal for building and managing microservices-based applications. Besides, Docker integrates seamlessly with CI/CD pipelines, automating the testing and deployment process. Well, before we continue, let's talk about the important components necessary for a general understanding of Docker technology. A Dockerfile is a text file that contains instructions for automating the creation of Docker images. It defines all the steps and configurations required to build a container with a specific runtime environment and software. To build an image from a Dockerfile, you use the docker build command, which reads the Dockerfile and executes all its instructions to create the image. An image in Docker is a template or blueprint used to create containers. It contains all the necessary components to run an application or service, including the operating system, executable code, dependencies, configuration files, and other resources. The number of containers that can be run from a specific image is limited only by your requirements and resources. A container in Docker is a separate, isolated environment that contains all the necessary components to run a specific program or service. It provides software isolation, including dependencies, libraries, configuration, and the execution environment. If you want to instruct Docker to perform a specific action, such as creating a new image or launching a new container, you'll need a command-line interface tool. The CLI allows you to work in the terminal and execute commands.
It's not always necessary for the Docker CLI and the Docker host to be on the same machine; they can be on different servers. For example, from your local computer, you can use the CLI to interact with a remote server where Docker is installed. Here I show the typical, common commands that you will use most frequently. Let me quickly explain how it works. docker build will create a new container image, from which we can start the required number of containers. Containers are started using the docker run command. There is also another entity, the registry. It's centralized storage where developers can upload, store, and share their images. If you haven't created images from scratch using a Dockerfile, you can use the docker pull command, which will automatically download an image. Then, with the docker run command, you can start containers. Today we will get acquainted with Docker Hub. Docker Hub is a great opportunity to use ready-made images. It's a fantastic resource for using pre-built images, offering a vast collection of images created by other developers. So you don't need to create your own image from scratch with a Dockerfile: simply select a pre-made image, download it using the docker pull command, and you can start using it right away in your project. Cool. But first, let's write your own Dockerfile and build your own images to start your app in containers. So let's get started.

3. Installing Docker and Writing Your First Dockerfile for a Python Application: Welcome back, guys. The process of installing Docker may change over time as new software versions are released and updated instructions become available. Therefore, the best approach is to check the official Docker website for the latest information and follow the provided instructions if Docker is not already installed. For Windows and macOS, you simply download the installer and run it on your local computer. For Ubuntu, you will need to execute several commands according to the instructions. There are graphical user interfaces available, which are more user-friendly, but knowing how to work with the terminal is important, because when working on a server without a user interface, you must be able to work with the console. I will demonstrate working with Docker through the terminal. I will show several examples of how to create Dockerfiles. The first one will be for Python. I'm going to create a simple Flask application with "Hello, World". Don't worry if you don't know Flask: this will be the simplest Hello World application, and you just need to write a couple of lines of code. This will be necessary for you to understand how the Dockerfile works. As a second example, we will dockerize a JavaScript app. It will also be the simplest JavaScript app, with something like Hello World. So I create a file and import Flask. Flask is a Python framework. Here we create an instance of the application, define the homepage where the message will be displayed, and run the Flask server if the file is executed directly, not imported. With virtualenvs, I check my virtual environments, and I see that I already have one, so I activate it. For those who are not familiar with that, you can watch my bonus video at the end of this course. It's a great tool that allows you to set up different development environments: everything you install there will be isolated from the system environment. The command pip freeze will show me all the libraries I have in my virtual environment.
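The transcript describes this app but its code isn't shown in the text, so here is a minimal sketch of what such an app.py could look like; the file and instance names are assumptions chosen to match the Dockerfile discussed next:

    # app.py - a minimal Flask "Hello, World" application (hypothetical reconstruction)
    from flask import Flask

    app = Flask(__name__)   # create the application instance

    @app.route("/")         # define the homepage route
    def hello():
        return "Hello, World!"

    if __name__ == "__main__":
        # run the development server only when executed directly, not when imported
        app.run(host="0.0.0.0", port=5000)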
I'm installing Flask here and running the application, and we can see our work in the browser. Now let's package our work into Docker. Let's write our first Dockerfile. I create the Dockerfile, and here I'm going to use Python 3.9. This line specifies the base image for your Docker container; in this case, it's the official Python image, version 3.9, as I said. Then I need to install the Flask and Gunicorn packages using pip. pip is a package manager, Flask is a Python framework, and Gunicorn is a server used to serve the Flask application in a production environment. Then I set the working directory: it means that from this point, all commands will be executed in the /app directory inside the container. If the directory doesn't exist, Docker will create it. Next, I copy the contents of the current directory, where the Dockerfile is located on my host machine, into the /app directory inside the Docker container. This typically includes your Flask application code and any other necessary files. Next, I'm going to expose port 5000, which means the container will listen on port 5000 at runtime, and then I specify the default command to run when the container starts. In this case, it runs Gunicorn to serve the Flask application. Here we start the Gunicorn server, tell Gunicorn to listen on all available IP addresses, because we set 0.0.0.0, on port 5000, and then specify the application entry point. It assumes that your Flask application instance is named app and is defined in a Python file named app.py. So we have our first Dockerfile. Next, we are going to create an image from this Dockerfile. Why? Because from this image we are going to run our first container, where we will have the Flask app that we created earlier. The command docker build is used to create a Docker image from a Dockerfile. Then -t, a flag, allows you to tag the image with a name; in my case, it will be myapp. The dot at the end of the command specifies the build context, which is the directory containing the Dockerfile and any other files needed for the image; in my case, the dot represents the current directory. Docker reads the instructions and executes them step by step, and at the end, we have the image. The command docker images shows us all the images that we already have, and here we see our myapp image. We have the first image. Let's create another one.

4. Writing a Dockerfile for a JavaScript App and Solving Port Conflicts: Welcome back, guys. So let's create the second Dockerfile, for a JS app. Now I create a docker-js folder, and inside, I'm going to create a hello.js file. I write here just "Hello, World": it prints a message to the console. In real projects, console output helps developers debug their code by providing real-time feedback, but in my case, it's just a simple file with JS code that we are going to package in Docker. Let's create another Dockerfile. In this Dockerfile, the base image is Node.js 14. This will be the initial layer in our Docker image, and all subsequent layers build upon it. When we start building our Docker image, Docker will pull the node:14 image from Docker Hub if it's not already present on the host machine. This image includes Node.js version 14 and its runtime environment. Then I set the working directory, and subsequent instructions will be executed relative to this directory. Then I copy a file from the host machine to the container; in my case, it will be hello.js. And then I specify the command to run when the Docker container starts.
That will run our file. Let's move to our docker-js folder, and here let's build our second image. I use the same command, just a new name. Now we are building a new image from the new Dockerfile for a different project, and here we can see our second image. Docker stores all images in its own storage system; the location on your computer may vary depending on the operating system settings and the Docker version. If you are curious about the location, you can use the docker info command. The path to the local storage might look like this, or like this, but we don't need this information for our work; in most cases, we won't need to access this location. I haven't installed Node.js on my local computer, so theoretically I can't run this app directly, but the docker run command starts a container from the image we created, where we have all the dependencies that we need, and we can see our Hello World. Now let's start a new container with the myapp image. We use the same command, docker run. Then we have to map a port on the host machine, I mean our laptop or computer, to a port on the Docker container, and at the end, I specify the name of the Docker image from which the container will be created. And we get an error: the port is not available. Let's check what happened. My quick search shows that on macOS Monterey, Control Center is listening on ports 5000 and 7000; the port is used for AirPlay functionality. I don't want to turn anything off, and it's not difficult for me to change the port. So, in the container, we will keep port 5000, but we will map it to port 5001, because the previous one was occupied by my computer, as it turns out. And there we have our application in the browser.

5. Working with Docker Hub: MongoDB Image Setup. Connecting to a MongoDB Database in a Docker Container: Welcome back, guys. Creating an image from a Dockerfile is not the only option. As I mentioned earlier, we can use pre-existing images. Let's talk about Docker Hub. It's a cloud-based service where you can find and share Docker images. It's like a big app store, but instead of apps for your phone, it's for software containers. We already know that a Docker image is like a blueprint or template for creating containers, and you can search for and download pre-made Docker images. For example, if you need a database, you can find its image on Docker Hub and just pull it to your computer. To better understand what it is, let's push the app we created to Docker Hub. For this, you need to be registered; if not, creating a new account won't take long. I'm already registered, so I will proceed with the further steps. To be able to upload your images to the website, you need to log in. You can have either a private or a public repository; it's up to you, very similar to GitHub. Previously, if you had a public repository, you could push without a token, but since November 2021, Docker has offered personal access tokens as a more secure way to authenticate with Docker Hub, replacing the traditional username and password. This provides an additional level of security and restricts access to repositories, including prohibiting unregistered users from publicly uploading images. So let's go to the website and follow the documentation instructions. I'm going to create a token: go to account settings, select Security, scroll down to the Access Tokens section, click the New Access Token button, provide a name for the token, and select the necessary access rights. Click Generate, copy, and close. Next, let's log in through the CLI client using this token instead of the password. Great.
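Before we push anything, here are hedged sketches of the two Dockerfiles from lessons 3 and 4, reconstructed from the narration; the file names and tags follow it, and the exact contents are assumptions:

    # Lesson 3 - Dockerfile for the Flask app
    FROM python:3.9                   # official Python 3.9 base image
    RUN pip install flask gunicorn    # Flask framework + Gunicorn production server
    WORKDIR /app                      # later instructions run inside /app
    COPY . /app                       # copy the project files into the container
    EXPOSE 5000                       # the container listens on port 5000
    CMD ["gunicorn", "-b", "0.0.0.0:5000", "app:app"]

    # Lesson 4 - docker-js/hello.js (a hypothetical one-liner) and its Dockerfile
    #   console.log("Hello, World!");
    FROM node:14                      # Node.js 14 base image, pulled if absent
    WORKDIR /app                      # working directory inside the container
    COPY hello.js .                   # copy the script from the host
    CMD ["node", "hello.js"]          # run the script when the container starts

    # Build and run (the tag names are assumptions):
    #   docker build -t myapp .        # in the Flask project directory
    #   docker build -t myjsapp .      # in the docker-js directory
    #   docker run myjsapp
    #   docker run -p 5001:5000 myapp  # host port 5000 was taken by AirPlay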
Now we can push our images to the repository. Before that, let's tag the image: we specify the repository namespace, the name of the new repository, which will be the name of our image on the Docker Hub server, and the version. If we check, we can see that a new tag has appeared; now let's push it. This may take some time. And voilà, we can see our Docker image on the Docker Hub server. Here it is. Now let's delete the remaining image on our computer, and for that, there shouldn't be any running containers based on it. With the command rmi, short for remove image, we remove the image that we just pushed to the Docker Hub server. The first time, Docker removed the tag associated with the image named myapp, but the underlying image data was not completely removed, so I repeat the command. Here we can see that the image was deleted completely. So if we want to use this image and start a new container, we should pull the image from the Docker Hub server to our laptop or computer. For now, let's imagine I don't have any virtual environment, and I don't have Flask or Python or any other libraries for this app installed, but I want to run it, and I want to start it with one command. I copy this command, and I pull the image from the Docker Hub server to my local computer. If I check all the images on my computer, I can see the one that we just pulled from the Docker Hub server. That's all we need to run a new container based on this image with one command. We use the command docker run, then we map the ports as we did before, then we put the image ID, and here it is: we can see our app in the browser. We have used self-made images; now let's try working with ready-made images. For example, say I have a project that requires MySQL. The docker search mysql command is used to search for Docker images related to MySQL. It searches the Docker Hub registry and displays a list of available images that contain MySQL or related components. This allows you to find different MySQL image options that can be used to deploy MySQL databases in Docker containers. docker search allows you to search for images by keywords or names and browse the search results to find the image you need. This command can be useful when you are looking for a specific image to use in your project or to deploy specific software. To narrow down the number of results, you can use filters. For example, I will filter the search results to display only official images. Official Docker images are those that are usually maintained or created by the Docker team or the official software developers; they tend to be more reliable and secure because they are supported by well-known sources. So, as a result of the search, we got only the official Docker images related to MySQL. But that's not all. As an example with Mongo, I will search for Docker images related to MongoDB and apply a filter on the number of stars: we will specify that the selected images must have at least 20 stars. Stars on Docker Hub are used to indicate the popularity of images; the more stars, the more popular. We can also add another filter: in this case, we filter only official images related to MongoDB that have 50 or more stars. After this, we can use the docker pull command and specify the name of this image, or go to the Docker Hub website and simply copy the command. The docker images command will show us all the downloaded images; at the moment, it's only Mongo. I will start a container from this image. So we type docker run in the command line.
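For reference, a hedged sketch of the command sequence this lesson walks through; the Docker Hub username (olha) and the tag (1.0) are placeholders, and the final run command's flags are explained right below:

    docker tag myapp olha/myapp:1.0          # tag the local image for Docker Hub (username/tag assumed)
    docker login                             # authenticate with the personal access token
    docker push olha/myapp:1.0               # upload the image to the registry
    docker rmi olha/myapp:1.0                # remove the local copy (may need repeating, as noted)
    docker pull olha/myapp:1.0               # download it again from Docker Hub
    docker run -p 5001:5000 olha/myapp:1.0   # start the app with one command

    docker search mysql --filter is-official=true                     # only official MySQL images
    docker search mongo --filter is-official=true --filter stars=50   # official images with 50+ stars

    docker run --name mongodb -d -p 27019:27017 <image-id>            # the MongoDB run command explained next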
docker run starts the container; then --name lets you specify a name for your container, which you choose yourself. Next, the -d parameter, short for detach, means you are running the container in the background: the container will not constantly occupy your terminal, and you will not see what is happening there. Then -p specifies the ports. This will allow us to access the application inside the container, that is, our database, using port 27019 on the host system; this traffic will be forwarded to port 27017 inside the container, where our application, I mean the database, actually runs. And finally, we add the image identifier: you can use the ID or the name, it's up to you, and paste it at the end of the command. That's it, our container has started. The docker ps command will show us the containers that are currently running; in our case, the container we just started. After we have started the container, we install the MongoDB shell, mongosh. mongosh is an interactive command-line interface for MongoDB; we need it to interact with the database. The main goal of the MongoDB shell is to provide developers and MongoDB database administrators with a simple and powerful interface for interacting with databases. With it, you can quickly execute queries against the database, view and update data, or create and execute aggregation queries. So now I'm connecting to our database from the command-line terminal. Using the show dbs command, I look at all the databases we have, and we have nothing, because we haven't created anything yet. With the use command followed by a name, I'm doing two things at once: I create the first database, switch to it, and start using it. I usually work in PyCharm, and it's very convenient to add the database directly into the development environment and see what is happening there. So let's do that now. We go to add a new data source, choose MongoDB, specify the host, specify the port, and write the name of our database. Then we perform a test connection and see that we have connected. In the settings, we set it to show all the databases we have or will have. For a test, I want to write something to our newly created database through the terminal. Let's write something simple. Then we refresh our database in PyCharm and see that, yes, the data has been recorded. Here it is. However, there is a nuance. We execute the docker ps command and see our running MongoDB container. We stop it with the docker stop command. Now docker ps shows us that there are no running containers. The docker ps -a command will show us all the containers we have. So we have a container; it's just not running right now, and here we see our MongoDB container. With the docker rm command, followed by the container ID, we delete the container that we created. Let me remind you that we can only delete stopped containers; running ones won't be deleted this way. Now docker ps -a shows us that there are no containers, and now if we go to our database, we will not see any records. Even more: if we start a container from this image again, all the data we saved in our database will be gone, and now it's time to get acquainted with Docker volumes.

6. Understanding Docker Volumes: Types and Use Cases: Welcome back, guys. Docker volumes are a feature that addresses the challenge of data persistence in containers. They are like dedicated storage areas that can be attached to one or more containers. Think of them as specialized folders that reside outside the container on the host system.
Volumes serve several crucial purposes. One of them is data persistence. Containers are ephemeral, meaning they can be created and destroyed easily. Without volumes, any data generated or modified inside a container would disappear when the container shuts down. Volumes enable you to store data outside the container, making it persistent even when the container is gone. They also help with data sharing: volumes can be shared among multiple containers, which is incredibly useful in scenarios where you need multiple containers to access and work with the same data. We can use volumes for backup and restore: Docker volumes simplify the process of backing up container data. You can create snapshots of volumes, ensuring that your valuable data is safe and can be easily restored in case of accidents or disasters. And finally, database management: volumes are frequently used with database containers, as databases require persistent storage. With volumes, you can stop, remove, or even replace a database container while keeping your data intact. So let's get to the practice. docker volume list returns a list of volume names currently existing on the Docker host. You can also use the shortened version of this command: docker volume ls shows the same. For now, we don't have any volumes. The docker ps command is used to list the currently running Docker containers on your system. We also don't have any running containers at the moment. But the command docker ps -a shows us all containers, even those that were stopped. We have one container from the previous lesson, but today we are not going to use it. The docker images command is used to list the Docker images that are currently stored on your local system. I remind you that Docker images are essentially templates or blueprints for creating Docker containers. For this lesson, I will use the MongoDB image. We will start several containers from it and see how Docker volumes work. The docker run command is used to start a new Docker container from a specified Docker image. Then we set the --name parameter: this part of the command specifies the name you want to give to the container. Then -d: this flag stands for detached mode. It tells Docker to run the container in the background, allowing you to continue using your terminal for other tasks without being attached to the container's console. Then -p: this option is used for port mapping. It tells Docker to map port 27019 on the host machine to port 27017 inside the container. Port 27017 is commonly used for MongoDB, and this mapping allows you to access the MongoDB service running inside the container from your host machine. And finally, the ID of the Mongo image: this image will be used to create and run the container. So when you execute this command, Docker will start a new container based on the specified image. Now the docker ps command shows our running container. Using mongosh, we connect to MongoDB from the terminal and check the existing databases. What mongosh is, you can find out in the video linked above; in short, it is the MongoDB shell. With the use command followed by a database name, we create a new database and immediately switch to it. We create a simple collection so that the database is not empty. We use the command show collections to see our newly created collection, and the command show dbs shows our newly created database. Now we have created a database with a collection. Next, we use the docker stop command to stop our running container with the created database.
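A sketch of the terminal session just described; the database and collection names are illustrative:

    docker run --name mongodb -d -p 27019:27017 mongo
    mongosh --port 27019          # connect to the container through the mapped port
    # inside the shell:
    #   show dbs
    #   use mydb                  -- creates the database and switches to it
    #   db.createCollection("users")
    #   show collections
    #   show dbs
    docker stop mongodb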
Now, when we run the docker ps command, it shows that there are no running containers. The docker ps -a command displays all containers, including the stopped ones, where we can see our container. Our created container exists, but we cannot connect to it. To do that, we use the docker start command to start it again and then connect to the database. Now the docker ps command shows the running container; I remind you, we stopped it and then started it again. Now I connect to MongoDB again, and here we are: we can see the database that we created, and nothing was lost. But what if we delete our container? With the command docker stop, we stop our container, and then remove it with the command docker rm. Now the commands docker ps and docker ps -a show us that our container is gone. With the same command, with the same parameters, we start a container from the same image we used before. We connect to Mongo, and yes, we lost the database. That's why we should use volumes. Docker provides various types of volumes for storing and managing container data. Host volumes allow specific files or directories from the host system to be mounted directly into a container. They enable containers to access files at the host level, and any changes made to these files are immediately reflected in the container, and vice versa. This helps when you need a container to access specific files or directories on your host system; they are often used for transferring configuration files or data between the host system and containers. Anonymous volumes are temporary volumes created automatically by Docker to store container data. They are intended as temporary storage for data with no permanent purpose and may be discarded when the container is removed. They are typically used for temporary storage of bulk data such as logs or temporary files. Named volumes have a specific name and are designed for long-term data retention. They are not tied to any specific container and can be used across different containers. We use them when we need to store data for the long term and provide access to it for multiple containers. In the next lecture, we consider each of them.

7. Exploring Docker Volumes: Anonymous, Named, and Host Volumes in Detail: Welcome back, guys. So let's create our first named volume. With the command docker volume create, we create a volume for Mongo. We name it mongo-data, and now, if we type the command docker volume ls, we can see our named volume and several anonymous volumes. Those were left over from the previous containers: after deleting a MongoDB container that was launched without explicitly specified volumes, the anonymous volume created for the MongoDB data may still remain. Docker retains anonymous volumes after container deletion to preserve temporary data. This is done to provide access to this data in case it's important or valuable even after the container has stopped; it can be useful for debugging or analyzing logs. The command docker volume ls shows all the volumes we already have. If you want to delete these anonymous volumes, you can use the command docker volume rm and name the volume you want to delete. Of course, this volume should not be attached to a currently running container. So I stop and remove the previous container that we launched: the command docker rm with the container ID will remove it. As we can see, our container is not present in either the running or the stopped containers.
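Before we continue the cleanup, here is how the three volume types from the previous lesson map onto docker run flags; paths and names are illustrative:

    docker run -d mongo                               # anonymous volume: Docker creates one for /data/db automatically
    docker run -d -v mongo-data:/data/db mongo        # named volume: survives container removal
    docker run -d -v /home/me/db-data:/data/db mongo  # host volume: a specific host path you choose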
So now I'm going to delete our two anonymous volumes. And I've done it successfully. So now, if we take a look at our list of volumes, we will see only the named volume that we created earlier. I repeat my docker command to start a new Docker container, except I add the --rm flag. It is used to automatically remove a container when it stops running. When you run a Docker container without the --rm flag, the container continues to exist in a stopped state after it completes its task or you manually stop it, which means you have to manually clean up stopped containers using the docker rm command. But with the --rm flag, I'll show you what happens. You can see that two anonymous volumes were created. Now I stop this container, and look at this: our container was immediately removed. It's not present in the running or stopped containers; docker ps -a shows us only the stopped container from the previous lesson. The anonymous volumes were also automatically removed. If you want to delete all unused volumes at once, you can use the command docker volume prune. This command is used to remove all unused volumes from your Docker environment, but I won't be using it now. Right now, I have the volume mongo-data, and if I want to retrieve detailed information about this Docker volume, I use the command docker volume inspect. When you run this command, you specify the name or ID of the volume that you want to inspect, and Docker provides JSON-formatted output containing various details about that specific volume. Now, the most interesting part: let's launch a container with our named volume. I will list our images to get the ID for starting the container. Then we execute the command. I won't repeat the explanation for this command, but I add a parameter that wasn't there before: -v (volume), then the name of the volume that we created, and then the path where it will be mounted. In our container, there is a /data folder, inside which there is a db folder, to which we are going to mount. To enter a running container, execute the command docker exec. The -i parameter stands for interactive: this option allows you to interact with the command being executed in the container; it connects your terminal to the container's terminal, enabling you to provide input and see the command's output in real time. The -t parameter ensures that the command executed inside the container behaves as if it's running in a real terminal. Then come the name or ID of the running Docker container you want to access and bash, a commonly used Unix shell (for Windows images it would be cmd). Here we are: in the /data folder, we cd into the db folder. We connect to MongoDB using the MongoDB shell, check the databases, create our database, and immediately start using it. We write some data into it and create a collection, a very simple one, for demonstration purposes. For those who aren't familiar with non-relational databases, let me briefly explain: in a typical relational database, we have tables, while in non-relational databases like MongoDB, we have collections. If we now enter the container and navigate to our db folder, we will see the information that has appeared there. I'll remind you that we launched this container using a named volume. Now I'll stop this container and remove it. Remember that the volume will not disappear; it will remain intact. Then, based on the same image, I will create a new container and connect the same named volume that we still have.
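A consolidated sketch of the named-volume workflow described in this lesson; the container and volume names follow the narration, and the image ID is a placeholder:

    docker volume create mongo-data     # create the named volume
    docker volume ls                    # list volumes
    docker volume inspect mongo-data    # JSON-formatted details about the volume
    docker run --name mongodb -d -p 27019:27017 -v mongo-data:/data/db <image-id>
    docker exec -it mongodb bash        # interactive shell inside the running container
    docker run --rm -d <image-id>       # --rm: remove the container automatically when it stops
    docker volume prune                 # remove all unused volumes (use with care)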
The docker ps command will display our newly launched container. Using the Mongo shell, I reconnect to MongoDB, and you can see that where we deleted the container with an anonymous volume, all the information, the database and the collection, disappeared, but in the case of a named volume, everything remained, and we can see our database and the created collection. Of course, we should switch to this database before checking the collection. Here we are: the customer collection. While I was experimenting behind the scenes, I created some anonymous volumes, and now I can demonstrate the docker volume prune command in action. As you can see, after executing this command, all unused volumes not attached to containers are deleted, leaving only those that we are actively using. We have discussed both anonymous and named volumes. When it comes to host volumes, the command is almost the same as for named volumes, except that we specify the full path on the host system to the file or directory we intend to mount into our container. Host volumes allow specific files or directories from the host system to be mounted directly into a container. In the next session, we will see how we can mount a specific folder on the host system directly into a container.

8. Understanding the Difference Between --mount and -v for Volume Mounting in Docker: Welcome back, guys. Now I want to explore the second method of mounting specific files or folders from the host system directly into a container. As you can see, you can mount not only using the volume flag or its shorthand -v; there is also another flag called --mount. Let's see how it works in practice. First, I created a folder called test-db on my host, and it's completely empty. The docker ps command shows our previously running container, which we'll leave running. And now let's start a new container. The initial part of the command is the same, except that instead of -v, we specify the --mount flag. Of course, we need to give the container a slightly different name. Next, we need to specify the type; for this example, we'll use bind. A bind mount links a directory or file on the host system to a directory or file inside the container. This type of mount directly accesses the file system of the host. Then we specify the source from which we want to mount; this can be source, or src for short. Notice that I'm currently in the project directory where the folder I want to mount is located, so I specify $(pwd) to represent the path to the current directory. Then I specify the destination, which can be destination, dst, or target; it's up to you. Now, for this example, note that port 27019 is already occupied on the host machine by the previous containers, so I'll use port 27018. Oops, I made a typo there at first. The command now shows us both of our working containers. If we take a look at our test-db folder, we will see a lot of information in it, and it looks almost identical to when we were using volumes. But there is a difference. When we mount using host volumes, Docker will create the folder for you if it doesn't already exist; it creates it with the same name you specify and at the location you specify. However, with --mount, if you try to mount a folder that doesn't exist, you'll get an error. Let me demonstrate this by creating a third container using the same flag as before, but changing the port to 27020 and specifying test-db-3, a folder that doesn't exist. As you can see, we get an error.
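The two commands being compared look roughly like this; the ports and folder names follow the narration, and the image ID is a placeholder:

    # --mount with type=bind: errors out if the host folder doesn't exist
    docker run --name mongodb2 -d -p 27018:27017 \
      --mount type=bind,source="$(pwd)"/test-db,target=/data/db <image-id>

    # -v with a host path: Docker creates a missing folder for you
    docker run --name mongodb3 -d -p 27020:27017 \
      -v "$(pwd)"/test-db-3:/data/db <image-id>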
Docker did the work for us when using host volumes by creating the folder if it didn't exist. But with --mount, if you are mounting a non-existent folder, you need to create it yourself beforehand. I repeat the same situation with a non-existent folder, but instead of --mount, I use -v, and we don't get an error. That's the difference. I hope this clarifies the concept: Docker did all the work for us, and the folder test-db-3 was created without any error.

9. Introduction to Docker Compose: Creating Your First Compose File for Flask and MongoDB: Welcome back, guys. So at this point, we have the structure, but you should understand that in large projects, nobody runs individual containers separately. For example, if we have a back end, a front end, and a database, all packaged separately with each service having its own container, we need to think about how to run them together and how they should interact. Now we will get acquainted with Docker Compose. Docker Compose is a tool used to define and run multi-container Docker applications. It allows you to describe all the services and containers that make up your application in a single configuration file, usually named docker-compose.yaml or simply compose.yaml. It's commonly used to organize and deploy services consisting of multiple Docker containers. Regarding installation, Docker Compose typically comes with the official Docker installer on most platforms. However, if you are installing Docker on Linux, Docker Compose may require separate installation. Some Linux distributions such as Ubuntu might not include Docker Compose in their packages, so you may need to install it separately. You can use the official documentation or the instructions provided by the Docker developers for installation; it's not too complicated, and I'm sure you'll manage it. In our project, I will create a compose.yaml file and start building it from scratch. I specify the version of the Docker Compose file format; in this example, I will specify version 3. Each version has its own features and syntax. Usually, you choose the latest stable version of Docker Compose that's supported by your system and meets your project's needs. The version you use depends on which Docker Compose version is installed on your system. You can check this with the command docker compose version; it will show the version installed on your system. Mac users can find all this information in Docker Desktop. Here I can see all the versions, and also the updates we have available. By the way, I see that I need to update, so let's do that. After downloading the update, we install it and restart Docker. After the restart, the window opens automatically, and we can see the new versions. So, back to writing the compose file. I specify the services. When I say services, I mean the containers that will be run, and the first one will be the web service for the Flask application. The build instruction tells Docker Compose to build the image for the Flask application from the Dockerfile in the current directory. Next, I specify the ports, similar to how we ran the container with the Flask application earlier: we map the internal port to the external one. Then we specify an environment variable that the Flask application can use to connect to MongoDB. Next, depends_on indicates that the web service depends on the mongo service.
So MongoDB will be started before the Flask application, because if your front end or back end that urgently needs the database starts first, it could lead to unexpected results; it won't be able to work properly. We should be careful and use proper indentation. Then we specify the service for MongoDB and the MongoDB image, which we would normally download from Docker Hub, but we already have it downloaded on our computer. Next, we specify volumes, which mounts the named volume mongo-data to the /data/db directory in the container, where MongoDB stores its data. And then, finally, we specify the top-level volumes section, where we define the named volumes used by the services. mongo-data is a named volume for the MongoDB data; we covered this a bit earlier with named volumes. So mongo-data is the volume, and /data/db is the directory in the container; above, in the mongo service, we connected these two. To use this compose.yaml file, we need the Dockerfile for the Flask application in the same directory. In our case, the Dockerfile should be in the same directory as the compose.yaml, because earlier we specified a dot in the build instruction for the web service image; the dot symbolizes the current directory. The command docker compose up -d will start our services; -d means detached, so it will be launched in the background. We received a message that port 5000 is occupied. I will change the port to 5002 and restart. The command docker compose ps will show us that we have started successfully. If we go to port 5002, we will see our Flask project. I will make a small change to the file: I will add a port mapping here, because I forgot to expose our Mongo database so that we can test whether the database works directly from the terminal without entering the container. After this, I stop the containers with the command docker compose down and delete the image that was built from the old file. The command docker compose ps will show the status of all the containers; we see that nothing is currently running, no containers are started. I list all images to see the name or image ID for removal; I choose the image ID, and with the command docker rmi, I remove it. We could also have skipped deleting the old image and simply started our newly edited compose.yaml with the same command, docker compose up -d, but with --build: this would have rebuilt and started our compose project with the new data we added to the compose.yaml file. The command docker compose ps now shows our two services: we see Mongo, we see our Flask application, and the ports they are mapped to. We check the connection to Mongo from the terminal, connecting to the port we opened, and we see that everything works and we can connect. We also check our Flask application again on port 5002; everything has started and is working. At the beginning of building the Docker Compose file, we specified the version in the first line. But starting from version 1.27, it's no longer necessary to explicitly specify the version in the Docker Compose YAML file if you are using the latest syntax. If you are using a recent version of Docker Compose and want to use the latest Compose file syntax, you can completely omit the version line; Docker Compose will automatically use the latest syntax version if another version is not explicitly specified. If you still want to use the old syntax, you need to explicitly specify its version in the version line.
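Putting the pieces together, the finished compose file could look roughly like this, written without a version line per the modern syntax just discussed; the environment variable name MONGO_URI is an assumption, since the transcript doesn't name it:

    services:
      web:
        build: .                    # build the Flask image from the Dockerfile in this directory
        ports:
          - "5002:5000"             # host port 5000 was occupied, so 5002 outside
        environment:
          - MONGO_URI=mongodb://mongo:27017/mydb   # variable name assumed
        depends_on:
          - mongo                   # start MongoDB before the web service
      mongo:
        image: mongo
        ports:
          - "27017:27017"           # added later to test the database from the terminal
        volumes:
          - mongo-data:/data/db     # persist database files in the named volume
    volumes:
      mongo-data:

    # Typical commands: docker compose up -d --build, docker compose ps, docker compose down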
If you are using a current version of Docker Compose, you can safely omit version in your Docker Compose file. As we can see, everything started and is working.

10. Introduction to Dockge: Easy Docker Compose Management and Understanding Image Layers: Welcome back, guys. And now I want to introduce you to a tool that will make your life easier when you start working with Docker Compose. It's Dockge, a tool for managing Docker Compose files. This is a simple and convenient tool that simplifies working with Docker Compose files, making it convenient and visual, and it offers a range of features that make it valuable for developers and DevOps. You can learn more on its GitHub page. Let's move on to practice. We create a directory where we will place the compose.yaml file from which our Dockge service will start. By default, it starts on port 5001. We need to specify the path to our stacks directory. We can specify this on the Dockge website, and there, in interactive mode, it adjusts the compose.yaml file for us. Here we can see that the path to our directory has been automatically changed in the compose file. After that, using the command we can copy from there, we take this compose.yaml file and start it in the terminal. The command docker compose ps will show us our service on port 5001, which we can use. So let's go and take a look. We see a window where we can register our admin user, so we log in. We will see something similar to another Docker tool: if anyone has worked with Portainer, this will remind you of it; it's really quite similar to Portainer. Inside, we see our project with two services running, web and MongoDB; we see them under the name docker-lesson. We also see our test Dockge stack, which we launched using the compose file. Notice we can't edit them. The test Dockge stack is the service in which we are currently located, but to edit our docker-lesson stack with our services, we need to move that folder into the same directory where Dockge's stacks are located. Here's what we do: we move the docker-lesson directory into the Dockge stacks directory, and now, if we open our service, we see a completely different picture. Now we can edit, delete, stop, and view in the terminal, in real time, what's happening in the containers. Very convenient. I would warn you: better not to press delete. This will delete your project directory along with all the components inside, so be careful. Another feature of this tool is that you can rewrite any docker run command into a compose file: you simply copy and paste a regular docker run command there, and it converts it into a format that you can use in a Docker Compose file. If you have watched up to this point, you might have a question. When we started the Mongo image, it launched a container with the Mongo database. When we started the Flask application image, it launched a Flask application container. Now, in the images list, we see our docker-lesson image, one image, yet I remind you that when it starts, we see two containers. Now it's time to understand what layers are in a Docker image. In a Docker image, layers represent the sequential steps and changes that were made during the build process of that image. Each step in building the image creates a new layer, which can include configuration files, dependencies, and other components needed for the application to run. To view the layers of a Docker image, you can use the docker history command. We specify the command docker history, followed by the image name. Instead of a name, you can use the ID.
This command will display a list of the layers that make up the Docker image. In our case, for the image we created using the compose file, we can see the size of each layer, the commands executed during its creation, and other information. Thus, a Docker image is built from different layers representing the steps and changes during its build process. In our case, the Dockerfile for the web service will have its own layers, and the MongoDB image already has its own layers. If we take the project we built using the compose file, it is a combination of these two services, web and Mongo.

11. Advanced Practice: Archiving Docker Images and Restoring Them Across Machines: Welcome back, guys. We have already seen how to upload the images you created to Docker Hub and share them with everyone, or download the necessary images from Docker Hub. Now let's look at a slightly different way to share an image you created with other developers. There is a command, docker save. The docker save command is used to save one or more Docker images to an archive file, which can then be transferred to another computer or saved for future use. The main use cases for the docker save command include backing up images, transferring images over a network, and working with custom images. If you have created your own Docker image and want to save it for later use or distribution, you can use this command. To save your image, run the docker save command, specify the image name and its tag, and then specify the file name under which you want to save the image. Saving the image takes some time. If we now check our directory, we can see the image archive we just saved. For demonstration purposes, I will delete the image we currently have; I want to delete the image from which we created this archive. We got an error because, of course, we can't delete an image while containers based on it are running. Therefore, we will now stop our containers and delete them. Now we can delete the image. If we check the images now, we don't see our image on the computer; we have deleted it. Now I will load our image again from the archive we saved before. Loading the file is done using the docker load command, followed by the archive from which we are going to load it. This takes some time, and here we see that our image has reappeared. We can now do everything we did before: we can start the containers and work as we did previously. This archived image can be transferred to any computer and shared with everyone. You can also use this to export Docker images from the current environment and save them as archive files for backup or for transferring to other systems. You can find many examples of writing Docker Compose files on the official Docker website. In their documentation, you will find various examples that cover different Docker Compose scenarios, including simple examples demonstrating the basic capabilities of Docker Compose, examples with multi-container applications showing how to create and manage applications with multiple containers, and examples of integration with other tools such as databases, web servers, caching systems, and more. It's also worth noting that there are numerous repositories on GitHub with examples of Docker Compose files for various projects. You can find them using GitHub search and study the code to understand how best to use Docker Compose for your needs.
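The archiving commands described above, sketched with an assumed image name and tag, plus the docker history command from the previous lesson:

    docker save -o myapp.tar myapp:1.0   # write the image (with its layers) to an archive
    docker rmi myapp:1.0                 # works only after containers using it are stopped and removed
    docker load -i myapp.tar             # restore the image from the archive
    docker history myapp:1.0             # list the layers of the restored image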
Well, in this video, we have reviewed the most important Docker commands, gotten acquainted with Docker Compose, launched several services simultaneously from one compose file, and gotten to know the container management tool Dockge. If you still have questions or want to deepen your knowledge, the documentation can always help. I hope today's lesson did not seem too boring or long, and that it will be useful in your future work. That's all for today. See ya!

12. Bonus: Introduction to Pyenv and Virtualenv: Managing Python Versions and Environments: Very often, in reality, you will have to work with multiple versions of Python. This is because each project has its own technology stack and package versions. To avoid creating a mess on your work computer and dealing with conflicts between different versions, it's ideal to use a virtual environment. It's not urgently needed right now, but I suggest you understand how it works; it will help you a lot. You can skip this part; it will not affect your learning of the Python basics. It will be more necessary when you start working on a project. And now let's get started. Guys, if you want to manage multiple Python versions on your machine, there is a tool called pyenv. It lets you easily switch between multiple versions of Python and change the global Python version on your machine. Let's start with macOS, and then I will show you how it works on Ubuntu. The first step you need to take before installing anything new is an update, and, just in case, an upgrade to upgrade all packages. The first command, brew update, updates the local repository metadata; the second command, brew upgrade, upgrades all the installed packages on your system to the latest available versions. It's common practice to first run brew update to get the latest metadata and then run brew upgrade to update the installed packages; this ensures that the system has the latest software installed. We go to GitHub and follow the instructions. Then we use brew install to install pyenv: just copy this command and execute it. Let's return to the documentation and see what we need next. Scrolling down, and here we are: I use zsh, the Z shell. It's a command-line shell that serves as an alternative to the more commonly known Bash shell. So I copy all this code and paste it into the .zshrc file. So we have installed pyenv, and now I want to talk with you about virtual environments. A virtual environment solves a real problem: you don't have to worry about packages you install polluting the main system location. pyenv-virtualenv is a plugin for the pyenv tool that allows users to create and manage virtual environments for Python projects. With it, you can isolate project dependencies more efficiently. So again, follow the instructions and install this plugin. After installation, we copy this command and add it to the .zshrc file; in this case, we do it manually. Open the .zshrc file: it's a hidden file, typically located in the user's home directory. I use the simple, user-friendly text editor nano; you can use Vim. Here we can see the three rows of code that were added when pyenv was installed, and we paste this command here. I write a comment for better understanding in the future. Again, in the nano text editor, I use the commands Ctrl+O and Ctrl+X, which let me write the file and exit the editor; you can use your own text editor and its commands. Then we restart the shell with this command, and we can use these tools. So let's check it.
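The macOS setup steps, sketched from the narration; the exact shell-init lines change over time, so treat these as an approximation of what the pyenv README suggests and check it for the current form:

    brew update                          # refresh Homebrew's package metadata
    brew upgrade                         # upgrade all installed packages
    brew install pyenv pyenv-virtualenv  # install pyenv and its virtualenv plugin

    # Lines added to ~/.zshrc (approximate):
    export PYENV_ROOT="$HOME/.pyenv"
    [[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
    eval "$(pyenv init -)"
    eval "$(pyenv virtualenv-init -)"    # enables the pyenv-virtualenv plugin

    exec "$SHELL"                        # restart the shell so the changes take effect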
Here we can see a short reference with commands for pyenv and pyenv-virtualenv. With the first command, pyenv version, we check the currently active Python version, along with information on how it was set. For now, I don't have any. If I want to list all Python versions known to pyenv, I use the command pyenv versions, and for now I haven't installed any Python version with pyenv. If I want to see the list of Python versions available for installation, I can use the command pyenv install --list.

So let's try to install Python with pyenv. For this, we use the command pyenv install, and then I specify the version of Python. I will install another version of Python as well, to demonstrate how you can work in isolated virtual environments with different Python versions; at the same time, you will learn how to install and remove Python using pyenv. If I now check the versions, we will see several Python versions. The asterisk indicates that right now I'm in the global system environment, but I have two Python versions that I can use for creating new virtual environments for other projects. On this operating system I have, globally, Python version 3.10.8. I say globally because every project can have its own Python version.

And now, with this command, I will create the first virtual environment for a test project. I use the command pyenv virtualenv, then I choose the version of Python, and then I name my virtual environment; call it whatever you like, it's up to you. I will call it "v" plus the version of Python. And now, with the command pyenv virtualenvs, I can list all existing virtual environments. To activate my newly created virtual environment, I use the command pyenv activate and then the name of my virtual environment, v3.9.0. I can immediately see that I'm in it. If I check the Python version here, it's 3.9.0, unlike the global version that we checked earlier. If you have several virtual environments and want to switch between them, you can execute pyenv activate with the name of the other virtual environment, even if you are currently inside an active one.

Now, if we install something here, it remains isolated from the global environment: any packages or dependencies installed inside my virtual environment will not affect the system-wide Python installation or any other virtual environments we create. So let's install something here, let it be Jupyter. I go to the documentation and follow the instructions. Jupyter is a tool for code execution; I chose it as an example, but it can be any package or library you want. Now, with the command pip freeze, I can see all the packages that were installed in my virtual environment. pip is the package manager for Python.

Now let's imagine that we don't need this virtual environment anymore. How can we delete it? First, if it's active, we deactivate it with the command pyenv deactivate. Then we use the command pyenv virtualenv-delete and the name of the virtual environment that we want to delete. So when I check pyenv virtualenvs, we don't see our virtual environment anymore. It was deleted along with all the packages and libraries we installed there: a very useful thing. But it doesn't mean that the Python version we used in this virtual environment was also deleted; if we check the Python versions, we still see several. I added one more earlier, just to show you how we can uninstall Python versions with pyenv. The command is pyenv uninstall, then the version of Python, and voilà, we have uninstalled Python version 3.9.8. With this tool, it's very simple to manage different Python versions. (The whole session is summarized in the sketches below.)
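A sketch of the version-management commands from this part; the version numbers are just the ones used on screen and may differ on your machine.

    pyenv version          # currently active version and how it was set
    pyenv versions         # all versions known to pyenv (asterisk = active)
    pyenv install --list   # all versions available for installation
    pyenv install 3.9.0    # install a specific interpreter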
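And a sketch of the virtual-environment lifecycle, assuming an environment named v3.9.0 built on Python 3.9.0, with Jupyter as the example package:

    pyenv virtualenv 3.9.0 v3.9.0   # create an environment from a given version
    pyenv virtualenvs               # list existing environments
    pyenv activate v3.9.0           # enter the environment
    python --version                # 3.9.0 here, unlike the global version
    pip install jupyter             # installed only inside this environment
    pip freeze                      # show what this environment contains

    pyenv deactivate                # leave the environment
    pyenv virtualenv-delete v3.9.0  # delete it, packages included
    pyenv uninstall 3.9.8           # remove an interpreter itself (example version)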
Now let's install it on Ubuntu. We do the same thing: go to the GitHub page and follow the instructions. Here I chose the automatic installer, and here I copy this command for the installation. Before installing, I run the command sudo apt update, which refreshes the system package metadata, and then sudo apt upgrade. So we update and upgrade all the installed packages on our system. Now we can install pyenv: first, run the command that we copied previously. Then return to the front page of the documentation and copy these three lines of code; we will write them into the .bashrc file, which is also a hidden file. It's very similar to what we did on macOS previously. If you couldn't install pyenv and you got an error, make sure that you have installed all the build dependencies for Python and that Git is present on your PC. After all of this, restart the shell with the command, and we can use this tool.

Here we use the same commands we used previously on macOS. Let's install Python 3.9.0, and here we can see our installed Python. Now let's create a virtual environment based on this Python version. We activate it the same way as before on macOS, with the command pyenv activate and then the name of the virtual environment. Probably you won't encounter this problem, but I had some inconvenient behavior on my system: right now, I don't see that I'm in a virtual environment. If I check, I can see that we have created it. So I had to add these few lines of code to my .bashrc file, and everything works. On Ubuntu, I also use the nano text editor; its commands let me save the file and exit .bashrc. Then execute the command source ~/.bashrc, which runs the .bashrc script in the current shell session. And voilà, we fixed it: right now we are in our virtual environment, and inside it we have the Python version that was used for its creation. So, guys, all the commands and all the next steps are the same as we did previously. I hope this knowledge will help you. See you in the next lesson! (A consolidated sketch of the Ubuntu steps follows below.)
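To close, here is a sketch of the Ubuntu variant under the assumptions above: the automatic installer from the pyenv docs and Bash as the shell. The .bashrc lines mirror the zsh ones from the macOS part.

    # Refresh and upgrade system packages first.
    sudo apt update
    sudo apt upgrade

    # Automatic installer from the pyenv documentation (needs curl and git).
    curl https://pyenv.run | bash

    # The same initialization lines as on macOS, but in ~/.bashrc.
    echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
    echo '[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(pyenv init -)"' >> ~/.bashrc
    echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.bashrc

    # Re-read the file in the current session.
    source ~/.bashrc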