Docker & Kubernetes: Introduction Into the World of DevOps | AMG Inc | Skillshare

Docker & Kubernetes: Introduction Into the World of DevOps

AMG Inc, Technologist

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

31 Lessons (1h 48m)
    • 1. Course Introduction

      3:43
    • 2. 001 Docker Section Overview

      1:17
    • 3. 002 Docker And Its Uses

      6:59
    • 4. 003 What Is DevOps

      2:15
    • 5. Lab 1 Installing Docker (Windows)

      3:44
    • 6. 004 The Docker Engine

      3:12
    • 7. 005 Docker Images

      2:25
    • 8. 006 Docker Containers

      2:38
    • 9. Lab 5 Containerizing A Docker App

      6:12
    • 10. Lab 6 Deploying A Docker App

      6:10
    • 11. Lab 7 Docker Volumes And Persistent Data

      6:27
    • 12. Lab 8 Docker Hub

      5:06
    • 13. 009 Kubernetes Introduction

      1:37
    • 14. 010 Kubernetes And Its Uses

      3:17
    • 15. Lab 9 Kubernetes Installation (Windows)

      1:10
    • 16. 011 The Kubernetes Engine

      3:51
    • 17. Lab 12 Introducing Pods

      3:52
    • 18. Lab 13 The Basics Of YAML

      1:57
    • 19. Lab 14 Deleting Resources

      1:52
    • 20. Lab 15 Organizing Pods Using Labels

      6:08
    • 21. Lab 16 Introduction To Namespaces

      4:01
    • 22. Lab 17 Introduction To ReplicationControllers

      3:37
    • 23. Lab 18 Introduction To ReplicaSets

      6:03
    • 24. Lab 19 Introduction To CronJobs

      4:17
    • 25. Lab 20 Introduction To Services

      3:11
    • 26. Lab 21 Introduction To Kubernetes Volumes

      3:14
    • 27. Lab 22 Managing Pod Computational Resources

      3:28
    • 28. 015 Software Development Principles

      1:48
    • 29. 016 Kubernetes And Docker Best Practices

      1:45
    • 30. Kubernetes Capstone Project

      1:32
    • 31. Docker Capstone Project

      1:31


55 Students

About This Class

Learn to utilize the power of DevOps in this Docker & Kubernetes Course

Both Docker and Kubernetes have become essential tools ever since cloud computing came into existence. Docker helps create the containers, and Kubernetes manages those containers at runtime. We cover the introductory part of each, with fundamental concepts illustrated through use-case examples. This course is designed for students at the initial stage of learning Cloud Computing and DevOps, and is best suited for those who want to start their career in this field.

This course focuses on what Docker and Kubernetes are and how they play a role in DevOps and Cloud Computing as a whole. It also includes practical hands-on lab exercises, which cover a major part of deploying and orchestrating applications.

We use a combination of Jupyter Notebooks, the command line/terminal interface, and programming to launch any application of your choice as a microservice architecture. The programming part mainly involves writing scripts called YAML files and Dockerfiles, and using command-line commands to execute the scripts and get the results we want. Even if you don't have any previous experience with any of these technologies, you will still be able to get 100% of the benefit from this course.

Meet Your Teacher


AMG Inc

Technologist


Our company's goal is to produce the best online courses in Data Science and Cloud Computing technologies. Below is our team of experienced instructors.

Instructor 1

Adnan Shaikh has been in the information technology industry for the past 18 years and has worked for Fortune 500 companies in North America. He has held roles ranging from Data Analyst, Systems Analyst, Data Modeler, and Data Manager to Data Architect with various technology companies.

He has worked with Oracle, SQL Server, DB2, MySQL, and many other relational and non-relational databases over the course of his career. He has written multiple data access applications using complex SQL queries and stored procedures for critical production systems.

He has a master's degree from Northwestern University of Chica... See full profile



Transcripts

1. Course Introduction: Welcome to this beginner level course on Docker and Kubernetes, an introduction into the world of DevOps, where our goal is to make the Docker and Kubernetes learning experience easy for everyone. Our focus throughout the course will be to: number one, familiarize you with the core concepts of Docker and Kubernetes; number two, use the Docker Desktop application to understand the practical concepts of Docker and Kubernetes; number three, conduct practical hands-on lab exercises that are designed in a way that covers all key concepts related to containerization and deployment; number four, test your newfound skills via two capstone projects at the end of this course that will help you revise all of your concepts in one place and in one go; and finally, quizzes that are designed to test students' learning of key concepts. Meet our team, who played a role in researching, designing, and creating this course. We have experience in practical implementation of different technologies and in teaching technical courses at a university level. Our team members specialize in areas related to information technology, software engineering, data science, and more. We will have a total of eight sections that are designed to help the student learn progressively. We will start from the basic introduction of the course and gradually move on to intermediate concepts covering Docker implementation, followed by the Kubernetes architecture and its best practices. By the end of these lessons, you will be able to deploy your own microservice applications using Docker and Kubernetes. Key concepts throughout the course are explained visually to enable our students to grasp the technical concepts quickly and efficiently. The lab portion covers the key concepts of Docker and Kubernetes. A few examples are creating a Docker Hub account, installing the Docker Desktop application on your PC, and uploading images to a remote repository on Docker Hub.
We will also look at the architecture of the Docker and Kubernetes engines and how they work under the hood. We will also be exploring writing configuration files for Docker and Kubernetes, and how you can use those configuration files to containerize, deploy, and manage applications. Then we will learn how to use the Docker and Kubernetes command line interfaces, which is where you will be doing most of your work. We will be exploring different resources related to Kubernetes, including ReplicaSets, volumes, CronJobs, and many more. Next, we will learn about some best practices relating to Docker, Kubernetes, and development in general. We will also be covering methods for high availability in case your application shuts down for whatever reason. Finally, we will cover the capstone projects for this course. The capstone projects will include assignments related to the deployment and management of containers, as well as creating different resources in Kubernetes for the application. We look forward to having you join our course, and we promise that this course will help you build your foundational skills in learning Docker and Kubernetes. This course will help you make your resume stand out and demand a competitive salary in the marketplace.

2. 001 Docker Section Overview: Hello everyone. Welcome to this Docker and Kubernetes course. I'm glad you decided to join, and I hope this course will help you achieve your goals. To get started, let us have a look at the syllabus we are going to be covering in this course. The Docker part of this course has been divided into four sections. The first section is the introduction, which will introduce you to Docker and its uses. We will also be explaining why Docker containers are a better alternative to VMs, or virtual machines, and how to install Docker on your computer, whether you use Windows, macOS, or Linux.
The second section introduces you to the Docker architecture, where we will be introducing you to the Docker Engine, which is how Docker runs under the hood, Docker images, and also Docker containers. Finally, the third section will introduce you to practically implementing Docker concepts like containerizing and deploying a Docker application, using Docker Hub, Docker security features, and much more. Have fun and best of luck learning.

3. 002 Docker And Its Uses: Hello everyone. Welcome to this video. Today, we're going to be learning what Docker is and what it can be used for. Let's start with understanding what Docker is, and then we'll move on to discussing what it is used for. Let's get started. Docker is an open source platform that can be used to build, deploy, and manage parts of applications in containers. Deploying each part of an application in its own container is called a microservice architecture. The good thing about Docker is that it is available on all major operating systems like Windows, macOS, and Linux. Before containers, people used to use virtual machines to test applications. Virtual machines are much more secure than containers, but take a few minutes to deploy. The reason they're more secure is because their resources are completely isolated from the rest. Virtual machines need to be assigned temporary resources that are used while they are running. These resources are called guest resources. They are not accessible to the host machine while the virtual machine is running, which is what makes them so secure. These resources are assigned using a hypervisor. Along with the guest resources, a guest OS, or guest operating system, also needs to be assigned so the virtual machine can communicate with the host computer. Examples of virtualization software include Oracle VM VirtualBox and VMware. Containers were created as a lighter solution to virtual machines. They can be deployed in seconds, but they aren't as secure as virtual machines.
This is because they don't use guest resources or a separate guest OS. Instead, all the resources they use are taken directly from the host OS, and only the individual application processes are isolated. Because containers don't need a guest OS or guest resources, they don't need a hypervisor either. When you compare virtual machines and containers side by side, they are made for completely different situations. Virtual machines are used if you want maximum security for your application and don't care much about deployment speeds. Containers are used if you want maximum speed and find process isolation enough. It all comes down to what your application's priorities are and what you're willing to sacrifice for something else. Although we compared virtual machines and containers quite thoroughly, we still haven't mentioned another advantage which containers have, called the microservice architecture. A monolithic architecture is an application that runs as a whole, meaning that all parts and processes of the application are bound together. If a part of the application goes down, the whole application goes down. If you want to update something, you have to bring down the whole application to do so. And finally, if you want to replicate your application for high availability, it will cost you double the amount of resources to do so. On the other hand, we have the microservice architecture. A microservice architecture keeps all the parts of the application separate, while they can still communicate with each other. If you want to update something, you can take that specific application part down, update it, and bring it back up without disturbing any other parts of the application. If you want to replicate a part of the application for high availability, you can easily do that while consuming as little resources as possible. Microservice architectures have many benefits, which is why a lot of companies prefer to use them instead of a monolithic architecture.
Nothing comes without flaws, however, which is why we'll explore some disadvantages of the microservice architecture as well. For now, let us explore three benefits that the microservice architecture has to offer. The first benefit: if a part of your application crashes, you can take that specific part down and repair it without disturbing any other part of the application. You can also put up backups of each application part, so that if a part goes down, you can just use a backup instead. Let's say my application's database went down, or I need to add a new feature. While the component is down, the backup database component can work in its place until the original is back online. The second benefit is that if you want to add a new feature or upgrade to a new technology stack, you can upgrade that individual part of the application and you don't need to change anything else. Let's say I wanted to change my website's front end from the MEAN stack to another stack. I don't need to change any other part of the application to do this, and I can let my front-end component's backup do all the work while the original is offline. Once the original is updated, I can bring it back online and update the backup with the new features that were added. This whole process didn't require me to touch the other components whatsoever. The last benefit we'll mention is the ability to scale any individual part of the application at appropriate times, as opposed to the whole application. Let's say I wanted to scale my database vertically because we're getting too many visitors on our website. I can easily allocate the extra resources required and don't even need to take it down. I can also decrease the number of resources once the traffic goes down. Now that I've discussed some advantages of the microservice architecture, let's discuss some disadvantages as well.
The first disadvantage is the actual communication between the different parts of the application. When you have so many different parts, it gets very hard to manage, orchestrate, and then put them together. The second disadvantage is the increased resource consumption when running a microservice architecture. Because there are so many different parts, the application uses way more resources than it normally would as a single-part application. A single-part application might take 16 gigabytes of RAM to run as a whole, but if it's in a microservice architecture, each part would take at least two gigabytes. If there are more than eight parts, it would use up more RAM than a single-part application. This applies to all parts of the application, like the CPU and databases, not just the RAM. The final disadvantage we'll mention is debugging. This might seem to contradict what I said before about being able to upgrade and take down parts of the application with ease, but in reality, debugging is actually much more difficult than it looks. The reason for this is because each individual process has its own logs to look through. The more parts, the more logs. Hence, it's much harder to pinpoint the actual issue. I hope this video helped you understand what Docker is and why it is used. Thank you for watching and I'll see you in the next video.

4. 003 What Is DevOps: Hello everyone. Welcome to this video. Today, we're going to be discussing what DevOps is and why it's better than keeping Dev and Ops separate. If you don't know what DevOps is, we will be discussing that in the video as well. So let's start with the definitions first. Dev, or development, refers to the development of an application. This could be a web app, backend application, or cloud service.
You can develop any application in many programming languages, such as Python, JavaScript, and C++. Ops, or operations, refers to the operation and management of an application, which includes deploying, orchestrating, networking, and any other things that are required to keep the application up and running. You can create and run operational scripts using scripting languages like Bash. You can also use networking software made for specific tasks. In this case, you can use Docker and Kubernetes for creating and managing containers. Traditionally, Dev and Ops are managed completely separately from each other. The people working on operations don't know anything about how the application itself works, and the people working on development don't know anything about the management and networking of the application. This isn't the most ideal way to do things, because it's very hard to debug, resolve issues, and improve the application overall if there are two separate teams doing their own thing that don't know much about each other's work. The solution: DevOps. DevOps is the practice of one team managing the development and operations of an application. This seems like a lot for one team to manage, which is why the operations side of things has been simplified tremendously using products like Docker and Kubernetes. This enables developers to focus on what's really important, which is developing, debugging, and improving applications. I hope this video helped you understand why DevOps is such an effective development practice in the IT world today, and how Docker and Kubernetes fit into the picture. Thank you for watching and I'll see you in the next video.

5. Lab 1 Installing Docker (Windows): Hello everyone. Welcome to this video. Today we are going to be installing Docker Desktop on our Windows machine. First of all, navigate to this link: docs.docker.com/docker-for-windows/install. Here you can see a button that says Download from Docker Hub. Click on it.
This will take you to a new page. Here, there's a button that says Get Docker. This will start downloading the Docker Desktop installer. Keep in mind that this file is about 500 megabytes, so make sure you have sufficient storage. I have already downloaded the installer, so I'll skip that step. Next, let's go to our downloads folder. Over here, you can see that there is a Docker Desktop installer in my directory. Now, I will double-click the EXE file. This opens up a configuration menu. This menu contains two options. The first option is mandatory, while the second option is up to you. The reason the first option is mandatory is because it installs a thing called WSL 2 on your computer. This enables you to run Linux commands and processes on your Windows machine. After you've selected your options, click on OK. Here you can see that the installation has begun. Once it's finished, you must restart your computer for Docker Desktop to work properly. As you can see here, the installation has now finished. Now, simply restart your computer. Here, you can see that I've just restarted my computer. In most cases, Docker Desktop runs automatically when the computer starts up. If it doesn't, just type Docker Desktop in the search bar and press Enter. You might be getting an error that says WSL 2 is not installed properly. It didn't happen in my case, but it might happen to you. If you get this error, then don't skip ahead. If you don't get this error, skip ahead to the Docker Desktop tutorial. To solve this issue, we need to navigate to this website: docs.microsoft.com/en-us/windows/wsl/install-win10. Over here, scroll down to step number 4. Click on this link over here: WSL 2 Linux kernel update package for x64 machines. This will download a file over here. Let me click on it. This is the WSL 2 installer. I already have it installed, so it installs automatically. Click on Finish to exit the setup wizard.
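Once the installer has finished and the computer has been restarted, a quick sanity check from a terminal can confirm everything is in place. This is a sketch, not from the course, and the exact version strings will differ on your machine:

```shell
# Check that the Docker CLI is on the PATH and print its version
docker --version

# Ask the daemon for its status; this fails if Docker Desktop is not running
docker info

# On Windows, confirm WSL 2 distributions are installed and active
wsl --list --verbose
```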
Now that we've installed WSL 2, let's restart our computer. If you run Docker Desktop now, you'll see that it is running perfectly. Now here, you usually have the option of taking a tutorial to see how Docker Desktop works. You can skip it if you like. Here, you can see that we are faced with a GUI. This GUI is where you can view your running, stopped, and paused containers. We'll learn more about this later. That's it for this video; you now have Docker Desktop working properly on your Windows machine. Thank you so much for watching and I'll see you in the next video.

6. 004 The Docker Engine: Hello everyone. Welcome to this video. Today, we're going to be discussing what the Docker Engine is and how it works under the hood. Let's get started. The Docker Engine is the core software that runs and manages containers. It is designed to be modular, with many swappable components. There are three main components which make up the Docker architecture. First of all, we have the client, which is us. Whenever we give a Docker command, we send that command to the Docker daemon, which is responsible for processing our request. This Docker daemon is contained inside the Docker host, which is the second component of our architecture. The Docker host does the main heavy lifting here, and is responsible for processing all of our requests for creating images and containers. It is also able to access what we call a registry, which brings us to our third component. The registry is where you can store all of your images in the cloud. The most commonly used registry is Docker Hub. We will learn how to use Docker Hub in a later video. For now, let's take three example commands and understand how each of them works in the Docker architecture, starting with the docker build command. The docker build command is responsible for creating an image.
When you give this command to the client with the appropriate arguments, it processes the command and creates your image using the Dockerfile you supplied it with. If your image has dependencies that aren't available on your local computer, it may optionally contact the registry to pull those dependencies as well. Let's have a look at the docker pull command next. The docker pull command is responsible for pulling images from a remote repository like Docker Hub. First of all, you give your command to the Docker daemon for processing. Then it has a look at the registry to find the image that you requested. In case you do not have permission to pull a specific image, this command won't work. Once it has found the image and confirmed you have the privileges to pull it, the image will be pulled onto your local computer. Finally, let's have a look at the docker run command. The docker run command is responsible for turning your image into a container. When you give the command to the Docker daemon, it has a look at your local images to find the image that is specified. If that image is not found, it will contact the repository to find the image there. And again, if you don't have permission to access the image, the command won't work. But if you have permission and the image is found, it will be pulled onto your local computer, and then your image will be turned into a container. I hope this video helped you understand how the Docker engine works under the hood. Thank you for watching and I'll see you in the next video.

7. 005 Docker Images: Hello everyone. Welcome to this video. Today, we're going to be discussing what Docker images are, how they are created, and how they are used to create Docker containers. Let's get started. You can think of an image like a container template, or a stopped container. They are used for container replication in an easily portable manner.
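The three command flows described in the Docker Engine lesson map onto three CLI invocations. A minimal sketch with illustrative image and container names (myapp, alpine) that are not from the course:

```shell
# docker build: the client sends the build context to the daemon,
# which creates an image from the Dockerfile in the current directory
docker build -t myapp:latest .

# docker pull: the daemon contacts the registry (Docker Hub by default)
# and downloads the image onto the local machine
docker pull alpine:latest

# docker run: the daemon looks for the image locally, pulls it from
# the registry if it is missing, then starts it as a container
docker run --name myapp-container myapp:latest
```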
Once you have an image ready, you can run it and it will deploy as a Docker container. You can create your own image using a Dockerfile, which we'll be discussing in later videos. You can also create dependencies for your image by adding other images into your Dockerfile. Images are also very lightweight, because they're intended to be used only for a single application or service. Under the hood, images are made up of multiple layers that get stacked on top of each other to represent a single object. These individual layers are all read-only and cannot be modified. Each layer is also given a unique ID: a SHA-256 hash. In case you didn't know, a hash is the output of a hashing algorithm, which produces a unique fingerprint of data, and SHA-256 is one of the most popular hashing algorithms used today. You can assign a name and multiple tags to an image. Image tags are used to convey important information about a specific image version or variant. While image names are pretty self-explanatory, an image tag can be anything you like. The most common tag is latest. Using the latest tag on an image indicates that it is the most recent version of the image. This tag is also the default for an image if you don't mention a tag explicitly. Images can only be made for a specific architecture, which is why multi-architecture images were introduced. Multi-architecture images can run on any computer regardless of the architecture, whether it's ARM, Intel, or AMD. I hope this video helped you understand what images are and how they are used to create containers. Thank you for watching and I'll see you in the next video.

8. 006 Docker Containers: Hello everyone. Welcome to this video.
Today, we are going to be discussing how Docker containers work in depth. Let's get started. We discussed in the previous video that images are like container templates. Well, a container is the runtime instance of an image, meaning that an image turns into a container when it is run. Docker containers can be in one of three states. The first is a running container, the second is a paused container, and the third is a stopped container. A running container is an active container which is currently carrying out the processes it is intended to. A paused container is an active container that has been stopped temporarily. Finally, a stopped container is a container that is no longer being used and is not carrying out the processes it is intended to. We've discussed quite a lot about starting containers and managing them in three different states. Now, let's talk a little about stopping containers and the two different ways to do so. The first is SIGTERM, which lets the container finish up its processes and then stops it. The second is SIGKILL, which doesn't give the container any time to finish what it's doing and stops it immediately. SIGTERM should only be used when you want to stop a container normally, while SIGKILL should only be used when you need to stop a container immediately, whether that's because of an error or an infinite loop inside the container. Finally, let's talk about restart policies. Restart policies are applied to each individual container. They're responsible for doing specific tasks in case a process inside the container fails. It's kind of like a self-healing method for containers. The first one is always, which means that whenever an error occurs inside the container, it'll always be restarted. The second one is unless-stopped, which means that if the user explicitly stops the container, it won't be restarted, and in all other circumstances, it will be restarted.
The last one is on-failure, which means that if the container exits with a non-zero exit code, it will restart the container. It will also restart the container if the daemon is restarted. In all other circumstances, the container won't be restarted. I hope this video gives you a little more insight into how containers work. Thank you for watching, and I'll see you in the next video.

9. Lab 5 Containerizing A Docker App: Hello everyone. Welcome to this video. Today we are going to be containerizing our first Docker application. What this means is that we are going to be creating the image that we will be deploying as a container in the next video. This is all going to be done from a pre-created web application, which you can access in the description below. Now, let's get started. First of all, you have to download the folder. In this folder, let's have a look at the Dockerfile and what it contains. Let's open our Dockerfile using a text editor like Notepad++ if you're on Windows, TextEdit if you're on macOS, or nano if you're on Linux. You can see that we have many lines here, so let's get started deciphering them one by one. The first line here is the FROM keyword. All Dockerfiles start with the FROM keyword. It is responsible for adding a base image, in this case alpine, as a dependency, which we can talk about as we go further. Then we have the LABEL keyword. The LABEL keyword has a key and a value, which can both be decided by the file author, and can be used to store metadata. In this case, our metadata is the creator of this file: we have a key called name, and its value inside quotation marks is my name. Then we have the WORKDIR keyword. The WORKDIR keyword sets your default working directory inside the container. It works by specifying a directory in the container to set as your current working directory.
In this case, I commented out this line, because all of the applications that we're going to be running are already executing in the root directory. Because they're being executed in the root directory, we don't need to specify any additional directories. Next, we have the RUN keyword. Anything that's written after the RUN keyword on the same line will be treated as a command that runs while the image is being built. In this case, we are installing and upgrading pip for Python. After that, we have the COPY keyword. The COPY keyword copies files from your local computer into the image while it is being built. It is a two-part command, where the first part, in this case this part, is the location on your computer, and the second part, in this case this part, is the location in the container itself. Next, we have the EXPOSE keyword. The EXPOSE keyword is responsible for setting a network communication port for the application running inside the container, so it is not completely isolated and it can still access what it needs to work properly. In this case, we are exposing port number 5000 for the web server. Finally, we have the ENTRYPOINT and CMD keywords. The ENTRYPOINT keyword is used to decide the entry point of your application. This is the first program or process that runs when your container is up and running. The arguments of the command are decided through another keyword specified after the ENTRYPOINT keyword, which is called CMD. Now, these are all the basic Dockerfile keywords that you need to know to start making Dockerfiles. Now, let's build the Dockerfile into an image that we can deploy as a container later on. This is called building an image, and can be done with the docker image build command. Let's go to the terminal. This is the syntax for the command: docker image
Build the T flag, followed by the image tag. In this case, let's give it a name of Docker app, followed by the latest tag, and then a dot. Let's decipher this. First of all, the T flag is used to specify an image tag. We haven't discussed tags yet, but we will do in further videos for now. All you need to understand is that the latest tag means that this is the latest version of the image currently being used. Then we have this dot, which basically says that the image is going to be built from the files in the current directory. As you can see, I am currently in the flask app directory, so I don't need to specify a directory. Hence, I can just put a dot. Now if I click on Enter, you can see that the image is now building and now it has finished. If I haven't looked at my images using the docker image ls command. You can see I have an image called Docker app that has just been created. I hope this video helped you understand what a Docker file is and how it helps in deploying apps to Docker, as well as the basic keyword instructions use inside Docker files to make them work properly. In the next video, we'll be going over how to deploy an image as a running container. Thank you for watching, and I'll see you in the next video. 10. Lab 6 Deploying A Docker App: Hello everyone. Welcome to this video. Today, we are going to be deploying the image that we created in the previous video. Let us get started right away. There are a range of commands that can be used for container management, like running, stopping, and restarting containers. The first command will explore is docker run. This command creates and starts your container from the image that you provide it with. You can either do Kurt and executable mode or detached mode. Detached mode means that run in the background constantly. Once it is deployed and executable mode is it will automatically be able to interact with the container once it is deployed. 
Exiting executable mode using Ctrl+C will kill the container. That is why, for now, we'll just create a detached container. Here's the syntax to create a detached container: docker run, followed by the -d flag, followed by the --name flag, followed by the name of the container. We are calling it Docker app with a capital A. And then the image name, which is also docker app, but with a small a. So let's run this command. And as you can see, the container is now running. After this, we can use this command over here, docker container ls, to output all of your running containers. So let's have a look at that as well. As you can see, this is our container. Each container is given a unique container ID, so this is the container's ID. Then we have the image that the container is using; in this case, it's the docker app image. Then you have the current command that is executing; it's a Python command. Then you have when it was created, 15 seconds ago. You have the latest status; in this case, the container started and has been up and running for 30 seconds. Then you have the ports it is communicating with; since this is a web server, it is communicating through port number 5000 using the TCP protocol. Finally, we have the name of the container itself, which is Docker app. Let's move on to some other commands. This right here is the docker pause command. The docker pause command is used to pause any container. Here's the command: docker pause Docker app, or in other words, the name of the container. Let's run this command. As you can see, it just outputted the container's name. This is normal for all of the commands that we're going to be viewing. Then let's output the container's status again, and as you can see, it is currently paused, as you can see in brackets. Now, let's unpause the container again. The docker unpause command is used to unpause a paused container. Pretty self-explanatory. So this command is docker unpause, with the container Docker app.
If we output the container status, you can see that it is now running. Then we have the docker stop command, which completely stops a container. So if we execute this command over here, you can see that it is going to output the same Docker app container name. You can see it's taking a while. We discussed SIGTERM and SIGKILL previously: the docker stop command actually sends a SIGTERM, and the docker kill command sends a SIGKILL. So the docker stop command takes a while, because SIGTERM waits a few seconds for the container to finish its processes. Now if we have a look at the output, the reason we have this effect at the end over here is because this is now a stopped container. If we list containers by default, you can see that it won't show anything, but if we use the -a flag, it will output all containers, including all the paused and stopped containers. So you can see here that this container has recently exited, 32 seconds ago. Now, we can start the container again using the docker start command, instead of doing the whole docker run command again; that's also an option. So let's just use this command here: docker start Docker app. As you can see, it worked. Then if we output normally, you can see that it's up and running, as of two seconds ago. Now, let's have a look at the docker kill command. This will stop the container immediately, because it sends a SIGKILL and not a SIGTERM. So if we run this command here, you can see it outputs the container name, Docker app. Then let's have a look at the output as well. Again, we use the -a flag, because this stops the container, and you can see it exited seven seconds ago. Finally, we have the docker remove command. Once you've stopped a container, whether it's through docker kill or docker stop, you can remove the container itself completely. You can do that using the docker remove command, or docker rm in this case.
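Collected in one place, the lifecycle commands covered in this lab look like the sequence below. The block only prints them as a reference, since actually executing them requires a running Docker daemon; the container and image names follow the lab's naming:

```shell
# Print a reference sequence of the container lifecycle commands from this lab.
# NAME and IMAGE follow the lab's naming; a running Docker daemon would be
# needed to execute the docker commands themselves.
NAME="DockerApp"
IMAGE="dockerapp"
cat <<EOF
docker run -d --name $NAME $IMAGE   # create and start, detached
docker pause $NAME                  # freeze all processes in the container
docker unpause $NAME                # resume a paused container
docker stop $NAME                   # SIGTERM, then SIGKILL after a grace period
docker start $NAME                  # start a stopped container again
docker kill $NAME                   # SIGKILL immediately
docker rm $NAME                     # remove the stopped container
docker container ls -a              # list all containers, including stopped ones
EOF
```

The order mirrors the walkthrough: run, pause/unpause, stop/start, kill, and finally remove.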
And then you specify the container's name. So let's give that a go as well. As you can see, here's the output. And then if we have a look at the normal output, you can see that nothing's outputted. And if we have a look at the output with the -a flag as well, you can see that there's still nothing in the output either. Again, the reason for this is because we removed the container completely, so Docker doesn't know that it exists anymore. It's just completely removed, and Docker doesn't really care about it anymore. So this was an explanation of what Docker containers are and how you can deploy them. I hope this video helped you understand how to deploy an image as a container and how to manage the container in different ways as well. Thank you for watching, and I'll see you in the next video. 11. Lab 7 Docker Volumes And Persistent Data: Hello everyone. Welcome to this video. Today, we are going to be setting up volumes to store persistent data in a container. Why do we need volumes in the first place? Well, if we have a container that works on and stores important information, and it quits because of an internal error for some reason, all of the data it was working on will be lost forever when the container is restarted. To prevent this, we can use volumes, which store persistent data to ensure that when the container starts back up again, the data is safe and sound. Keep in mind that Docker volumes and Kubernetes volumes are very different from each other. Docker volumes work on individual containers, while Kubernetes volumes work on a group of containers together in a single pod. We'll learn what pods are later on in the course. For now, let's create our first Docker volume. Here's the command we'll use: docker volume create, and then the name of the volume. Let's call it my new volume. Let's run the command. Here's the output: my new volume.
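As a reference sketch, the create-and-mount flow for a volume looks like the commands printed below. The block only prints them, since running them needs a Docker daemon; the volume name follows the lab's, and the /vol target and dockerapp image are assumptions based on this lab:

```shell
# Print the volume create/list/mount commands as a reference sketch.
# VOLUME follows the lab's naming; /vol and dockerapp are placeholder assumptions.
VOLUME="my-new-volume"
cat <<EOF
docker volume create $VOLUME
docker volume ls
docker run -d --mount source=$VOLUME,target=/vol dockerapp
EOF
```

The --mount flag's two comma-separated arguments, source and target, are unpacked step by step in what follows.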
As with the Docker commands previously, it just outputs the name of the volume. Now let's have a look at the docker volume ls command. This command basically just outputs all of the volumes you have available. As you can see here, we have a local volume that's called my new volume. Now, let's mount a container with the volume. We can do this by running a container and using the --mount flag to mount a volume with the container. It's a two-part flag. First of all, you have the --mount flag with the source keyword. The source keyword basically indicates what volume the container is going to be attached to. Then we separate the two arguments with a comma, and then we have the second argument, which is target equals /vol. /vol is the default directory that's created with every volume where you can store data; you can kind of think of it as a root directory, for example. Then you have the image name as normal. Let's execute this command. And as you can see, it outputs the hash, meaning that the Docker container is running perfectly. Now, let's explore the docker volume prune command. This command basically deletes all of the volumes that are not mounted to a container. How are we going to test this? Well, let's create another volume, but we won't attach this one to a Docker container. So let's run this command; the volume is called volume to be deleted. If we have a look at the volume ls command, you can see that we now have two volumes right here. If we execute this command, docker volume prune, it should only delete the volume to be deleted volume, and not the my new volume volume. I hope that's not too confusing because of the names. Let's execute the command. As you can see, it's stuck right now. The reason this is happening is because this command outputs something on the command line, so we need to have access to the command line to type yes or no for the command. Let's have a look at what I mean.
If I go to the command prompt and I enter docker volume prune, you can see it gives me this warning: "Warning! This will remove all local volumes that are not used by at least one container. Are you sure you want to continue?" You can write a y to say yes and a capital N to say no. What it says here, not used by at least one container, indicates that yes, you can use a single volume with multiple containers. That's pretty interesting. Now let's press y. And as you can see: deleted volumes, volume to be deleted; reclaimed space, 0 bytes, because we don't have any actual data inside the volume. So the volume is now deleted. Let's have a look at our volumes now. And as you can see, this over here is still stuck. Once you have a stuck command inside a Jupyter Notebook, you need to restart your kernel. I'll do that in a second. As you can see, I have reset my Jupyter Notebook and now I can run the commands properly. Let's have a look at my volumes. And as you can see, I only have the my new volume volume left. Finally, we have the docker volume remove command. This command is used to remove a specific volume by name. The problem with this command here: if I just try to run it normally, you can see it will give an error from the daemon, which basically means that you can't delete a volume if it's attached to a Docker container. So first of all, we're going to have to kill and remove the Docker container. Let's kill the Docker container first. Once this is done, let's remove it. Finally, let's remove the volume. And now if you have a look at the available volumes, you will see that there are none left. I hope this video helped you understand what volumes are and how they are used to store persistent data in Docker. Thank you for watching, and I'll see you in the next video. 12. Lab 8 Docker Hub: Hello everyone. Welcome to this video. Today, we're going to be learning what Docker Hub is and why it is so useful.
First of all, let's navigate to this website, hub.docker.com. Let's start with making an account. I already have one made, so I'll just log in directly. Keep in mind that when you create a Docker account, whenever you sign in, whether it's from the command line or the website, you cannot use an email to log in; you have to use your username at all times. Now, go to the repository section. This is the Docker Hub homepage, if you haven't figured it out already. Docker Hub is used to store images in the cloud. You will still have access to your images and can pull them from the registry at any time, whether it's using the website or the command line/terminal. You can also push images in the same manner. Docker Hub also allows you to upload multiple versions of an image to the registry if you like, by giving each a different tag. Pushing and pulling from the website is pretty straightforward, so let's try to do the same using the command line/terminal instead. Let's use the same alpine image we used in the previous videos. First of all, let's run the docker login command. As you can see, it automatically authenticated with the existing credentials from logging into the website and our Docker Hub account. Next, let's tag our image. The reason we need to do this is because when we upload an image to Docker Hub, we need to tag it properly. We can do this using the docker tag command. Here's the syntax: docker tag, and then the image name (in this case, it's alpine), then our Docker Hub username, whatever it is, then a slash, then our image name again, alpine, then a colon, and then our tag, whether it's a v1, v2, or v3, or the default latest tag. I'll just go ahead and use that. As you can see, the image has now been tagged.
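The tagging scheme just described can be sketched by building the fully qualified tag string from its parts. The username below is a placeholder assumption; substitute your own Docker Hub handle:

```shell
# Build the fully qualified image tag the way the lab describes:
# <dockerhub-username>/<image-name>:<tag>
USERNAME="your-dockerhub-username"   # placeholder: replace with your own handle
IMAGE="alpine"
TAG="latest"
FULL_TAG="$USERNAME/$IMAGE:$TAG"

# Print the tag/push/pull commands that use this name (running them
# would require a Docker daemon and a logged-in Docker Hub account).
echo "docker tag $IMAGE $FULL_TAG"
echo "docker push $FULL_TAG"
echo "docker pull $FULL_TAG"
```

The same three-part name (username, image, tag) is what appears in the repository list on the website after the push.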
So if you have a look at our existing images with docker image ls, you will see that we have a meanlord76/alpine image. Now we're going to be uploading this image to Docker Hub. We can do this using the docker push command: docker push, and then the name of the image, which is this right here. As you can see, this is currently uploading to the repository, and now it has been uploaded. Let's go to the website, and if I refresh, you can see that we now have the meanlord76/alpine image in our repositories. As you can see, you now have a repository with the image name. Let's try pulling the image instead. We can do that using the docker pull command: docker pull, and then the name of our image. If I have a look at docker image ls, it didn't update anything, because we already have the image in our local repository. But if we didn't, it would be added. Anyways, I hope this video helped you understand what Docker Hub is and why it plays an important role in the Docker life cycle. Thank you for watching, and I'll see you in the next video. 13. 009 Kubernetes Introduction: Hello everyone. Welcome to the Kubernetes part of this course. I hope you've learned a lot so far and are realizing the importance of DevOps along the way. To get started, let's have a look at the syllabus that we're going to be covering in the Kubernetes part of this course. The Kubernetes part of this course has been divided into four sections. The first section is the introduction, which will introduce you to Kubernetes and its uses. We will also be explaining why Kubernetes is a must-have when working with Docker containers, and how to install Kubernetes on your computer, whether you're using Windows, macOS, or Linux. The second section will introduce you to the Kubernetes engine and how everything works under the hood.
The third section introduces you to Kubernetes resources like pods, and how you can create your very own Kubernetes resources using YAML. It also introduces you to deleting resources and organizing pods using labels. We also have ReplicaSets, which are a continuation of ReplicationControllers; they are both very important for high availability in any microservice architecture. We also introduce you to resource management and how you can limit the amount of resources a Kubernetes resource uses. Finally, the fourth section will introduce you to Kubernetes best practices. These include testing and development best practices, app management, and client request handling. Have fun, and best of luck learning. 14. 010 Kubernetes And Its Uses: Hello everyone. Welcome to this video. Today, we are going to be learning what Kubernetes is and what its uses are. First of all, let's understand what Kubernetes actually is. Kubernetes is an open source platform that can be used to manage, orchestrate, and network containers together. Containers are grouped together in pods, which are the smallest computational units in Kubernetes. Using Kubernetes in your DevOps architecture has many benefits. Let's have a look at three benefits that Kubernetes has to offer. The first benefit is that it is compatible with multiple CSPs. These include AWS, Azure, GCP, and OCI. If you don't know what CSPs are, they are cloud service providers. They provide different services like CPU and GPU power, storage resources, application hosting, and much more. All of these services can be accessed remotely if you have an internet connection. Kubernetes and Docker can both be used with CSPs if you want to host your application in a microservice architecture. The second benefit is that Kubernetes provides a diverse range of storage options. We will cover Kubernetes volumes later on in the course, but all you need to know now is that there are many different storage options you can take advantage of in Kubernetes.
Some of these include emptyDir, GCE Persistent Disk (with the GCP cloud provider), hostPath, and many more. The final benefit is that Kubernetes can work with any application that you wish to deploy. The reason for this is that Kubernetes doesn't care about the type of application that you want to host, whether it is a web application, a database, or a back-end process. Kubernetes doesn't care about the programming languages that you use either, which is a bonus if you want to update your application later on to a different framework, for example. Now that we have discussed some advantages of Kubernetes, let's discuss some disadvantages as well. The first disadvantage is that Kubernetes deployment can be very expensive. We discussed previously that hosting an application using a microservice architecture can be very expensive resource-wise. If you decide to host your microservice application using a cloud service, you can still expect a pretty hefty price for your application. The second disadvantage is that Kubernetes is overkill for small applications. If you have an application made up of three services, the front end, the back end, and the database, you don't really need Kubernetes at all. Kubernetes is meant to be used for bigger microservice applications that have a lot of parts, as it is an orchestration and cluster-management tool made for making it easier to manage complex microservice architectures. If your application only has a few parts, you can use Docker to deploy the application in containers, and that will work out just fine. The final disadvantage is that Kubernetes can be quite complex to use. While it is much better than managing Docker containers manually, it's still quite a hassle to manage everything properly, especially if you have a complex microservice architecture with multiple back ends, parts, and interconnected networks. I hope this video helped you understand what Kubernetes is and what it is used for.
Thank you for watching, and I'll see you in the next video. 15. Lab 9 Kubernetes Installation (Windows): Hello everyone. Welcome to this video. Today, we're going to be installing Kubernetes on a Windows computer. First of all, navigate to your Docker Desktop dashboard. Then click on the Settings icon in the top right. Then let's have a look at the tabs over here. Go to the bottom tab, which says Kubernetes. Then click on the checkbox that says Enable Kubernetes. Then click on Apply & Restart, and click on Install. You need an internet connection. And as you can see, Kubernetes is now installing; you can confirm this by having a look at the icon at the bottom left, which is currently orange. As you can see, Kubernetes has now been successfully installed. You can confirm whether it's successfully installed or not by having a look at this icon in the bottom left: if it's green, it means Kubernetes is now installed. I hope this video helped you understand how to install Kubernetes on your Windows computer. Thank you for watching, and I'll see you in the next video. 16. 011 The Kubernetes Engine: Hello everyone. Welcome to this video. Today, we're going to be learning about how Kubernetes works under the hood. Let's get started. It is important to understand that the Kubernetes engine, which is how Kubernetes works under the hood, is divided into two sections. The first section is called the control plane, or master node, and the second section is called the nodes, or worker nodes to be precise. The master node is what controls a Kubernetes cluster and makes it function. The worker nodes are machines that run your containerized applications. They're responsible for running, monitoring, and providing resources and services to your application. Now that we have a basic idea of the two sections that make up the Kubernetes engine, let's dive deeper into what components make up each section. Let's start with the master node. The first component we'll discuss is the etcd
component. It is a distributed data store that persistently stores the configuration data of the cluster; in case the cluster goes down or is deactivated temporarily, the configuration will still be remembered by the Kubernetes engine once it comes back online. You can try this out for yourself as well. Navigate to your Docker Desktop dashboard and click on Settings. Then click on the Kubernetes section and uncheck the Enable Kubernetes checkbox. Then click on Apply and save changes. Once Kubernetes is disabled, re-enable it by checking the checkbox and saving changes again. You will notice that all of your Kubernetes resources are still available and nothing has changed. Next, we have the API server. The API server is how the client (which is you) and the master node's other control-plane components communicate together. The API server also enables you to communicate with the different worker nodes in your cluster, and their components as well. Then we have the scheduler. The scheduler schedules your apps, meaning that it assigns a worker node to each part of your application. This enables resource provisioning, management, and monitoring for each part of your application individually, in order to create an efficient microservice architecture for your application. Finally, we have the controller manager. The controller manager performs cluster-level functions like replicating components (using ReplicationControllers and ReplicaSets), keeping track of all the worker nodes in the cluster, handling failures and crashes, et cetera. Now, let's move on to the worker node. The first component we'll discuss is the kubelet component. The kubelet is responsible for communicating with the API server on the master node's control plane, and for managing the containers on its node. It directly communicates with the container runtime, which is responsible for actually running your containers.
There are actually many container runtimes that we can use, which we discussed in a previous video as well. The two most popular options are Docker and rkt. Finally, we have the kube-proxy, or Kubernetes service proxy. The kube-proxy load-balances network traffic between application components. It is pretty much like a Kubernetes Service resource, which assigns a static IP address that clients can use to communicate with multiple replicas of an application part, like the front end, for example. I hope this video helped you understand how Kubernetes works under the hood. Thank you for watching, and I'll see you in the next video. 17. Lab 12 Introducing Pods: Hello everyone. Welcome to this video. Today, we're going to be exploring what pods are and why we use them. We will also be learning how to set up your own pods. Let's get started. As mentioned in a previous video, pods are the smallest computational resource in Kubernetes. Pods do not do the computation themselves, but are just a managerial resource for the containers contained inside them. We use pods to group together containers that perform similar processes. Well, how about putting all of our processes into a single container? There are many downsides to this, because containers are made to contain individual processes, rather than the whole application or a part of the application like the back end; that would make it a monolithic architecture. This makes debugging and networking extremely difficult and defeats the purpose of a microservice architecture in the first place, which is high availability. The more processes inside a container, the more processes are unavailable if that container goes offline.
Cloning the containers for high availability is inadvisable too, because it would mean cloning the whole application, and that would use up twice as many resources on the server, which is no good. This is why pods have been designed to group together containers with similar or interconnected processes, and to enable networking between those containers as well. Let's say your back-end application has three parts: you can group them together in a single pod. Now that you understand what pods are and why we use them, let us create our own pod. To create a pod, you need to create a file called a YAML file. A YAML file is similar to a Dockerfile, but it has a wide variety of other uses as well, and here it's used for describing how a pod is going to be created. We will explain the different parts of a YAML file in the next video, but for now, let's copy and paste the code into a text editor, like TextEdit for macOS, Notepad++ for Windows, or nano for Linux. Change the Docker Hub username over here to your actual Docker Hub username. Now save the file with the name example.yaml, and then let's create the pod. If I head over to my terminal, here's my terminal. Keep in mind that the command that we're going to be using can be used for creating any other resource in Kubernetes, not just pods. You should get a message that says that the pod has been created successfully. Here's the command: kubectl create, the -f flag, followed by the name of the YAML file, example, and then the extension .yaml. And as you can see, the pod has now been created. If I have a look at kubectl get pods, you can see that the pod is running. You can also output the YAML of an existing pod by using this command: kubectl get po (short for pods), then the pod name, example, and then the -o flag followed by yaml. And here is the YAML of the pod. It's huge, but you can also output it in another way.
Let's run the same command, except let's replace the yaml at the end with json, and then we can output the pod as JSON instead. I hope this video helped you understand what pods are and how to create them. Thank you for watching, and I'll see you in the next video. 18. Lab 13 The Basics Of YAML: Hello everyone. Welcome to this video. Today we're going to be learning the basics of YAML in Kubernetes. Before we start, it's important to mention that, unlike with Dockerfiles, you won't be able to create every Kubernetes resource that you wish after this video. The reason for this is that there are many Kubernetes resources, and each has its own sections that you need to add or remove to create it. Without further ado, let's get started. YAML files are divided into a series of sections. Each section has a different purpose, and while each Kubernetes resource has its own unique sections, there are a few general sections that each and every YAML file must have. The first is the apiVersion section, which you can see over here. This specifies the version of the Kubernetes API that is to be used. The most common version is v1, which is also called version one. The second is the kind section. This section specifies the type of Kubernetes resource that you are creating; in this case, we created a pod in the last video. The third is the metadata section. The metadata section contains the metadata of the Kubernetes resource, like its name and labels. The fourth and final section is the spec section, which contains the technical specifications of the Kubernetes resource that we are creating. The contents of this section are what differ the most between different Kubernetes resources. In this case, we are specifying the container that we will be creating with the pod, including its image, name, and ports. I hope this video gave you a general understanding of the four most important YAML sections used with all Kubernetes resources. Thank you for watching, and I'll see you in the next video.
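Putting the four sections together, here is a sketch of what the lab's example.yaml plausibly looks like. The image path and port are assumptions modeled on the Flask app from the earlier Docker labs, not the lab's exact file; the username is a placeholder:

```shell
# Write a minimal pod manifest showing the four required sections:
# apiVersion, kind, metadata, and spec. The image path and containerPort
# are assumptions based on the Flask app used earlier in the course.
cat > example.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: dockerapp
    image: your-dockerhub-username/dockerapp:latest
    ports:
    - containerPort: 5000
EOF

cat example.yaml
```

A manifest like this is what kubectl create -f example.yaml consumes.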
19. Lab 14 Deleting Resources: Hello everyone. Welcome to this video. Today, we are going to be learning how to delete resources in Kubernetes. Let's get started. First of all, let's start with learning how to delete pods. Deleting any resource in Kubernetes requires you to use the syntax kubectl delete, followed by the resource type, followed by the resource name. Let's delete our pod. If we have a look at our running pods with kubectl get pods, we have the example pod. How can we delete this pod? We can do this using the kubectl delete command: kubectl delete, followed by pod, which is the resource type, followed by the pod's name, example. So: kubectl delete pod example. And as you can see, it says pod example deleted. And as you can see, my command is currently stuck. This is because it's sending a SIGTERM to kill the pod. A SIGTERM, as we discussed, means that the containers get up to 30 seconds to finish, and if the containers don't finish up in time, then a SIGKILL is sent to forcefully shut them down. And as you can see, the pod has now been deleted. I hope this video helped you understand how to delete resources in Kubernetes, including pods and other resources that we will cover in upcoming videos. Keep in mind that you can use the kubectl delete command with any resource that you would like to delete in Kubernetes. Thank you for watching, and I will see you in the next video. 20. Lab 15 Organizing Pods Using Labels: Hello everyone. Welcome to this video. Today, we're going to be discussing what labels are and how they can be used to organize your Kubernetes resources. Let's get started. Labels can be used to organize pods and also other Kubernetes resources. A label is an arbitrary key-value pair that you attach to a resource. You can use a label selector to filter out resources based on given labels. You can also add multiple labels.
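As a quick sketch of where labels live in a manifest, here is a pod whose metadata carries the two labels used in this lab. The image path is a placeholder assumption:

```shell
# Write a pod manifest whose metadata carries the two labels from this lab:
# creation_method and env. The image path is a placeholder assumption.
cat > example-labeled.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    creation_method: manual
    env: prod
spec:
  containers:
  - name: dockerapp
    image: your-dockerhub-username/dockerapp:latest
EOF

# Show just the labels section we added under metadata
grep -A 2 'labels:' example-labeled.yaml
```

Note that labels sit under metadata, alongside the resource name, not under spec.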
If you want to add a label in a resource's YAML file, you add a labels section in the metadata section. I'll use the same YAML file for the pod that we created in the first video of this section, and I will add a labels section like this. First of all, we have the heading labels, followed by the first key-value pair, creation_method, with the value manual, and then another key-value pair, env (or environment), with the value production. The creation_method key-value pair specifies the creation method of this pod, whether it was done manually or automatically. Then the env key-value pair specifies the environment of the pod, whether it's placed in production, development, testing, or any other environment the company may be using. Again, you can create any key-value pair that you like. We can also just define the value as prod for short, which means production. Now, let's create the pod. We'll use the same command that we used before: kubectl create, the -f flag... oh, I forgot to save. There we go. kubectl create -f, and then the name of the YAML file, example, and then the extension .yaml. And as you can see, the pod has now been created. Now if we run the command kubectl get pods, we can see the pod is running, but where are the labels? We can't see any labels. We have to use the --show-labels flag with the kubectl get pods command: kubectl get pods --show-labels. And here we can see the two key-value pairs that we have: creation_method manual, and env production. Now, let's modify the existing labels and add new labels to this pod. To add or override a label, we will use the kubectl label command, like this: kubectl label pods, then the pod name, example, followed by the new key-value pair. Let's call it app equals flask, which is the type of web application that this container, or pod, is currently running. And now the pod has been labeled. If we run the command again, kubectl get pods --show-labels.
You can see we have a new label, app=flask. Now that we've covered how to add and modify labels, let's learn how to overwrite them. If we want to change the value of an existing label, we can write the same key with a different value, with the additional --overwrite flag. If we do something like this, kubectl label pods example, and then we write the same key-value pair again, it'll say that app, the key, already has a value, which is flask, and overwrite is false. What we can do instead is something like this, where we replace the value with another one, let's say app=python. And if we add the --overwrite flag, the label has now been overwritten. So if we show the output again, kubectl get pods --show-labels, you can see that it has now been changed from app=flask to app=python. Now, the benefit of doing this labeling is that you can use label selectors. We use the normal kubectl get, resource name, command to view the availability of specific resources, but by adding the -l flag, we can also filter out the ones that have a specific label. That's what label selectors are. Something like this: kubectl get pods -l, followed by the key and the value. For the key in this case, let's use creation_method=automatic. And if we run this, it outputs nothing. The reason for that is because we don't have a label that matches this. Instead, if we write manual, it outputs the example pod. Let's try something else. You can optionally leave out the value if you want to display all the resources that have a specific key. Let's try doing this as well: kubectl get pods -l, followed by just the app key. And as you can see, it still displays the example pod. I hope this video helped you understand what labels are and how you can use them to filter your resources. Thank you for watching, and I'll see you in the next video. 21. Lab 16 Introduction To Namespaces: Hello everyone.
Welcome to this video. Today, we're going to be learning what namespaces are and how they're used in Kubernetes. Note that these aren't the Linux namespaces used for process isolation, which is why we'll refer to them as Kubernetes namespaces from now on. Let's start. Kubernetes namespaces are a way of creating non-overlapping groups for resources. They are useful when you're working with different teams or projects that all share the same Kubernetes cluster, but you still need to divide each project into a different group. By the way, a Kubernetes cluster is the set of machines that runs your applications; organizations usually have just one Kubernetes cluster and divide their processes and applications into different Kubernetes namespaces. But it is possible to create a multi-cluster Kubernetes architecture. It isn't possible to do this with Minikube or Docker Desktop, so we can't test it out, but using a cloud service like GCP, or Google Cloud Platform, is definitely a possibility. Kubernetes namespaces can also communicate with each other, even though they are logically separated. Now that you know what Kubernetes namespaces are, let's create our own custom namespace. To do this, let's create a YAML file with this information inside it. First of all, we have the API version of Kubernetes, v1. Then we have the kind of resource, Namespace. Then we have our metadata, which includes the name of our new namespace. Let's just call it new-namespace. So to create a namespace, we have to change the kind of resource to Namespace and give it a name. Next, let's create the namespace using the kubectl create command, then the file name. In this case, we saved it as namespace.yaml, so we'll just write that. As you can see, my namespace has now been created. Once you have created a namespace, note that every command you run still applies to your default namespace unless you say otherwise.
The same operations can be performed in another namespace by just adding the -n flag. Here is an example of creating a pod in our new namespace, using the YAML file we used last time. If I run this command, kubectl get pods, you can see that the example pod we created in the last video has been running for 16 minutes. But now, what if we want to create the same pod again, but in our new namespace instead of the default one? I can do this using this command: kubectl create with the -f flag, followed by the YAML file, example.yaml, and then the -n flag, followed by the namespace's name, new-namespace. And as you can see, the pod has been created. If I run the command again, kubectl get pods, I can't actually see the new pod. I need to add the -n flag at the end of this command as well: kubectl get pods -n, followed by the name of the namespace, new-namespace. And here you can see that the pod was started 25 seconds ago, which is what we just did. I hope this video helped you understand what Kubernetes namespaces are, why they're used, and how to create and manage one. Thank you for watching, and I'll see you in the next video. 22. Lab 17 Introduction To ReplicationControllers: Hello everyone. Welcome to this video. Today, we are going to be discussing what replication controllers are and why they're such an important tool in Kubernetes for high availability. Let's get started. Replication controllers are a Kubernetes resource that ensures pods are always running. If a pod goes down, the replication controller replicates the pod with all the containers inside it, hence the name replication controller. Replication controllers also have the option to keep multiple replicas of the same pod running at all times. If the number of pods falls below a specific threshold, the replication controller creates more.
A replication controller has three parts. The first part is the label selector. The label selector selects the pods to replicate using a specific label that is provided. The second part is the replica count. The replica count controls how many replicas of a pod must be running at all times. The third part is the pod template. The template controls what the replication controller bases itself on to create a new pod. Now that we know what a replication controller is and what its benefits are, let's learn how we can create our very own replication controller. To create your own replication controller, we need to create a YAML file for it. Copy and paste this code into a notepad application, like Notepad++ for Windows, TextEdit for macOS, or Nano for Linux. Let's have a look at what this file does. First of all, we specify the kind of resource in the YAML file. Then we specify the number of replicas inside the YAML file as well. Now, let's create the replication controller using the kubectl create command: kubectl create with the -f flag, followed by the YAML file's name. In this case, it's replication-controller.yaml. And as you can see, the replication controller has been created. Once it's created, you can run the kubectl get pods command, and you can see that the three pods assigned to the replication controller are now running or being created. There's also a familiar command we can try, called kubectl describe. kubectl describe is used to describe the contents of any Kubernetes resource, and this includes replication controllers. Here's an example: kubectl describe rc, where rc is short for replication controller, and then the replication controller's name, replication-controller. And here, it displays the YAML and settings of the replication controller. We can also use another command, called kubectl edit. The kubectl edit command is used to edit the YAML of any Kubernetes resource that you wish, without creating a new instance.
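The three parts just described — label selector, replica count, and pod template — come together in a manifest roughly like the sketch below. The names and the image tag are assumptions carried over from the course's Flask app, not necessarily the exact on-screen file.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: replication-controller
spec:
  replicas: 3                    # replica count: how many pods to keep running
  selector:
    app: flask                   # label selector: which pods this controller manages
  template:                      # pod template used to create replacement pods
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask-app
        image: flask-app:latest  # placeholder image name
        ports:
        - containerPort: 5000
```

Note that the labels in the template must match the selector, otherwise the controller would create pods it can never "see".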
Here's the syntax: kubectl edit rc, then the replication controller's name. And as you can see, it opens up a notepad with the YAML that belongs to this replication controller, and I can edit it over here, save it, and then the changes will be applied. I hope this video helped you understand what replication controllers are and how they play an integral role in the Kubernetes environment by providing high availability to pods. Thank you for watching, and I'll see you in the next video. 23. Lab 18 Introduction To ReplicaSets: Hello everyone. Welcome to this video. Today, we're going to be learning about what ReplicaSets are and why you should use them instead of replication controllers. Let's get started. The truth is that you don't really need replication controllers. In the latest versions of Kubernetes, ReplicaSets have been created as an alternative to replication controllers, as they are much more effective. One of the main benefits of ReplicaSets over replication controllers is the ability to use more expressive label selectors. The only reason I showed you how to use replication controllers is, one, because in some Kubernetes architectures that are outdated, replication controllers may still be used instead of ReplicaSets, and two, because ReplicaSets are harder to understand if you don't know how to use replication controllers in the first place. Let's have a look at the YAML file and commands that are used to create a ReplicaSet. First of all, let's create a new file in an application like Notepad++ on Windows, TextEdit on macOS, or Nano on Linux. Here's the YAML code that we're going to be using. There are two things to note here. First of all, the API version. The API version we're using here is a separate API group used for specific resources, called apps. Second, we have a matchLabels heading under selector. This matchLabels heading is there because we can use other options as well for more expressive label selecting.
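A minimal ReplicaSet manifest along these lines might look like the sketch below; note the apps API group in the version string and the matchLabels heading under selector. The image name is a placeholder for the course's Flask app.

```yaml
apiVersion: apps/v1            # ReplicaSets live in the separate apps API group
kind: ReplicaSet
metadata:
  name: replicaset
spec:
  replicas: 3
  selector:
    matchLabels:               # simple equality-based label selector
      app: flask
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask-app
        image: flask-app:latest  # placeholder image name
```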
The matchLabels selector acts just like a replication controller's normal selector. We'll use this for now, but later on, we'll use the matchExpressions selector, which is more verbose. Let's save the file as replicaset.yaml and switch over to our command line/terminal window. To create a ReplicaSet, we'll be using the kubectl create command: kubectl create with the -f flag, and then replicaset.yaml. And as you can see, the ReplicaSet has been created. Now, let's try looking at our pods: kubectl get pods. And as you can see, we have our three pods; they're either running or being created. If we have a look at our ReplicaSet, kubectl get rs, we can see that we have a ReplicaSet that is generating three pods and keeping them running all the time. Let's try label selecting: kubectl get pods with the -l flag, and then let's try a key and a value. In our YAML file, we used the key app and the value flask, so let's try this: app=flask. And as you can see, it selects all three pods. Let's try it with just the key: kubectl get pods with the -l flag, and then just the key app. And again, it selects all three pods. This is working exactly like the replication controller. But now, let's delete the ReplicaSet and create another ReplicaSet with a more advanced label selector. We can delete the ReplicaSet using this command: kubectl delete rs, and then the ReplicaSet's name, replicaset. And now, let's modify our file with the YAML code that we're going to be using instead. The difference here is that we are using the matchExpressions selector instead of the matchLabels selector. The matchExpressions selector is more verbose because you can select the operator you want to use for label selecting as well.
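The matchExpressions variant only changes the selector block inside the ReplicaSet spec; a sketch of what that block might look like, here using the In operator, is below. The key and values mirror the app=flask label used earlier in this lab.

```yaml
  selector:
    matchExpressions:
    - key: app           # the label key to test
      operator: In       # the label's value must be one of those listed below
      values:
      - flask
```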
We used the In operator, but there are three other operators that you can use as well. The first is the NotIn operator, which means that the key and the values entered must not be included inside the label. The second is the Exists operator, which means that the label should only have the key specified. The third is the DoesNotExist operator, which means that the label should not include the specified key. The DoesNotExist operator is the polar opposite of the Exists operator, and the NotIn operator is the polar opposite of the In operator. Now, let's create the ReplicaSet again: kubectl create with the -f flag, and then replicaset.yaml. And as you can see, the ReplicaSet has been created again. Let's have a look, and we can see our three pods are either running or being created. Let's have a look at our ReplicaSet, and we can see that we have three pods here as well. Now, what's the difference? Well, let's have a look at the label selector: kubectl get pods with the same app=flask. And you can see that it works just like the replication controller and the previous matchLabels selector. Let's try it with just the key: kubectl get pods, app. And again, it works in just the same way. This is because we used the In operator. If you use the other operators, you will get different outputs; feel free to try it yourself. I hope this video helped you understand what ReplicaSets are and why they're used instead of replication controllers. Thank you for watching, and I'll see you in the next video. 24. Lab 19 Introduction To CronJobs: Hello everyone. Welcome to this video. Today, we're going to be learning about cron jobs and how they are used to schedule specific tasks for specific times and dates. Let's get started. The whole purpose of a cron job is to be able to schedule a specific task to be automatically executed at its appointed time.
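Jumping ahead slightly, the kind of CronJob manifest this lab builds can be sketched roughly as below. The apiVersion matches the beta batch API mentioned later in the lab (newer clusters use batch/v1), and the image name is a placeholder, so treat this as an outline rather than the exact on-screen file.

```yaml
apiVersion: batch/v1beta1            # the beta API mentioned in this lab; batch/v1 on newer clusters
kind: CronJob
metadata:
  name: cronjob
spec:
  schedule: "0,15,30,45 * * * *"     # minutes 0, 15, 30 and 45 of every hour
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure   # a restart policy is required for Job pods
          containers:
          - name: flask-app
            image: flask-app:latest  # placeholder image name
```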
Think of it like a YouTube premiere, where a creator can schedule exactly when a video is supposed to be released and let YouTube do the job automatically. This is one example of a cron job. In this example, we are going to be creating a pod using a CronJob. To create a CronJob, you need to create an empty file in a text editor like Notepad++ for Windows, TextEdit for macOS, or Nano for Linux. Here's the YAML code. Like the ReplicaSet, this resource is only accessible in a beta version of the Kubernetes API, as you can see over here. In the spec section, we have to specify the cron job timings inside a string, as so. In the spec section of the template section over here, we also have to specify the restart policy of our container. In this case, it is OnFailure. Now that we have a YAML file, let's create our CronJob. Let's run the kubectl create command: kubectl create with the -f flag, and then the name of the file. In this case, it's cronjob.yaml. When you run the command, depending on your time, the cron job is either waiting to be executed or has executed for the first time. Let's have a look: kubectl get cronjob. And as you can see, the cron job is scheduled to run at minutes 0, 15, 30, and 45 of every hour. What this actually means is that it runs every 15 minutes, indefinitely. We can also delete the CronJob using the kubectl delete command: kubectl delete cronjobs cronjob. And as you can see, the CronJob has now been deleted. To understand how we configured our cron job timings, let's have a look at the official Kubernetes documentation. As you can see, I am on the official Kubernetes documentation page. If I scroll down, this graph over here explains how cron schedules are configured in Kubernetes. There are a total of five sections in a cron schedule. As you can see, there are five asterisks to mark the five sections. The first section specifies the minutes, from 0 to 59. The second specifies the hours, from 0 to 23.
The third specifies the day of the month, from 1 to 31. The fourth specifies the month itself, from 1 to 12. And finally, the fifth specifies the day of the week, from 0 to 6. For the day of the week, Sunday can also be referred to as 7 on some machines, in addition to 0. We can combine these sections to get the results that we want from a cron job. In this case, we specified the cron job to run every 15 minutes. As you can see over here, we have commas. The commas are used to specify multiple values in a single section. If we just wrote one value, like 0, for example, the cron job would run every hour at 0 minutes instead of every 15 minutes. The asterisks themselves over here mean that the job will run at every instance of that section. In this case, it's every hour, on every day of the month, in every month, on every day of the week. You can see how this can become complex very quickly, so make sure that you practice. I hope this video helped you understand what cron jobs are and why they play such a vital role in any Kubernetes deployment strategy. Thank you for watching, and I'll see you in the next video. 25. Lab 20 Introduction To Services: Hello everyone. Welcome to this video. Today, we are going to be learning about services and how they can be used to expose IP addresses for clients to connect to. Let's get started. Previously, we discussed the high availability of pods and how you can create multiple replicas of a pod using replication controllers and ReplicaSets. Well, if we make multiple copies of a pod and want to distribute the number of clients equally to each pod replica, how can we do that? This is done using services. Services have a static IP address which clients can connect to, to gain access to the application.
When a client connects, the service acts like a load balancer, which distributes the load equally to all the pods it is connected to. Let's see how we can create our own service and attach our three replicated pods to it. First of all, open an empty file in an application like Notepad++ for Windows, TextEdit for macOS, or Nano for Linux. This is the YAML code that we're going to be using. Here, we're specifying the port which is going to be receiving our client requests, which is port 8080. The clients are then redirected to the target port, which is port 5000. The reason we're choosing port 5000 is because our Flask application is exposing port 5000, and that port is ultimately our clients' destination. We will save this file as service.yaml. Let's create the service using the kubectl create command: kubectl create with the -f flag, and then the name of the file, service.yaml. And as you can see, the service has been created. If we use this command, kubectl get svc, we can see that we just created a new service eight seconds ago. This other service over here, called kubernetes, is a default service that comes with any Kubernetes cluster, so you don't need to worry about it. Now, let's delete our service: kubectl delete svc service. And as you can see, our service has been deleted. If you only want to port forward temporarily, instead of creating a service, you can use the kubectl port-forward command. This is the syntax: kubectl port-forward, the pod name, let's use our example pod, and then the initial port, 8080, where clients are going to be connecting, and then our forward port, which is port 5000. And as you can see, it is currently port forwarding from port 8080 to 5000. If I press Ctrl+C, then the port forwarding is stopped. I hope this video helped you understand what services are and how they can be used to load balance clients across multiple replicas of your pod.
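The service manifest described in this lab might look like the sketch below; the selector label is an assumption carried over from the ReplicaSet lab, and the name is a placeholder.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  selector:
    app: flask         # traffic is load-balanced across pods carrying this label
  ports:
  - port: 8080         # the port clients connect to on the service
    targetPort: 5000   # the port the Flask app exposes inside each pod
```

The temporary alternative mentioned above would then be `kubectl port-forward example 8080:5000`, which forwards local port 8080 to port 5000 on the example pod until you press Ctrl+C.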
Thank you for watching, and I'll see you in the next video. 26. Lab 21 Introduction To Kubernetes Volumes: Hello everyone. Welcome to this video. Today, we're going to be learning about Kubernetes volumes and how they can be used to share data across multiple containers in a pod. Let's get started. Like Docker volumes, Kubernetes volumes are capable of storing data belonging to all the containers inside a pod. It is important to note that there are many types of volumes in Kubernetes, and it would be impossible to cover all of them here, so we're going to be using the most common one, which is the emptyDir volume. The emptyDir volume is just an empty volume used for storing data. Keep in mind that you cannot use kubectl get to view volumes in your cluster. This is because they are mounted to the specific pod and its containers that you specify in the YAML file. Hence, it's like an internal directory that is only accessible to the programs inside the pod's containers. To create an emptyDir volume, open an empty file in a text editor like Notepad++ for Windows, TextEdit for macOS, or Nano for Linux. Then copy this YAML code. First of all, in the containers header, we're specifying the containers that we want to deploy with the pod. Then we're specifying the volume mount. The volume mount is where the volume is mounted inside the container. For both of our containers, we are using the root directory as the mount. Then there's the volumes section, which is this last section over here, where we're defining the volume to be used. In this case, we are creating one volume, called volume. All right, let's save the file as volume.yaml. Now that we've defined our YAML file, let's deploy our pod and see what happens: kubectl create with the -f flag, and then the name of the YAML file, volume.yaml. And as you can see, the pod has been created.
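A sketch of the two-container pod with a shared emptyDir volume might look like the manifest below. The image names are placeholders, and note one deviation: the lab mounts at the root directory, but mounting an emptyDir at / would shadow the container's own filesystem, so this sketch uses a dedicated /data path instead.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume
spec:
  containers:
  - name: flask-app
    image: flask-app:latest  # placeholder image name
    volumeMounts:
    - name: volume
      mountPath: /data       # the shared directory as seen by this container
  - name: alpine
    image: alpine            # exits quickly, causing the error discussed below
    volumeMounts:
    - name: volume
      mountPath: /data
  volumes:
  - name: volume
    emptyDir: {}             # an empty scratch volume shared by both containers
```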
When we create the pod, it also automatically creates our volume, called volume, and mounts it to the containers inside the pod. Let's have a look: kubectl get pods. And as we can see, we have a pod called volume that is trying to create two containers. Over here, as you can see, there is a create container error. Nothing to be alarmed about. The only reason this is happening is because we are using the alpine image as one of our containers. Because the alpine image itself isn't a proper application, it automatically exits once its work is done. In this case, the only work that we assigned to the container was to load the Alpine image; hence, it automatically exits after a few seconds. I hope this video helped you understand what Kubernetes volumes are and how they are used to store data belonging to multiple containers in a pod. Thank you for watching, and I'll see you in the next video. 27. Lab 22 Managing Pod Computational Resources: Hello everyone. Welcome to this video. Today, we're going to be learning how to manage our pods' computational resources in Kubernetes. Let's get started. There are two things that we can do to manage our pods' computational resources. The first is to set resource requests, which are the minimum amount of CPU and memory, a.k.a. RAM, that a pod needs to run. You can think of this as the minimum resource threshold. The second is to set resource limits, which are the maximum amount of CPU and memory that a pod is allowed to use if needed. You can think of this as the maximum resource threshold. Now that you understand the two ways to manage our pods' computational resources, it's time to deploy our own pod with maximum and minimum resource thresholds. To create a pod with resource thresholds, open an empty file in a text editor like Notepad++ for Windows, TextEdit for macOS, or Nano for Linux.
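For reference, a manifest matching the requests and limits described in this lab might look like the sketch below; the pod and image names are placeholders, but the resource quantities follow the values discussed next.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resources
spec:
  containers:
  - name: flask-app
    image: flask-app:latest  # placeholder image name
    resources:
      requests:              # the minimum guaranteed to the pod
        cpu: 200m            # 200 millicores
        memory: 10Mi         # 10 mebibytes
      limits:                # the maximum the pod may use
        cpu: 1               # one full CPU core
        memory: 20Mi         # 20 mebibytes
```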
Then copy this YAML code. The important thing to note here is this section over here, the resources section. We have two subsections inside, called requests and limits. The requests section specifies the minimum CPU and memory that should be provided to the pod at all times. In this case, the minimum resources assigned are 200 millicores for the CPU and 10 mebibytes for the memory. The limits section specifies the maximum CPU and memory that should be given to the pod if needed. In this case, the maximum resources assigned are one core for the CPU and 20 mebibytes for the memory. Let's save this file as resources.yaml and deploy our pod to see what happens. Let's deploy the pod using the kubectl create command: kubectl create with the -f flag, followed by the name of the file, which is resources.yaml. And as you can see, the pod has now been created. Let's have a look: kubectl get pods. And as you can see, the pod is currently running. Let's have a deeper look into the YAML of the pod to see what's actually going on underneath. This can be done using the kubectl describe command. Here's the syntax: kubectl describe pods, which is the resource type, and then the actual resource name, which is resources. And as you can see, here is the YAML that is currently being used. So if you have a look over here, you can see we have a limits and a requests section. As you can see, the limits are one CPU core and 20 mebibytes for the memory, and the requests are 200 millicores for the CPU and 10 mebibytes for the memory. I hope this video helped you understand the two ways you can manage your pods' computational resources. Thank you for watching, and I'll see you in the next video. 28. 015 Software Development Principles: Hello everyone. Welcome to this video.
Today, we're going to be doing something rather interesting. We're going to be exploring some software development principles, which are all acronyms that are commonly used in the IT field today. Let's get started. The first software development principle is YAGNI, which means "you ain't gonna need it". This rule basically tells you to think of the core concepts that you may need to implement now, and nothing else. Don't think about what you might need in the future. Instead, keep it short and simple, and leave everything else to actually do in the future. This enables you to focus on the important stuff and leave everything else for later. The second software development principle is DRY, which means "don't repeat yourself". It is the complete opposite of WET, which means "write every time". You should implement the DRY principle and not the WET principle, because the DRY principle wants you to write code in a way where it's easily compatible with new features, while the WET principle wants you to rewrite the whole codebase when you want to add a new feature, which isn't ideal at all. The third and final software development principle is KISS, which means "keep it simple, stupid". This one's pretty self-explanatory, and all it means is to keep it simple. Don't implement complex, spooky algorithms for no real reason. Try to think of the simplest solution in your head and implement it the best you can. I hope this video helped you learn some of the most important software development principles out there, even though they have very weird names. But then again, that probably makes them much easier to remember. Thank you for watching, and I'll see you in the next video. 29. 016 Kubernetes And Docker Best Practices: Hello everyone. Welcome to this video. Today, we're going to be discussing some of the best practices for developing using Docker and Kubernetes. Let's get started.
The first best practice you should start implementing is not using unnecessary image dependencies in Dockerfiles. If you don't need it, there's absolutely no reason you should use it. This is especially true with containers, because they need to be as light as possible in order to carry out their tasks quickly and efficiently. Even if you're unsure whether you should use a dependency or not, it's better to try and find a way out of using it, to be on the safe side. The second best practice is to use the CMD and ENTRYPOINT keywords together, where the ENTRYPOINT keyword contains the main command to be run, and the CMD keyword contains the arguments of that command. If you go back to the "Containerizing A Docker App" video in section number 3, you will see that I used both the CMD and ENTRYPOINT keywords together as well. The third best practice is to use spaces instead of tabs in YAML files. The reason for this is because YAML doesn't register sections properly if you use tabs. That's why, to be on the safe side, it's better to use spaces instead. The fourth and last best practice is that YAML is case sensitive, so other than entering names and labels, make sure everything else in the YAML file is lowercase. If you go back to "The Basics Of YAML" video, you'll see that the first YAML file introduced in this course had spaces instead of tabs, and it was also in lowercase. I hope this video gave you some helpful tips on how you should be developing using Docker and Kubernetes. Thank you for watching, and I'll see you in the next video. 30. Kubernetes Capstone Project: Hello everyone. Welcome to this video. Today, I'm presenting to you the final capstone project for all the Kubernetes lessons that we have covered so far. There are four things you need to do to pass this final project. First of all, you need to delete the Docker container that you created in the previous Docker capstone project.
Next, you need to create a pod that contains the Flask application, with a ReplicaSet attached to it. The ReplicaSet should have a replica count of five. Then, you have to create a Kubernetes service that exposes a random IP address to the Flask application. A quick hint here: to find the port of the Flask application to be exposed, have a look at the YAML of the pod that we created. Finally, you need to change the YAML of the pod to keep three replicas instead of five. Once this is completed, you have officially passed the second and final capstone project. Congratulations! Keep in mind that solutions have been provided at the end of this PDF document, but try the project yourself first before moving on to the solution. I hope you enjoyed our journey throughout this course and learned about the wonderful world of DevOps and the two most important tools that make it a reality, a.k.a. Docker and Kubernetes. Consider checking out some of our other courses, like our Git course on version control, and our Python beginners course, if you want to learn how to make Flask applications of your own. Thank you for joining me on this wonderful journey, and I wish you the best of luck in your future. Keep on learning. 31. Docker Capstone Project: Hello everyone. Welcome to this video. Today, I'm presenting to you the final capstone project for all the Docker lessons that we have covered so far. This capstone will walk you through the entire process of deploying a Flask application, from creating the image for a pre-existing Flask application, to deploying the image as a container. To complete this capstone, you have to have watched all the Docker videos we have covered so far and practiced the practical labs as well. You must be used to creating Dockerfiles, images, and containers to attempt this capstone. All of these concepts have been explained in sections 1 to 4. This capstone is a four-step process.
First of all, you have to create an image with the Flask application we used throughout the course inside it. Next, you have to upload that image to your Docker Hub account. Then, you have to create a container using the Flask application image that you've created. But the catch is that you can't use the image from your local computer; instead, the image has to be pulled from your Docker Hub account. Finally, once the container has been deployed, you have to delete the image locally on your computer. Once you have completed these four steps, congratulations, you have successfully passed this Docker capstone project. Keep in mind that solutions have been provided at the end of this PDF document, but try the project yourself first before moving on to the solution. Next, let's move on to the Kubernetes Capstone Project.