
Kubernetes and Docker: The Container Masterclass

Cerulean Canvas, Learn, Express, Paint your dreams!

103 Lessons (6h 51m)
    • 1. CMC Promo

      3:31
    • 2. Course Outline

      1:53
    • 3. How to make a web application?

      4:21
    • 4. Demo: Simple Web Application

      2:28
    • 5. A forest of VMs!

      2:08
    • 6. Hello Containers!

      5:08
    • 7. Hello Docker!

      1:34
    • 8. Demo: Installing Docker on Linux

      3:45
    • 9. Demo: Containerizing Simple Web Application

      2:25
    • 10. Stages of Containerization

      0:53
    • 11. How does Docker Work?

      3:51
    • 12. A quick look at the format of Dockerfile

      2:25
    • 13. Demo: Fundamental Instructions of Dockerfile

      5:48
    • 14. Demo: Configuration Instructions of Dockerfile

      5:29
    • 15. Demo: Execution Instructions of Dockerfile

      4:31
    • 16. Demo: Expose Instructions of Dockerfile

      4:15
    • 17. Demo: Miscellaneous Instructions of Dockerfile (Part 1)

      4:07
    • 18. Demo: Miscellaneous Instructions of Dockerfile (Part 2)

      9:26
    • 19. Demo: Docker Hub Walk-through

      4:06
    • 20. Understanding Docker Images

      3:01
    • 21. Demo: Working with Docker Images | Search, List, Push, Pull and Tag

      11:37
    • 22. Demo: Know your Docker Image | Inspect and History

      5:31
    • 23. Demo: Clean up Docker Images

      1:48
    • 24. A Container is born!

      1:52
    • 25. Container Life-cycle

      2:54
    • 26. Demo: Container Run Vs Create

      2:52
    • 27. Demo: Working with Containers | Start, Stop, Restart and Rename

      2:58
    • 28. Demo: Working with Containers | Attach and Exec

      1:44
    • 29. Demo: Inspect and Commit Container

      3:29
    • 30. Demo: Container Exposure | Container Port-mapping

      1:52
    • 31. Demo: Container clean-up | Prune and Remove

      2:01
    • 32. Multi-container Applications and Introduction to Networking in Docker

      2:41
    • 33. Container Networking Model (CNM) of Docker

      2:28
    • 34. Docker's Native Network Drivers

      4:05
    • 35. Demo: Create Docker Networks

      1:41
    • 36. Demo: Working with Docker Networks | Connect, Disconnect, Inspect & Clean

      5:01
    • 37. Demo: Ping one Container from another

      4:19
    • 38. Never lose a "bit" of your data!

      5:26
    • 39. Demo: Working with Volumes | Create, List and Remove

      3:33
    • 40. Demo: When Containers meet Volumes

      3:45
    • 41. Demo: Working with Bind Mounts

      2:35
    • 42. Demo: Hosting Containerized 2048 game!

      3:08
    • 43. Introduction to Docker Compose

      1:09
    • 44. Demo: Installing Docker Compose on Linux

      0:53
    • 45. Demo: Structure of Docker Compose file

      6:57
    • 46. Demo: Wordpress on Compose

      7:20
    • 47. Demo: Introduction to Docker Compose CLI

      2:51
    • 48. Introduction to Container Orchestration and Docker Swarm

      6:47
    • 49. Can Swarm handle failure?

      1:31
    • 50. Demo: VirtualBox installation

      1:29
    • 51. Demo: Docker Machine Installation

      0:37
    • 52. Demo: Setting up the Swarm Cluster

      2:22
    • 53. Demo: Initialising Swarm Cluster

      1:54
    • 54. Demo: Working with Swarm nodes | List and Inspect

      1:44
    • 55. Demo: Creating a Service on Swarm

      3:45
    • 56. Demo: Making a node leave your Swarm

      2:47
    • 57. Demo: Scaling and updating with Swarm

      3:25
    • 58. What about the more popular one?

      3:30
    • 59. Kubernetes: An origin Story

      1:49
    • 60. Kubernetes: Architecture

      5:30
    • 61. Demo: Bootstrapping Kubernetes Cluster on Google Cloud Platform

      19:35
    • 62. What are Pods?

      1:51
    • 63. How to operate Kubernetes? Imperative vs Declarative

      1:57
    • 64. Demo: Working with Pods: Create, analyse and delete (Imperative and Declarative)

      9:41
    • 65. Life-cycle of a Pod

      1:15
    • 66. Demo: Managing Pod's lifespan with Life-cycle Handlers

      3:04
    • 67. Demo: Adding Container's Command and Arguments to Pods

      3:27
    • 68. Demo: Configuring Container's Environment Variables with Pods

      4:33
    • 69. Labels, Selectors and Namespaces

      1:50
    • 70. Demo: Working with Namespaces

      3:47
    • 71. Demo: Pod Resource management

      4:34
    • 72. Kubernetes Controllers | Concept and Types

      0:54
    • 73. Introduction to Replicasets

      1:08
    • 74. Demo: Working with Replicasets

      6:41
    • 75. Introduction to Deployments

      1:05
    • 76. Demo: Working with Deployments

      4:37
    • 77. Introduction to Jobs

      1:15
    • 78. Demo: Working with Jobs

      3:02
    • 79. Introduction to Services and Service Types

      3:40
    • 80. Demo: Working with ClusterIP services

      3:45
    • 81. Demo: Working with NodePort Services

      3:34
    • 82. Introduction to Storage in Kubernetes

      2:33
    • 83. Demo: Mounting Volume to a Pod

      4:47
    • 84. Demo: Mounting Projected Volume to a Pod | Secrets

      4:01
    • 85. Demo: Good old MySQL Wordpress combination with Kubernetes

      7:47
    • 86. Blackrock Case Study

      1:34
    • 87. Node eviction from a Kubernetes Cluster

      2:33
    • 88. Demo: Rolling Updates | Rollout, Pause, Status Check

      3:52
    • 89. Introduction to Taints and Tolerations

      2:22
    • 90. Demo: Scheduling the Pods using Taints

      8:48
    • 91. Demo: Autoscaling Kubernetes Cluster using HPA

      3:33
    • 92. Demo: Deploying Apache Zookeeper using Kubernetes

      18:47
    • 93. Pokemon Go Case study

      2:40
    • 94. On-premise Kubernetes or Managed Kubernetes on Cloud? Make a choice!

      2:46
    • 95. Demo: Setting up Google Kubernetes Engine Cluster

      5:39
    • 96. Demo: Accessing GKE Cluster

      4:08
    • 97. Demo: Persistent Volume and Load Balancing on GKE

      6:49
    • 98. Demo: Kubernetes on Microsoft Azure Cloud

      11:55
    • 99. Demo: Extra - Docker UI with Kitematic

      8:37
    • 100. Demo: Extra - Minikube Series | Installing Minikube

      2:15
    • 101. Demo: Extra - Minikube Series | Getting started with Minikube

      10:20
    • 102. Christmas Greetings

      0:52
    • 103. Conclusion

      0:50

About This Class

Containers

Containers are like that smart chef who can feed a whole family with just a bowl full of rice, and that's not an exaggeration at all! Containers are empowering businesses to scale fearlessly and manage their web apps hassle-free. They are the prime reason why micro and small enterprises are migrating to the cloud. All of this has undoubtedly led to an enormous demand for professionals with containerization skills.

Which skills do you need?

  1. A platform to Create, Run and Ship Containers... like Docker.

  2. A strong tool to Control/Manage/Orchestrate your containers... like Kubernetes!

This course takes you on a wonderful journey of learning containers using key components of Docker and Kubernetes. All you need is very basic knowledge of Linux fundamentals like files and processes, along with a bit of the Linux command line.

The Containerization Journey with Docker:

Calling Docker the most widely used containerization platform would be an understatement; it has literally become synonymous with containers! The following topics, covered in this course, will solidify the basis of this statement.

  • You can only love a technology if you know how it works, and that's exactly why you will be learning Docker's architecture and how its components work.

  • At first glance, a Dockerfile might seem like just another file describing app specifications. That's because it is probably the simplest yet most efficient way to build an app from scratch.

  • The Docker CLI is intuitive and is inspired by your friendly Linux CLI, so adapting to it is a piece of cake!

  • Docker images and containers are the most portable and reliable way to ship your microservice or web application without worrying about questions like "will it work on their infrastructure?"

  • Once you are fairly familiar with containers, Docker networks and volumes will open up a whole new world of opportunities. Your containerization will become more reliable and will start serving its true purpose.

  • Docker Compose will combine all of this learning and take it to the next level with inter-dependent, multi-container applications.

Once you have learned all of this, you will be craving to know what else you can do with containers and how you can take your containerization skills to the next stage!

The Orchestration Journey with Swarm and Kubernetes:

"With Great Power, Comes Great Responsibility"

Similarly, with a great number of containers comes a greater amount of orchestration!

  • You want to deploy 4 nodes on your cluster but can only afford to have one SSD node, and you have to make sure that it only hosts containers which explicitly demand SSD. What to do?

  • You don't want idle containers chilling around your nodes serving not even 10% of their capacity, but you also want to make sure that your customers don't hit a 404 when traffic is at its peak. On top of that, you don't have the time or manpower to keep your number of web-server replicas in check. What to do?

  • You are a pro at on-premise Kubernetes, but your next project happens to be hosted on a public cloud platform like GCP or Azure. You're not scared, but a little push would help you a lot! What to do?

This course is a one-stop answer to all of these questions. It covers both Kubernetes and Docker Swarm and makes sure that you are confident and capable of making your call when the time comes!

Even though a container orchestrator is nothing without containers themselves, Kubernetes seems to be the biggest breakthrough in the world of DevOps. This course explains Kubernetes from the start. No, I mean LITERALLY from the start (the origin; it's an interesting story). It covers all of these important topics with examples, so that when you finish this course, you can use and appreciate containers as well as we do!

  • Kubernetes Architecture (Components, States, Nodes, Interactions)

  • Kubernetes Objects (Pods, Handlers, Workloads, Controllers, Services, Volumes)

  • Operations (Sorting, Configuration, Scheduling, Scaling, Deploying, Updating, Restricting)

  • Application Examples (all-time favorite NGINX web server, custom landing page, stdout logs, WordPress blog with MySQL, Apache ZooKeeper, etc.)

  • Kubernetes as a service (GCP, Azure)

  • Case studies (Blackrock, Niantic)

With that said, see you in the course!

NOTE: Course Codes Can be Downloaded from this Link

Happy Learning!

Transcripts

1. CMC Promo: Hi. Welcome to this Container Masterclass. Are you looking for a new or better job in DevOps? Are you interested in making a long-term career as a DevOps engineer? Do you think that containers, Docker and Kubernetes are the best skills to pick up? Well, we must say your choice is great. Containers are one of the most game-changing advances in technology. Industries all over the world are making their app development and deployment processes faster, cheaper and more reliable. At the same time, even small startups are not hesitating to scale, since the financial risk and resources required have lowered significantly. With such large-scale acceptance across the globe, containers have genuinely become a movement. As you might have guessed, this has also resulted in significantly increased demand and opportunities for professionals and certified experts with containerization skills like Docker and Kubernetes. That's why, if you look at Google Trends, you can easily tell that these technologies are showing no signs of stopping. So if you want to learn containers from the basics and take your skills to a professional level, you're at the right place, in the right hands. We are a group of experienced engineers, educators and certified experts on Docker and Kubernetes, and we have crafted this course to make sure that with just basic knowledge of Linux, you can comfortably learn the whole content. Speaking of the content of the course: Docker is the most popular containerization platform and Kubernetes is the most popular orchestrator, so it only makes sense for a masterclass to cover both. Starting from setups and Dockerfiles, this course covers everything, including Docker images, containers, networks, storage, Docker Compose and Docker Swarm. Once you have solidified your concepts of containers, you learn about the power of orchestration with Kubernetes. Without rushing at all, you learn Kubernetes architecture, workloads, services, volumes and a lot of orchestration tasks with interesting examples. You will feel a sense of accomplishment when you bring your web servers, a WordPress blog, your favorite game or even an Apache ZooKeeper cluster into tiny containers. You'll feel connected to the industry with real case studies of popular companies and products which use containers. In recent times, when everything is going to the cloud, how can we leave containers behind? You will learn how to take your knowledge to hosted Kubernetes on public cloud platforms like Google Cloud and Microsoft Azure. That's not all: quizzes will make sure you don't mix up syntax and semantics, cheat sheets will make command revisions fun and quicker, and certification guidelines will help you choose proper exams and determine practice directions. We also acknowledge that containers are a growing technology, so both Docker and Kubernetes are sure to have feature updates and new topics to learn. We will keep this course up to date to make sure you grow with containers as well. So what are you waiting for? Let's start our wonderful journey with the Container Masterclass.

2. Course Outline: Let's talk about the outline of the course. We will start off with an introductory section where we will cover the basics of web applications, containers and Docker. Then we will take a deeper look into the architecture of Docker and learn how to write Dockerfiles. At the end of the section, you will receive your first cheat sheet of this course. Then we will understand and work with Docker images and containers using the Docker command line. After understanding the container networking model and how containers communicate in different situations, we will implement different Docker networks and play around with them. Then we will take a look at different storage objects of Docker and create something using them, which will be both informative and fun. Once we're familiar with most of the Docker objects, we will take them to the next step, where we can create multiple resources from a single file using Docker Compose. Then we will understand what orchestration means and do some basic orchestration with Docker Swarm. We will make a close comparison between Docker Swarm and Kubernetes, and when you are capable enough to make your choice between both orchestrators, we will move on to Kubernetes architecture and understand how it works. Then we will take a look at Pods and other workloads of Kubernetes and perform a lot of orchestration for different applications. We'll also take a look at one of the most significant case studies of Kubernetes. We will see how to set up and use hosted Kubernetes on the cloud, with demos and a really unique case study, and finally we will conclude the course with insight on certification exams, what these learnings mean for you, and what kind of professional prospects could potentially open up for you. But that won't be the end of it; there will be a lot of upgrades and bonuses coming up regularly. Oh, and by the way, you can find all of the code, like YAML files and Dockerfiles, in the Resources section of this lecture. With that in mind, let's start learning.

3. How to make a web application?: Before we begin to understand and work with containers in general, it is useful to take a quick look at how we make web applications. Some of you might even ask: what is a web application? The term is quite widely used, but it is quite superficially explored. Just take a look at some of these examples: productivity tools like G Suite, social media giants like Facebook, video chatting applications like Skype, entertainment platforms like Netflix, payment services like PayPal, or even a learning platform like Udemy itself are all web applications in one way or another, which means you're using a web application interface at this very moment. If we have to define it, a web or web-based application is any program that is accessed over a network connection using HTTP rather than existing within a device's memory. Of course, the definition is flexible, and you may choose to use one protocol or another, but from a broader perspective, it is all about not using your own device, like a PC, tablet or mobile, for computing purposes. Instead, we let those mighty, costly and reliable servers do the heavy lifting, and we just access the result of our requested data from some web interface like HTTP. This has so many advantages which just can't be overlooked. First of all, the performance of the applications will not be determined or limited by the hardware they run on. It also means that we can almost say goodbye to those long lists of hardware requirements that we used to check before trying any new software; the requirements are still there, but they're quite standard. Web apps also improve speed. Now, you might think that speed is just another performance parameter, but here speed can refer to lag-free performance, faster updates and overall faster growth of the organization. In general, the speed is also representative of a shorter product development cycle, since the rollout of updates will be faster and user feedback can be taken and addressed quickly. As we just mentioned, since the hardware requirements to access such apps are fairly generic, like basic consumer devices with web-browsing capability, these applications can be accessed by a wider range of devices and by more and more consumers. In fact, many of the popular social media and utility apps also serve wearable devices. The policy of not owning but accessing the data also improves the overall security of both consumers and hosts, and all of it leads to a better app economy. It is not just about apps becoming cheaper after the rise of web apps; many revenue models, like freemium, pay-as-you-go and ad-based revenue generation, have grown significantly. Not only that, transactions have become more transparent on all ends: businesses, consumers and even governments. Finally, the nightmare of business makers, which used to haunt them for decades, has become quite a Disneyland. Yes, we're talking about scaling: companies don't have to invest in underutilized hardware; they can scale as they grow. Now that we have a fair idea of what web apps are and why we use them, let's get straight to business. There are three steps to the process of making web apps: first, make it, or build it, on a suitable environment; second, wrap or package it with the necessary support and instructions to ship or deliver it to the intended environment for the consumer; and finally, run it on your machine or host it on your server for others to access. In the next lecture, we will get started with creating web applications.
4. Demo: Simple Web Application: Let's install the NGINX web server on our local machine. The NGINX web server is the most vanilla example of a web application. For your information, we are running Ubuntu 16.04 on this machine. Now let's start by switching to root user privileges. As you can see, we have moved to root privileges. Now we can start our installation by, first of all, downloading the PGP, or Pretty Good Privacy, key for NGINX. The purpose of doing so is to make sure that when we install NGINX, the binaries are verified. The key has been downloaded. Now let's switch to the /etc/apt directory. With the ls command, let's list out the contents. We have a bunch of files here, but what we need is the sources.list file, so let's open sources.list with the Nano text editor. You can use any text editor you like, but in this course we will mostly stick to Nano. As you can see, this file contains a lot of links; these links are the sources where Ubuntu finds updates. At the end of the file, paste these two lines. These lines indicate the update path for the NGINX application when it gets installed and when we update it further in the future. Save the file and exit Nano. Just to make sure we don't have any dangling NGINX installation, run the apt-get remove nginx command. This command will make sure that any previously installed instance of NGINX is completely removed. Now let's run apt-get update to reflect the changes we have made in the sources.list file. Use the cd command twice to go back to where we started. Now let's install NGINX using apt-get install nginx. Once the installation is complete, we can verify it by going to a web browser and opening localhost on port 80. Well, the installation was successful; NGINX is running properly. This was an example of installing and running the most simple and vanilla web application: the NGINX web server.
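For reference, the narrated steps map to shell commands along these lines. The two repository lines are not shown verbatim in the transcript, so the standard nginx.org entries for Ubuntu 16.04 ("xenial") are assumed here; treat this as a sketch, not the course's exact script.

    # Fetch and add the NGINX PGP signing key so installed binaries can be verified
    wget https://nginx.org/keys/nginx_signing.key
    sudo apt-key add nginx_signing.key

    # Append the NGINX package sources to /etc/apt/sources.list
    # (assumed standard nginx.org lines for Ubuntu 16.04 "xenial")
    echo "deb http://nginx.org/packages/ubuntu/ xenial nginx" | sudo tee -a /etc/apt/sources.list
    echo "deb-src http://nginx.org/packages/ubuntu/ xenial nginx" | sudo tee -a /etc/apt/sources.list

    # Remove any dangling NGINX install, refresh the index, then install
    sudo apt-get remove nginx
    sudo apt-get update
    sudo apt-get install nginx

    # Verify: the default welcome page should be served on port 80
    curl http://localhost:80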
5. A forest of VMs!: We have seen the advantages of web apps and how great they are, but it doesn't mean that this coin doesn't have a flip side. There are just so many web apps available on the marketplaces. There are so many clones of some really good ideas, and also many clickbait applications which turn out to be nothing but endless ad boards. And unfortunately, even that market is showing no signs of stopping at all. And while the liberty of choosing the app is still in the consumer's hands, all of these apps are being hosted, and they're generating traffic, occupying physical memory and storage in some of the data centers. While working with VMs, it is pretty common to have issues where the application was working smoothly in the developer's environment, but it was a train wreck on the office machine, or even worse, it crashes on the client machine. Since we have transitioned from waterfall to Agile, and from Agile to DevOps models, updates are rolling out faster than ever. If you are unaware of these models, just ask yourself this: how often did you receive updates for software ten years ago, and how often a year do you update the Facebook app on your mobile? While faster updates are good for businesses and consumers, they bring huge responsibilities on sysadmins to make sure none of the updates compromises the stability of the app, and to reduce downtime as much as possible. We end up using even more VMs. All of these internet-enabled applications, and the rise of data science, are generating huge amounts of data and populating thousands of servers every day with databases, so overall the usage of VMs has just increased significantly due to the adoption of web apps and microservice models. And, as you might have imagined, it has resulted in nothing but forests of servers all around the globe.

6. Hello Containers!: Containers are an abstraction at the application layer which packages code and dependencies together. Let's reiterate and expand this definition further. Containers are an abstraction at the application layer which packages code and dependencies together. It means that instead of just shipping the applications, containers ship the application's runtime environment as well, and they still manage to remain small in size. How? Let's compare them architecturally with VMs. In a traditional VM architecture, we have a hypervisor like Hyper-V or KVM on top of the hardware infrastructure. These are also called type-1 hypervisors, since they don't need a host operating system. The guest OSes are provisioned on top of the hypervisor, and they acquire their isolated virtual environments. In some cases, we get a type-2 hypervisor, like Oracle's VirtualBox, where we do need a host operating system, and the rest of the stack lays out pretty much the same. And this is how VMs function, in a very broad sense. Coming back to containers: the biggest difference compared to VMs is that they don't have guest operating systems; a container runtime environment is used instead of a hypervisor. What is it, you may ask? For now, let's say it is software which manages and runs containers. Containers contain the application code and the dependencies, as we have just seen. The dependencies don't only mean external or third-party libraries; they also mean OS-level dependencies. The logic behind such an implementation is that all of the Linux variants share the same Linux kernel, well, more or less, so there is no point in duplicating the same set of files over and over in multiple VMs if all containers can just access them in their own isolated environments. With that said, what about the files which are uncommon, or, to be precise, the files which are specific to the OS? Well, containers will contain them along with the application. And since the processes of making the containers and running them are done by the same container runtime environment, there will be no conflict of environment. If this information is too sudden for you, don't worry; the intention of mentioning all of this is just to let you know how containers can attain the same level of isolation as VMs while sharing resources with the host OS instead of duplicating them. And what happens because of that? Well, containers consume less storage and memory. Without stretching the facts at all, gigabytes literally turn into megabytes. This way, shipping them is easier as well: we don't ship whole VMs or a long list of instructions; we just ship ready-to-run containers. And since all of the necessary dependencies are also packed with the containers, if it worked on the developer's environment, it will work on your machine as well. Since we have reduced the resources, scaling becomes easier and cheaper. Even if you need to create 10 more replicas of a backend container, you probably won't have to spend money on buying or renting a new server. In fact, if you need to roll out updates, you can still keep your applications running by extending your number of replicated containers, and you may achieve zero downtime. All of this sounds attractive and groundbreaking, but if we relate this to industries which are actually using containers: well, Google pioneered using orchestrated containers years ago when they started facing overwhelming amounts of data. These days, companies like Expedia, PayPal and GlaxoSmithKline are voluntarily providing themselves as references and case studies. Apart from them, educational institutions like Cornell University and gaming giants like Niantic, which became a huge success after Pokemon Go, are all using containers. Companies are gradually migrating to containers. As many of you might already know, DevOps jobs are increasing rapidly, and containers are an essential part of the whole DevOps movement. In the next lecture, we will finally introduce ourselves to Docker and get started with learning it.

7. Hello Docker!: It is time that we get started with the key player of our course: Docker. Docker is an open platform for developers and sysadmins to build, ship and run containerized applications. In other words, it is a containerization platform. Is Docker the only platform of its kind? Well, no. Certainly there are others, like rkt, but Docker is definitely the dominant one at the time this course is being created. Docker is tried and tested, and it is the top choice of the industry unanimously. It means that if you want to sharpen your containerization skills, Docker is potentially the best choice for various reasons, such as: more industries are using it, so it can land you more relevant jobs; it is open source and has huge community support; a lot of third-party applications are available to support Docker; and although it is built for Linux, it can be used on Windows and macOS, for those who just don't have any other choice. There are other aspects as well, but there is no point in flooding your head with information which you might not be able to relate to; we will get into those later in this course. In the next lecture, we will install Docker on a Linux machine.
8. Demo: Installing Docker on Linux: In this demo, we will install Docker on Ubuntu 16.04, or Ubuntu Xenial. Let's start off by running a standard apt-get update command. Once we're done with that, let's install some of the prerequisites, such as apt-transport-https, to make sure that our machine can communicate through HTTPS; CA certificates; curl; and software-properties-common, which contains some of the Golang objects which will be used by Docker. And the installation is successful. Now let's download the GPG key for Docker and add it to our machine, and to make sure that we don't get a long list of processes which happen in the background, let's use the -fsSL flags to keep the output short. And it shows OK, which means we got our GPG key. Let's verify this key using the sudo apt-key fingerprint command. We can verify that we have received the correct key by searching for the last eight characters of the fingerprint, which should be 0EBFCD88. This information is provided by Docker itself, so there is not much for you to figure out. And yes, our key does have those characters as its last eight digits. Now run this command to add a repository called stable with the contents of download.docker.com/linux/ubuntu in it. We have provided the lsb_release -cs flag to make sure that Docker provides the correct files, which means files for Ubuntu Xenial, or Ubuntu 16.04, to our stable repository. Let's run the update again to reflect the changes, then sudo apt-get install docker-ce to finally install Docker. The -ce suffix stands for Community Edition, which is one of the two editions provided by Docker. The other one is called Enterprise Edition, which is not free, so we won't be including it in this course. The process has ended, and we have successfully installed Docker CE, or Docker Community Edition. Verify that the installation is successful by running the sudo docker run hello-world command. This will run a container called hello-world, which would only be possible if the Docker installation was successful. You don't have to pay much attention to the processes which are going on, because we will be exploring all of them in sufficient depth in further modules. As it says, our installation appears to be working correctly. You may have noticed that we have been using root privileges over and over. To make sure that you can run Docker from your regular user as well, let's perform a few more steps. First, let's add a group called docker using sudo groupadd docker. Now let's add our user to this docker group and provide it root privileges. Now let's try to run the hello-world container without root privileges, with just the docker run hello-world command, and we get the same results.
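Condensed into commands, the installation narrated above looks roughly like this; a sketch of the then-current Docker CE procedure for Ubuntu 16.04:

    sudo apt-get update
    sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

    # Download Docker's GPG key quietly (-fsSL) and add it to apt
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo apt-key fingerprint 0EBFCD88    # fingerprint should end in 0EBFCD88

    # Add the "stable" repository; lsb_release -cs resolves to "xenial" on 16.04
    sudo add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

    sudo apt-get update
    sudo apt-get install docker-ce

    # Verify the installation
    sudo docker run hello-world

    # Let the regular user run Docker without sudo
    sudo groupadd docker
    sudo usermod -aG docker $USER
    docker run hello-world    # after logging back in, works without root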
9. Demo: Containerizing Simple Web Application: In the first demo, we installed and ran NGINX on Ubuntu 16.04 locally. In the demo after that, we installed Docker. You might find a pattern here, and you might have been able to figure out that in this demo we are going to run NGINX as a Docker container. Unlike the hello-world container, we will do this in a bit more elaborate way. Let's start with pulling an image called nginx:latest from Docker Hub's NGINX repository by running the command docker image pull nginx:latest. This will download, or pull, an image called nginx with the latest tag, which can later be run as a container. Let's see if we have got our image; run the docker images command to show the list of images, and here we go: we have two images. The first is hello-world, which we used in the last demo, and the second is nginx, which we are using in this demo. Both of them have a tag called latest, and they have different sizes. Now let's run this image as a container using the docker container run command, followed by the -itd flags, and name our container webserver-nginx. With the -p option, we are mapping port 8080 of our local machine to the container's port 80, and finally we are mentioning the image name, nginx:latest, which we have just pulled. What we get is the container ID of the NGINX container. I know all of this terminology sounds pretty new and pretty abrupt, but don't worry: in this demo, our only purpose is to run NGINX successfully; we will go through all of these terms in sufficient detail when the time arrives. Let's verify that our container is running by running the command docker ps -a, and as you can see, the webserver-nginx container is running, built upon the image called nginx:latest. Finally, let's see the output of this container by going to the web browser and opening our localhost on port 8080, and it works successfully.
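For quick reference, the handful of commands used in this demo; the container name follows the narration (hyphenation of webserver-nginx is an assumption):

    # Pull the image from Docker Hub's nginx repository
    docker image pull nginx:latest
    docker images                        # hello-world and nginx should be listed

    # Run it detached (-itd), mapping host port 8080 to container port 80
    docker container run -itd --name webserver-nginx -p 8080:80 nginx:latest

    docker ps -a                         # confirm webserver-nginx is up
    curl http://localhost:8080           # default NGINX welcome page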
10. Stages of Containerization: In the previous module, we got introduced to containers and ran an instance of one. In this section, we will dig deeper into the process of containerization with reference to Docker. Before understanding Docker in detail, it will be efficient to briefly visit a few terms: Dockerfiles get built, Docker images get shipped, and containers are run. You can consider a Dockerfile as the blueprint of a Docker image. If you remember well, we have already come across a Docker image and a Docker container in our NGINX container demo. So now that you know all three of these formats, definitely not in detail, but at least vaguely, we can move on to the architecture of Docker and come back to these formats later.

11. How does Docker Work?: Now, the natural progression of the talk would be: how does Docker work? The Docker ecosystem has a number of offerings, where some of them are more useful than others. We will begin with Docker Engine, also known as Docker in general, and we'll look at other important ones as we move further with this course. Let's take a look at the architecture of Docker. Docker, and the whole process of containerization, revolves around three main components: the Docker client, the Docker host and the Docker registry. The Docker client is the machine, or medium, through which we as users interact with Docker. The two basic ways of interaction are the Docker CLI, which stands for command-line interface, and the Docker APIs, which stands for application programming interfaces. Commands can be used directly from a client terminal, whereas APIs can be used to make applications talk to Docker. As we have seen in our earlier demo, both docker pull and docker run are commands covered under the Docker CLI; we'll explore more such commands as we cover further topics. The Docker host is the machine which actually performs the task of containerization. It runs a program, or piece of software, called the Docker daemon, which listens to and performs actions asked for by the Docker client. The Docker daemon builds a Dockerfile and turns it into a Docker image. Dockerfiles and Docker images can directly communicate with the Docker daemon: either images can be built from a Dockerfile, or they can be pushed to or pulled from Docker Hub. In any case, the task is performed by the Docker host using the Docker daemon. Docker images can also be run as containers. Containers can communicate with the Docker daemon via Docker images; in other words, any changes made to the container are also reflected on the Docker image temporarily. We'll explore these parts individually soon enough. It's possible that the Docker client and the Docker host are actually the same machine as well, but the function of the Docker client as a piece of software is limited to passing the user input and displaying the output provided by the Docker host. To my mind, the Docker registry is the simplest component of the Docker architecture: it serves as a place to store Docker images and to make them available to others. The NGINX image which we used earlier in our demo was pulled from the Docker registry. The Docker client talks to the Docker daemon bidirectionally, where it passes requests and receives results, whereas the Docker daemon and the Docker registry can talk bidirectionally to push and pull images. Let's sum up all three components of the Docker architecture. First of all, we have the Docker client, which passes requests through the Docker CLI and APIs and receives results to be displayed. Then we have the Docker host, which runs the Docker daemon and works with Docker images and containers. Finally, we have the Docker registry, which acts as a universal place to access available Docker images. Now we can go back to those three formats which we saw earlier: Dockerfiles, Docker images and containers, which respectively represent build, ship and run. In the next lecture, we will take a detailed look at how Dockerfiles work.

12. A quick look at the format of Dockerfile: We can now go back to the three formats which we saw earlier: Dockerfiles, Docker images and containers, which respectively represent build, ship and run. First, let's focus on the Dockerfile. It is a sequential set of instructions intended to be processed by the Docker daemon. The availability of such a format replaces a bunch of commands intended for the build-up of a particular image; it helps keep things organized. With time, it has also turned out to be the primary way of interacting with Docker and migrating to containers in general. As for how it works: each sequential instruction of a Dockerfile is processed individually, and it results in a file which acts as a layer of the final Docker image which will be built. A stack of such sequential layers, managed by a file system, becomes a Docker image. The purpose behind this is to enable caching and ease up troubleshooting. If two Dockerfiles are going to use the same layer at some stage, the Docker daemon can just reuse the pre-created layer. Now, let's look at the structure used for writing Dockerfiles. Firstly, it is a file with no extension at all, and the general rule of thumb is to name the file Dockerfile, with the D capital and no extension. You can use any text editor to create the file; just make sure you don't put an extension. The purpose behind doing so is to make the file compatible to parse for the auto-builders used by Docker to build the images, although it is not an ironclad rule, and you can name the Dockerfile according to your convenience as well, which we will look at in future demos. What you see inside the Dockerfile are instructions to be parsed, and the instructions can be generally divided into three categories: fundamental, configuration and execution instructions. In the next lectures, we will write our first Dockerfile and understand these instructions one by one.
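As a bare illustration of the format (not one of the course's own files), a Dockerfile is just such a sequence of instructions, each producing one cached layer:

    cat > Dockerfile <<'EOF'
    # Each instruction below is processed in order and becomes a layer
    FROM ubuntu:16.04
    RUN apt-get update -y
    CMD ["bash"]
    EOF

    docker build -t img_demo .    # the trailing dot points Docker at this directory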
13. Demo: Fundamental Instructions of Dockerfile: Let's write our first Dockerfile and understand its fundamental instructions. Let's see what our current working directory is. We are in our user's home directory. It is quite likely that you would also be in a similar location. Once you have downloaded the material provided with the lecture notes and unzipped it, you should also have a directory called CC_Docker, where C, C and D are capital. We're only looking one level deep with tree in our present directory, and if tree is not available on your machine for some reason, you can verify the CC_Docker directory simply using the ls command. Now let's navigate to the CC_Docker directory. Just to get you familiar with the structure of the directory: you will find one directory for each segment, or module, and subdirectories for the respective demos. If you don't intend to write the files by yourself while learning, you can simply use the appropriate files for each demo and run the results. Let's go further, to the S2 directory, which contains all of the required code and files for this segment. We are in S2 at the moment. Finally, let's navigate to the directory named D1 and verify that we are at the right place. Further, let's create an empty Dockerfile with the touch command. I'm creating this file because I want to show you, step by step, how to write a Dockerfile, but you will find a pre-made Dockerfile in the directory. We're using Nano as the text editor, but again, you're free to choose the one you are comfortable with. And with this, let's open the empty Dockerfile and start writing it. The first instruction that we're providing is ARG. ARG is used to define the arguments used by the FROM instruction. Although it is not necessary to use ARG, and not using it does not cause any harm to the resulting image directly, sometimes it helps keep parameters such as versions under control. Here we have defined the argument CODE_VERSION=16.04, which means that we are going to use something which will have the code version 16.04. In a very rough sense, you can treat it as a declarative directive in general programming, such as a macro. But again, this argument will only be relevant for the FROM instruction. And next is the FROM instruction. FROM is used to specify the base image for the resultant Docker image that we intend to create. In any case, the FROM instruction must be there in any Dockerfile, and the only instruction that can be written before it is ARG, which we just saw. Generally, FROM is followed by an operating system image or an application image which is publicly available on Docker Hub. Here we want to have Ubuntu as our base operating system image, with code version, or OS version, 16.04. So the name of the image is followed by a colon, and the argument is mentioned in curly braces, preceded by a dollar sign. As we have already mentioned in our ARG instruction, our code version is 16.04, so it will be passed as an argument, and the base image for this Dockerfile will be considered as Ubuntu 16.04. To add a little more substance to the image, we are also including a set of RUN and CMD instructions, but we will explore their meanings and applications in the next demos. For now, let's just save this file. Again, it is important to remember that we must not give any extension to the Dockerfile and should mostly name it Dockerfile itself. It is time to build the Dockerfile and turn it into an image. Let's do it with the docker build command. The -t option is used to tag the image, or in other words, name the image to make it easily recognizable. We'll tag the image as img_from, and the dot at the end directs Docker to the Dockerfile stored in the present directory. As you can see, the image is being built up step by step. Let's understand each of these steps. The first step, storing the argument, was fairly simple, so it finished quickly. The second step involves setting up the base image, and it does so by pulling multiple filesystem layers from Docker Hub and stacking them in the proper hierarchy. Once it is complete, it moves to the third step, which is to update the OS, and we have already provided the permission with the -y flag, where y stands for yes. Once these steps are done, our image is built. We can verify that the image is built via the docker images command. As you can see, we have four Docker images, among which img_from is the one which we created recently, meaning 11 seconds ago, while the others were previously created or pulled.
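A sketch of the Dockerfile written in this demo; the argument and image names follow the narration (CODE_VERSION, img_from), and the RUN/CMD lines are the minimal ones described:

    cat > Dockerfile <<'EOF'
    # ARG is the only instruction allowed before FROM;
    # it defines the build-time argument that FROM consumes
    ARG CODE_VERSION=16.04
    FROM ubuntu:${CODE_VERSION}

    RUN apt-get update -y
    CMD ["bash"]
    EOF

    docker build -t img_from .    # -t tags (names) the image
    docker images                 # img_from should appear at the top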
14. Demo: Configuration Instructions of Dockerfile: In this demo, we'll go a step forward with writing Dockerfiles and explore configuration instructions. Again, we're in the S2 directory, which contains an individual directory for every demo. Let's navigate to the directory called D2. There we go. As you can see, there is a Dockerfile already present in this directory. Let's open it with Nano. As you can see, this Dockerfile also has a base image of Ubuntu 16.04, mentioned using the FROM instruction as described in the previous demo, but this time we have skipped using the ARG instruction and directly provided the version number. Now we have RUN and ENV, which are configuration instructions. They are not the only entries in the list of configuration instructions, but they are the ones that we will cover in this demo. Let's go through them one by one. RUN asks Docker to execute the command mentioned with it on top of the base image, and the results are committed as a separate layer on top of the base image layer. Here we have more than one mention of RUN, and each one creates its own separate layer. With the first RUN instruction, we have provided commands to update the OS, install curl, and clean up afterwards, whereas the second RUN simply makes a directory named codes under the home directory. Don't confuse it with our host machine's home directory; here we're talking about the base image OS's home directory, and the codes directory will be created in that base image, not on our host machine. Then we have used ENV, which is another configuration instruction. It does what its name suggests: it sets up environment variables. We have used it three times, to set the USER, SHELL and LOGNAME environment variables. Just like the previous demo, we have used CMD, but we'll go into that later. Again, we will use the docker build command to build this image, but this time we will tag it as img_run_env, to separate it from the previous image. As you can see, in this build the first step directly involves setting up the base image, since we have skipped the ARG instruction. Step two will perform all of the commands used in the first RUN instruction, and step three will perform the command of the second RUN instruction, which is making a directory. Steps four, five and six will set the environment variables as mentioned in the Dockerfile, and the super-fast step seven will get our image ready to run. Let's list out our available images with the docker images command. These images are the ones currently available on the host; our top image is the img_run_env image. Now let's go one step further and run this image as a container with the docker run -itd command. The i, t and d represent interactive, teletype-enabled and detached, respectively. We're naming the to-be-running container cont_run_env, and the target image is img_run_env, which we have just created. The command was successful, and we have received the unique container ID provided by Docker for our container. Here we have two containers running, among which the first is the one which we ran recently: it is up, meaning running, for five seconds, and it is running the bash command. Now let's execute our container's bash command. The bash process was running in the background due to the detach flag set while running the container; now we are bringing it to the foreground. As you can see, we're now in the root directory of our cont_run_env container. Let's list out the directories here. Yes, the structure looks similar to a regular Linux instance. Now let's verify the environment variables which we had set with the ENV instruction while writing the Dockerfile. As you can see, the USER, SHELL and LOGNAME variables are just as we had set them. Now let's navigate to the home directory. As we list it out, we can also verify the creation of the codes directory, which was supposed to be created by the RUN instruction of the Dockerfile. Finally, we can get back to our host environment by exiting the container using a simple exit command.
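Roughly, the Dockerfile for this demo; the three variable values aren't spelled out in the transcript, so the values below are illustrative:

    cat > Dockerfile <<'EOF'
    FROM ubuntu:16.04

    # Each RUN is committed as its own layer on top of the base image
    RUN apt-get update && apt-get install -y curl && apt-get clean
    RUN mkdir /home/codes

    # ENV sets environment variables inside the image (values illustrative)
    ENV USER cerulean
    ENV SHELL /bin/bash
    ENV LOGNAME cerulean
    CMD ["bash"]
    EOF

    docker build -t img_run_env .
    docker run -itd --name cont_run_env img_run_env
    docker exec -it cont_run_env bash    # then: echo $USER $SHELL $LOGNAME; ls /home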
15. Demo: Execution Instructions of Dockerfile: We are back in our S2 directory. Let's navigate to directory D5 and list out its contents. We have a Dockerfile for this demo stored here. Open it in a text editor; we see multiple new instructions. We have been using CMD a lot in the previous demos, but we will dig deep into it in this demo. Let's start with the most basic yet important instruction: FROM will set ubuntu:trusty as the base image for this Docker image. LABEL is a key-value pair which adds metadata to the image. We have added two labels as key-value pairs in a multi-line argument for the LABEL instruction: the creator key has the value Cerulean Canvas, while the version key has 1.0. The next one is a RUN instruction, which will update the package list of the base image in a non-interactive manner. Then we have ENTRYPOINT. As the name suggests, ENTRYPOINT allows the user to configure the container's starting point. In other words, ENTRYPOINT will bring the container back to the starting point whenever a container is set to restart. For this Docker image, the entrypoint is defined in exec form, which is also the preferred one: it will execute ping five times when the container starts running. Last but not least is the CMD instruction. We have seen so far that CMD provides the default command to the executing container, but if ENTRYPOINT is mentioned in the Dockerfile, then CMD will always be executed after the entrypoint. When CMD is defined in exec form and does not contain the executable, it will be treated as a parameter of the ENTRYPOINT instruction. There can be multiple CMD instructions in a Dockerfile, but only the last CMD instruction will be in effect. For this Docker image, the CMD instruction is in exec form without an executable, which means that it will provide localhost as the parameter for the executable of ENTRYPOINT, which is ping. If we sum up ENTRYPOINT and CMD here, we have set the container to ping localhost five times as soon as it is up and running. Let's exit the Dockerfile and build our image. Sequentially, we will build the Docker image based on the Dockerfile in the current directory and tag it as img_entry_cmd. The build context is sent to the Docker daemon, and it will download the ubuntu:trusty image from Docker Hub to our local Docker storage. Now the base image has been downloaded, and it is running in an intermediate container to build the ubuntu:trusty environment for our application. Step two will create the labels for our Docker image. Step three will execute the RUN instruction, which will update the ubuntu:trusty base image and commit the result in a new intermediate container. Step four will set the starting point of the container at /bin/ping, and the last step is the CMD instruction, which will provide localhost as the parameter to the entrypoint to execute at the start of the container. At the end, all the layers will be stacked sequentially by the Docker daemon, and the final img_entry_cmd image will be created with its image ID and the latest tag. Let's check out the list of images available in our local Docker storage with the docker images command. As we can see, img_entry_cmd:latest has been built and stored in our local Docker storage. It's time to run a container based on that image. Type docker run --name cont_entry_cmd, followed by img_entry_cmd, and press enter, and here we go: the container is pinging our localhost as per the ENTRYPOINT and CMD instructions, and it has successfully pinged localhost five times. Five packets have been transmitted and received successfully without any packet loss, which means our application is running perfectly. Now let's check the status of cont_entry_cmd with the docker ps -a command. As we can see, the container has exited with exit code zero after finishing its default task, which means the container's execution was successful.
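The Dockerfile as described, reconstructed as a sketch:

    cat > Dockerfile <<'EOF'
    FROM ubuntu:trusty

    # LABEL adds metadata as key-value pairs (multi-line argument)
    LABEL creator="Cerulean Canvas" \
          version="1.0"

    RUN apt-get update -y

    # Exec-form ENTRYPOINT: the container's fixed starting point
    ENTRYPOINT ["/bin/ping", "-c", "5"]

    # Exec-form CMD without an executable becomes ENTRYPOINT's default parameter
    CMD ["localhost"]
    EOF

    docker build -t img_entry_cmd .
    docker run --name cont_entry_cmd img_entry_cmd   # pings localhost 5 times
    docker ps -a                                     # exited with code 0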
16. Demo: Expose Instructions of Dockerfile: Let's navigate to the D6 directory and list out all of its contents. We have a Dockerfile for this demo available here. Open the Dockerfile in the text editor. As we can see, it contains four Docker instructions. The FROM instruction will set ubuntu:16.04 as the base image for this Docker image. The next instruction is RUN, which will update the image and install NGINX on the Ubuntu 16.04 base image. We have chained the subcommands of the RUN instruction with the logical AND operator, which means that in order to run the second subcommand, the first command should be a success. If we consider the sequence here: apt-get update on the base image should be a success in order to install NGINX, and after the NGINX installation, rm -rf /var/lib/apt/lists/* will clear up the local repositories of retrieved packages. The next instruction, EXPOSE, is a type of documentation which informs Docker about the port on which the container is listening. Keep in mind that it does not publish the port, but it fills the gap between the Docker image builder and the person who runs the container. We have documented with the EXPOSE instruction that this NGINX container will listen on port 80. The CMD instruction will make the NGINX application run in the foreground by turning off NGINX's daemon behavior. Exit the Dockerfile. Build the Docker image with the docker build command from the Dockerfile available in the present directory, and tag it as img_expose. The build context is sent to the Docker daemon. As we already have the Ubuntu 16.04 image in local Docker storage, the Docker daemon does not download it again; it is cached. In step two, the chained RUN instruction is executed one part at a time: first, it updates the package index of the base image, Ubuntu 16.04; after successfully updating the image, NGINX is installed on the base image; and at the end, the local repositories of retrieved packages are cleared up. Step three is to expose port 80 of the container, in order to inform Docker that the NGINX app will listen on port 80. The last step is setting up the default command, CMD, which will set the NGINX app as the foreground process in this container. Our image has been successfully built and tagged as img_expose. Let's list out all the images in our local Docker storage. There we go: img_expose has been successfully created and stored. Let's run a container based on the img_expose image. Type docker run -itd --rm; the rm flag will automatically remove the container once it has stopped. Follow it with the container name cont_expose, followed by -p 8080:80, which means map the container's port 80 to host port 8080 in order to access the NGINX service, and finally give the image name, which is img_expose. Press enter, and we get a container ID. Let's list out all the running and stopped containers with the docker ps -a command. Our cont_expose is up and running for seven seconds. The container's port 80 has been mapped to port 8080 of the host so that we can access the NGINX web server in our favorite web browser. Now go to your favorite web browser, mine is Chrome, type http://localhost:8080 in the address bar, press enter, and we can see the default home page of the NGINX web server.
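A sketch of this demo's Dockerfile and run commands:

    cat > Dockerfile <<'EOF'
    FROM ubuntu:16.04

    # Chained with &&: each part runs only if the previous one succeeded
    RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*

    # EXPOSE documents the listening port; it does not publish it
    EXPOSE 80

    # Keep NGINX in the foreground so the container stays alive
    CMD ["nginx", "-g", "daemon off;"]
    EOF

    docker build -t img_expose .

    # --rm removes the container once it stops; -p publishes port 80 on host 8080
    docker run -itd --rm --name cont_expose -p 8080:80 img_expose
    docker ps -a
    curl http://localhost:8080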
17. Demo: Miscellaneous Instructions of Dockerfile (Part 1): Let's have a reality check, or rather a pwd check. All right, we are in the demo 8 directory, and as always, let's list out the components. We can see two Dockerfiles. Now, before you raise your eyebrows with a ton of questions like "Why do we have two Dockerfiles in one directory? Isn't that a bad practice? Wouldn't Docker get confused?", allow me to clear a few things up. There definitely can be more than one Dockerfile in a repository or a folder, but they cannot both be named Dockerfile. Firstly, your OS won't allow that, so there is not much to argue. And secondly, naming it Dockerfile has just one purpose: making the image-building command smaller using Docker Hub's auto-builder. If we simply have files with different names which are essentially Dockerfiles, Docker won't bother about it; it will simply build the file we mention. With that out of the way, let's have a look at these files. We have a child and a parent Dockerfile, so let's give proper respect to the parent and review it first. All right, so this is a Dockerfile, and the write-up is pretty simple: we just have three instructions, among which two are fairly familiar to you. The middle one is a new entry on our learning curve: the ONBUILD instruction. Its purpose is pretty simple: it allows us to specify a command which will be passed on to the next image that uses this image as its base image. Sounds confusing? Well, take this example. We have Ubuntu 16.04 as our base image, and we will create some image from this Dockerfile. Now, if that image is used as the base image of another Dockerfile, it will be just like Ubuntu 16.04, since CMD can be overwritten by the next Dockerfile's CMD or ENTRYPOINT instruction. So if we want to have some changes persist while using this image as a base image, like having a file called greetings.txt created in the tmp folder, we need to use the ONBUILD instruction. We are echoing the sentence "Greetings from your parent image" into /tmp/greetings.txt and expecting it to exist whenever we use the image created from this Dockerfile as a base image. With that clear in our heads, let's exit this file. Now let's open the child Dockerfile. We just have two instructions. The first one mentions the base image called papa-ubuntu:latest. Where did that come from, you may wonder? It is the name of the image which we will soon build. And we're running bash with the CMD instruction. Ideally, we want papa-ubuntu's greetings.txt to be visible in this image. Now let's build the parent image using the docker build -f command, followed by the name of the Dockerfile, the target image name, and a dot to indicate the present directory. Similarly, let's build the baby-ubuntu image from the child Dockerfile. Check this out: during the first step of setting up the base image, it is executing a build trigger which has been inherited from the ONBUILD instruction of the base image's Dockerfile. Let's see if both of our images are listed or not. Yes, they are. Run a container from the baby-ubuntu image and name it baby-container. When we execute this container, we head straight to the root of its base image's Ubuntu OS. Let's navigate to the tmp directory using cd and see if greetings.txt is present. Yes, it is here. We can also cat it and verify its content, which is the same that we had echoed into it. We can exit this container, since our ONBUILD demonstration is successful.
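A sketch of the parent/child pair; the image and file names follow the narration (papa-ubuntu, baby-ubuntu), with the Dockerfile file names assumed:

    cat > parent-dockerfile <<'EOF'
    FROM ubuntu:16.04

    # ONBUILD defers this RUN to whichever image uses this one as its base
    ONBUILD RUN echo "Greetings from your parent image" > /tmp/greetings.txt
    CMD ["bash"]
    EOF

    cat > child-dockerfile <<'EOF'
    FROM papa-ubuntu:latest
    CMD ["bash"]
    EOF

    # -f selects a Dockerfile that is not literally named "Dockerfile"
    docker build -f parent-dockerfile -t papa-ubuntu .
    docker build -f child-dockerfile -t baby-ubuntu .   # ONBUILD trigger fires here

    docker run -it --name baby-container baby-ubuntu
    # inside the container: cat /tmp/greetings.txt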
If we simply have files for different names, which are essentially Dr Files, Doctor won't bother about it. It will simply build the file we mentioned with that. Out of the way, let's have a look at these files we have Child and Parent Docker file. So let's give proper respect to the parent. Henry, you at first. All right, so this is a docker file and the right up It's pretty simple. We just have three instructions among which do are fairly familiar to you. The Middle window is a new entry on our Learning Co we have on build instruction. Its purpose is pretty simple. It allows us to specify a command which will be passed on to the next image that will use this image as its base image sounds confusing. Well picked this example. We help open to 16.44 as our base image, and we will create some image from this docker file. Now, if that image will be used as base image off another doctor file, it will be just like 1 to 16.4 since CMD can be over written by next docker file CMD or entry point instruction. So if we want to help some changes persisting while using this image as based image like having a file called greetings dot txt, created in the temp folder we need to use on bill instruction, we are a coined the sentence Greetings from your parent image toe, TMP slash greeting start txt and expecting it to exist whenever we used the image created from this doctor file as base image with that clear in our head. Let's exit this file now Let's open child Docker file. We just have to instructions. 1st 1 mentions the base image called Papa Open Do latest and read it that come from. You may wonder it is the name off the image which we will soon build and we're running Bash with CMD instruction. I really we want Papa Bentos greeting start txt to be visible in this image. Now let's build the parent image using docker build hyphen F common, followed by the name off Docker File Target Image name and adopt to indicate the president directory. Similarly, let's build baby open toe image from Child Docker file. Check this out during first step off setting up base image. It is executing a bill trigger, which has been inherited from on bill instruction off base images. Docker file. Let's see if both of her images are listed or not. Yes, they are John, a container from baby open toe image and name it baby container. When we execute this container, we had straight to the root off its base images open toe us. Let's navigate to TMP director using CD and see if greeting start. Txt is present. Yes, it is here. We can also cap it and verify its content. Which is to seem that we had a court into it. We can exit this container since our on bill demonstration is successful. 18. Demo: Miscellaneous Instructions of Dockerfile (Part 2): Welcome to the Conclusive Lecture Off Docker file section. In this lecture, we will achieve three objectives. Understand and implement container health Check using docker files do the same with stop signal instruction, and while we are added, we will also contain eyes a sample flask application. As always, we will start by knowing our present working directory, which is the moon nine under CMC. If we checked the list of components, we helped three files this time. Apt Art by docker file and requirements start TXT, which is a text file. Let's explore them one by one, starting with abduct by. We're looking at a sample flask application toes or familiar with fight on and have worked with flask earlier will find this file a piece of cake and those who have not touched upon flask. 
18. Demo: Miscellaneous Instructions of Dockerfile (Part 2):

Welcome to the concluding lecture of the Dockerfile section. In this lecture, we will achieve three objectives: understand and implement container health checks using Dockerfiles, do the same with the STOPSIGNAL instruction, and, while we are at it, we will also containerize a sample Flask application. As always, we will start by knowing our present working directory, which is demo-nine under CMC. If we check the list of components, we have three files this time: app.py, Dockerfile and requirements.txt, which is a text file. Let's explore them one by one, starting with app.py.

We're looking at a sample Flask application. Those who are familiar with Python and have worked with Flask earlier will find this file a piece of cake, and for those who have not touched upon Flask, don't worry, there is nothing incomprehensible. Flask is a WSGI (Web Server Gateway Interface) framework. In other words, in the case of Python, it allows a Python application to talk to web servers in order to forward and receive web API requests and responses. We have started our file with a simple import statement to import the Flask class from the flask library, or framework. If you're wondering why in the world we would have the Flask framework or Python installed, hold your breath; those pieces will join the puzzle soon enough.

Next up, we're creating an app instance from the Flask class. Its argument is __name__. This name string can be replaced by any other that you like, but it is recommended to keep it __name__ if we are running a single-module application. When the Flask app is run, __name__ is replaced by __main__, which will make our instance the main instance. The next line is a decorator, which is a wrapper that describes a function using another function as its argument. The purpose of this decorator is to route incoming requests for forward slash, which is comprehended as localhost port 5000. Next, we're defining the function which will run within this web application instance. It is called cmc, and it will simply print the string "Welcome to the Container Masterclass by Cerulean Canvas" as its return value. Finally, we're instructing Flask that if our instance is main, which it is, then run this application and make it publicly available. Let's exit this file.

Next up, we have the smallest file in the whole course, called requirements.txt. If you remember, during the container introduction theory, we had mentioned that containers reduce a long list of requirements. Witness it: we just have one entry in requirements.txt, which is Flask version 0.12.2. But we will not recommend you to install that externally either. After all, containers are isolated environments, so every installation should ideally happen during the image build time itself.
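For reference, the two files just described would look roughly like this; the exact source isn't shown in the transcript, so the host and port arguments are an assumption based on the narration:

    # app.py -- sample Flask application (approximate reconstruction)
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")          # route requests for / on port 5000
    def cmc():
        return "Welcome to the Container Masterclass by Cerulean Canvas"

    if __name__ == "__main__":
        # host 0.0.0.0 makes the app reachable from outside the container
        app.run(host="0.0.0.0", port=5000)

    # requirements.txt
    Flask==0.12.2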
Speaking of images, we need a Dockerfile to build this app, so let's exit this file and open the Dockerfile. Starting off, we have the Ubuntu base image, and we're running an update and an installation of Python, pip and curl. We're copying all of the contents of this host directory to the /app directory of the image and making it the working directory. Next up, we're installing the contents listed in requirements.txt. We could have simply mentioned Flask there, but it is a standard practice to list out your requirements in a separate file and install them using the file itself. It also makes the readability of the Dockerfile simpler for other developers. Now that our prerequisites are set up, we can run app.py as a Python application using the CMD instruction.

Before the CMD line, we have the HEALTHCHECK instruction. A health check is a way to perform a user-defined, or developer-defined, periodic check on a container to determine whether it is in the desired situation, also known as healthy, or not. This instruction comprises three aspects, or three types of arguments: interval, timeout and command. Interval defines a timeframe after which the periodic health check will be repeated. We have kept it at 10 seconds, which means a health check will be performed on the running container every 10 seconds. Timeout determines when to back off if the container remains unhealthy; backing off would imply performing a container restart. This brings us to another question: how do we determine if the container is unhealthy?

Docker acknowledges the fact that every container or application has its own definition of being healthy. For example, in this Flask application, just because resources are properly allocated and the container is running does not mean the application is working correctly. What if the web server is not serving anything? What if we come across 401 or 404 errors, where the desired webpage is not available? It would completely kill the purpose of this application in the first place. That's why we have the command, or CMD, argument. It executes the commands that follow CMD, and the results define whether the container is healthy or not. So it is up to us to provide the proper commands which can correctly determine the container's situation. In this case, we're providing a command with a logical OR condition, which means either this or that. Our first command is curling localhost on port 5000, which would display the result of the Flask application. But we have attached a --fail flag to it, which means that if the command encounters an error like 401 or 404, it will not show any output, not even the default response such as "this page cannot be displayed" etcetera. In that case, the second command will be performed, which returns exit status 1. The reason for writing the second command in such a way is that the HEALTHCHECK instruction considers exit status 1 as unhealthy. So we are curling the address serving the Flask application every 10 seconds, and as long as it doesn't encounter any serving error, it will not return exit status 1, which means the container is healthy. If it does encounter an error like 401 or 404, it will return exit status 1, which means the container is unhealthy, and enough such alternations will cause a back-off. It is mandatory to write HEALTHCHECK before the CMD instruction, to avoid overriding it.

Next is STOPSIGNAL. When we terminate a Docker container, Docker sends the SIGTERM signal to the Linux process responsible for running the container. SIGTERM gracefully kills the process, which means it clears out all of the cache and memory before detaching the process from its parent and freeing up resources to be used again. But it might cause a crash or an endless loop if there's a fatal error or a vulnerability exploitation in the application, which means it becomes necessary to use SIGKILL instead of SIGTERM, which immediately kills the process. STOPSIGNAL allows you to replace that default SIGTERM with the signal you desire. In other cases, you might even have to use SIGUSR1 or SIGSTOP, depending on the nature of your application. We're replacing SIGTERM with SIGKILL in the STOPSIGNAL instruction. With that said, let's save this file and exit.

Let's build the image and name it flask-app using the docker build command. The build is done. Now let's run a container out of it and call it flask. There we go. Now let's have a list of these containers. The first one is flask, and if you take a look at its status, it shows up and running along with "healthy", which means the health check is being performed. If you want to verify whether the health check is correct or not, curl localhost on port 5000, and there we go: it's the output of our Flask application. Finally, let's stop the container. When we list our containers again, we can see that flask has just stopped recently, but unlike other containers, it stopped with exit code 137, which in Linux terms indicates the exit code of a process terminated by SIGKILL. So our STOPSIGNAL instruction also worked correctly.
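To recap, the Dockerfile walked through above would read something like this; the timeout value, package names and port publishing are assumptions, since only the 10-second interval is stated in the narration:

    # Dockerfile -- flask-app sketch with HEALTHCHECK and STOPSIGNAL
    FROM ubuntu:16.04
    RUN apt-get update && apt-get install -y python-pip curl
    COPY . /app
    WORKDIR /app
    RUN pip install -r requirements.txt
    # probe the app every 10 seconds; exit status 1 marks the container unhealthy
    HEALTHCHECK --interval=10s --timeout=5s \
        CMD curl --fail http://localhost:5000/ || exit 1
    # replace the default SIGTERM with an immediate SIGKILL
    STOPSIGNAL SIGKILL
    CMD ["python", "app.py"]

    $ docker build -t flask-app .
    $ docker run -d --name flask -p 5000:5000 flask-app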
It seems like we have achieved all three objectives of this lecture, so see you in the next one.

19. Demo: Docker Hub Walk-through:

It is about time to go beyond our little host machine and get to know the wide community of Docker. The best way to do so is to get started with Docker Hub. Get back to your web browser, go to hub.docker.com, and where we land is the home page of Docker Hub. Docker Hub is a cloud-based service hosted by Docker itself, which allows you to build, link and manage your Docker images. It also provides some production-grade useful features like automated builds. Just for your information, the auto-build feature that we used in our previous section, where we did not provide any file name while passing the build command and yet Docker built the contents of the Dockerfile, is also backed by a service of Docker Hub.

To access its provisions, first we need to create an account, which is totally free, and all it needs is a generic set of data like username, email ID and password. Once we have added that, let's agree to the terms of service and prove that we are not robots. After this step, you should receive an email at the ID you provided, and you should click on the activation link. I mean, that's obvious, right? Once you have activated your account, you will land on a page which should look similar to this one. It is called the dashboard. It displays your username and provides links to almost everything that you might want to do on Docker Hub.

First of all, we are on the repositories tab, where you can explore the globally available repositories or create one yourself. You can also create an organization, which serves as a unit of people management above the repos themselves. It is useful if you are not an individual but are acting for, or on behalf of, an organization. And since we have not created any repos yet, we don't have any starred repos or contributions in general.

On the panel above these tabs, we have a few links. The first of them takes you to the dashboard, where we already are, so clicking on it will be pretty much pointless. By clicking on the Explore option, we get a whole new world of popular repositories created by individuals and organizations around the world. To be honest, one of the aspects which makes Docker so popular and loved among engineers is the enormous contribution by the community in such a short time, and the fact that Docker acknowledges its importance and provides one place to access it all. These repos are ordered by the number of pulls they have received, and our NGINX, which was used in our first-ever container of this course, is at the top of the list. The Organizations option provides us another link to things regarding organizations, and the Create menu provides us a list of options where we can create either a repo, an organization or an automated build. An automated build can be created by providing a build context, which is generally a repository containing the Dockerfile named Dockerfile. In other words, it is the web version of the short docker build command that we have been using in the previous section. Since it is the web version, we have to use a code and version management service like GitHub or Bitbucket. And finally, we have a list of options for our own profile, where we can do some customization, like adding more information about ourselves, changing passwords, getting some kind of help and, most importantly, the documentation.
In the next videos, we'll understand Docker images in greater depth and work with them.

20. Understanding Docker Images:

We have already studied and worked with Dockerfiles; it's time to focus on Docker images. As we have seen previously, a Docker image is a collection, or stack, of layers which are created from the sequential instructions in a Dockerfile. The layers are read-only, with the exception of the topmost layer, which is of the read-write type, but we will get into that later. Docker images can be recognized either by their unique image ID, which is provided by Docker, or by a convenient name or tag, which is provided by us, the users. Finally, they can be pushed to or pulled from Docker Hub, which we just visited in the last demo.

If we want to visualize the layers of a Docker image, they would stack up like this. We start with the boot file system, which is pretty much similar to Linux's own boot file system. It is an arrangement of cgroups, namespaces and resource allocation which virtually separates the image from the rest of the files on the host or cloud. On top of that, we have the base image layer, which, along with the layers above it, follows the file mapping laid out by the boot file system layer. Next, we have layers such as the work directory, environment variables, add, copy, expose, CMD etcetera.

Speaking of intermediate images, here are a few points to remember. First of all, as we have mentioned earlier, intermediate images are created out of individual Dockerfile instructions, and they act as layers of the main, or resultant, image. All of these intermediate images are read-only, so once the image is built, these layers will not accept any change whatsoever. They have separate image IDs of their own, which can be viewed using the docker history command. If you're wondering why Docker has intermediate images in the first place: it is for caching. For example, if you're building two different images from the same base image, like NGINX and Apache on top of Ubuntu, the base image layer will only be downloaded once and will be reused when it is the same. To make this caching simpler, we have intermediate images, where each layer has its own significant identity and separates itself from all other layers in terms of usability. But intermediate images may not be used on their own, since they would not be sufficient to run a container process by themselves. For example, even the smallest image would consist of at least one base image and one CMD or ENTRYPOINT instruction. Finally, they're stacked as loosely collected read-only layers by AUFS, which is a union file system.

21. Demo: Working with Docker Images | Search, List, Push, Pull and Tag:
Here, the first image has the most stars, and it is also the official image. Next, we have quite a special case. Doctors Search registry command gives official image off Docker registry from Dr Hub. If we don't want to get such a long list off repositories, we can also put filters on our search here we hope Put freely there is hyphen official equals True, which will only show us official images. There we go. We only got one image sweet, right for those who like their results need en tidy. Doctor also lets you format the results off the search. Here the format is mentioned in double inverted commas and it starts with the keyword table , which means we want a tabular format. Then we have entered the desired feels that we want. The fields are mentioned in double curly braces and they're separated by back slash D, which times for tab. What space character? You might have guessed by now that this will create three columns, one off each field. Now that the predictions and wish lists are done, there's under command. There we go, are crowded, little table is here and it is showing the same repositories as before. Just in visually different format. Also noticed that we only helped three fields that we had mentioned in the command and rest of the fields are skipped. Moving on from doctor search, we held Docker images command. It is a shorter version off docker images, a less common and both off them do exactly the same thing which is list out the images on your host. As you can see, these are the images that we built during our previous section. On the other hand, if we want to list out versions or instances off particular type of image, we can mention the image name followed by Docker Images Command. Let's try and list all our open toe images here. We can also see the size of the image, which denotes the size Dick currently occupy on the storage off host machine Off course specifying the version number preceded by a Kahlan narrows down the list just to one entry . Furthermore, if we want to see the full parts off truncated data like image I d. We can use hyphen, hyphen, no hyphen, trunk, flag as well. But be cautious while using it, since it can make the results messy, Really messy. Then we held docker. Pull it, Busta specified image from doctor huh Door knocker host. Here we have provided engine X with Colin latest attack. So which our image will have the latest tag on Docker hubs and the next repository will be pulled. As you can see, it has downloaded a newer version off Engine X, which is latest instead off latest. If we use engine X colon, Alpine doctor, Hubble provide an image with alpine tag. Now, if we grab a list off available engine X images on our host, we get too often. First is the Alpine one, which we just pull, and second is the latest version, as you can see both off them very majorly. In terms off size, Alpine is like minimal engine X image, which is smaller in terms of size, since Alpine as the basis itself is smaller. Finally, if we want all variants off engine X images, say, for testing purpose, we can hit the command with hyphen, hyphen, all tax flag, and we will receive the missing images from the repository once we list the engine X images . Now it is clearly visible that these are different versions but different sizes. We're back to our doctor Hub cash port. Let's click on create repository option so we can make a repo and push images to it on the left pane. Docker is generous enough to list up the steps to create a repo. 
We're back on our Docker Hub dashboard. Let's click on the Create Repository option so we can make a repo and push images to it. On the left pane, Docker is generous enough to list out the steps to create a repo. First of all, we're supposed to provide a namespace for our repositories, so that we don't have to make the name unique across the globe. Generally, the namespace is the same as the username. Now let's name our repository. We're naming it repo-nginx; you can name it anything you like. The next step is the description of the repo. Here, as you can see, we have given a short and sweet description of this repo. If you want to describe your repo in much more detail, you can jump to the full description section of the repo. And in the final step, we can set the visibility permission for our repository. Docker offers one free private repo and unlimited public repos with a free Docker Hub account, so make your choices wisely. We don't need private repos for now, so we will select the public visibility for this repo. Now let's create the repo by pressing the Create button at the end of the page.

We have successfully created our repo-nginx. As we can see, there are some tabs above the short description of the repo. The first one is the Repo Info tab. It displays the basic information about our repo-nginx, such as its visibility, which is public, and a short description of it. The second one is Tags. You can add multiple images under a single repo, separated by different tags. If you do not specify any tag for an image, it will by default take the latest tag. The third one is Collaborators. It consists of a user, or a list of users, to whom the owner of a private repo wants to grant read, write or admin access. Next, the fourth one is Webhooks. A webhook is an HTTP callback POST request. It can be used to notify user services or other applications about a newly pushed image to the repo. The last one is the Settings of the repo. Here, a user can change the visibility permission of the repo and can also delete the repo from the user's Docker Hub account permanently. Now, as you can see, you can pull the images available under the repo-nginx repository by using the specific docker pull command, docker pull ceruleancanvas/repo-nginx, and store them on your machines. Since this is your first-ever repository created on Docker Hub, let's indulge ourselves by giving it a star. Starring the repo is a way to show that you like the repository, and you can remember it for your future references.

Now let's switch back to the terminal. Before pushing an image to a Docker registry, we need to log in to Docker Hub using the docker login command, interactively. Here we have been asked to enter our Docker Hub login credentials. We will enter our username, which is ceruleancanvas, and its password. We have successfully logged in to our account, with a warning which says that our Docker Hub password is stored unencrypted in the config.json file on our machine for future reference. That's okay for now, so we will ignore the warning and proceed to the next step. Now we will tag a local image, nginx:latest, as a new image. We will specify where we want to push this image: we write the namespace to which the image will be pushed, which is ceruleancanvas for us. Then we mention the repository name to which we want to push the image, that is, repo-nginx. You can give your own custom tag to the image, such as cc-nginx for this example, or, if you don't mention any tag for the image, it will take latest by default. This two-part format is mandated for pushing an image to a public repository.
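In terminal form, the login and tagging steps just described, with the push that follows, are:

    $ docker login                       # prompts for Docker Hub username and password
    $ docker tag nginx:latest ceruleancanvas/repo-nginx:cc-nginx
    $ docker push ceruleancanvas/repo-nginx:cc-nginx   # performed in the next step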
Now let's check out our newly tagged image by listing all the images on our machine. There you are: we have the original nginx:latest image and the newly tagged ceruleancanvas/repo-nginx:cc-nginx image. But did you notice something? These two images have the same image ID. It is because the docker tag command has created an alias for your image under its new name, so that the original image stays untouched and all changes can be performed on the new alias image. Now let's push ceruleancanvas/repo-nginx:cc-nginx to our repo-nginx using the docker push command. We have already specified the path to the destination location in the image name. As we can see, Docker is pushing each layer of the original latest image. At its end, the registry will stack all of these layers sequentially and create a new image with the tag cc-nginx in repo-nginx. At the end of the process, we get a new image digest identifying the pushed image. Now let's switch back to our Docker Hub account to verify that our image has been successfully pushed. We'll navigate to the repo-nginx repository, go to Tags, and we have successfully pushed the image. The image tag, size and last-updated date are mentioned here. In the next lecture, we will dig deeper into the image by inspecting it and looking at its history.

22. Demo: Know your Docker Image | Inspect and History:

As we know, the docker images command will list out all of the Docker images stored on our machine with some basic information, such as image ID, repository name and image tag, to identify different images. But what if we want to know more about a particular image? Well, for that we have the docker inspect command. The docker inspect command returns information about every single Docker object that has contributed to the creation of a particular Docker image, which can be very useful at the time of debugging. Let's list out all of the Ubuntu images available on our local machine by writing the command docker images ubuntu. There we are: we have four Ubuntu images with different image tags under the ubuntu repository. Let's inspect the ubuntu:latest Docker image. Type the docker image inspect command followed by the name of the image that you want to inspect. We will type ubuntu:latest here. Press enter, and as you can see, it has displayed the detailed information about the latest ubuntu image as a JSON array.

Here we can see the extended image ID of ubuntu:latest, followed by the repo name and the repo digest, which is a 64-digit hex number. Next, we have a container identifier. Don't confuse it with a container running ubuntu; it is the intermediate container which Docker created while building the ubuntu image from its Dockerfile. ContainerConfig holds the configuration details about that same intermediate container, which is stored as the image's metadata for reference. Next is the information related to the scratch image and its architecture, which is used as the base image here. It also mentions the actual and virtual size of the final image. And at last, we have the RootFS identifier, which shows the digests of all the intermediate layers of this image.
If you want to access a specific detail about an image, you can format the output of docker inspect. Type docker inspect followed by the --format flag, provide the arguments to the format flag between inverted commas, the repo tags and repo digests separated by a colon, and at last type the image name. Press enter, and as a result we get the repo tag and repo digest of ubuntu:latest. We can also save the inspect results of an image to a file in JSON format for future reference. Here, we want to store the configuration details of this image in a text file. To do so, type docker image inspect with a format of json .Config in double inverted commas and curly braces, then ubuntu, and store the result in a file called inspect_report_ubuntu.txt. That is just a name that we have given to the file; you can give it any name you want. List out all of the available files: inspect_report_ubuntu.txt has been successfully created. Let's check out the contents of this file. The config details of the latest ubuntu image are available in the text file.

If you remember, the RootFS identifier in the inspection of the ubuntu:latest image showed only the digests of the intermediate layers in the image. Based only on digests, it is difficult to determine how the image was built. For that, we have the docker history command. Docker history will show us all the intermediate layers of an image. Let's find out the intermediate layers of this image: type docker image history ubuntu in the terminal. We get all the intermediate layers of our latest ubuntu image. These layers are stacked sequentially, starting from the base image at the bottom to the CMD layer at the top of the results. All the layers have their associated image IDs, sizes and creation times. To dig deeper into this, let us find the history of one of the images which we have built on our local Docker host. We will find the history of img_apache. Now type docker image history followed by the image name, which is img_apache, and press enter.

You might be wondering why some of the rows of the IMAGE column in both results contain "missing" while some of them have their image IDs. As you may remember, intermediate image IDs are given to the layers created by Dockerfile instructions, and they can be used for caching purposes by our own Docker host. But if an image is pulled from Docker Hub, such caching would not happen, and since it may cause environment clashes, we are not given any image IDs for the intermediate layers of pulled images. All we can know is that they exist. So we have two types of intermediate images, which are easy to distinguish: the ones which were built by some other Docker host and which we have just used as a base image, and the ones which are committed by our instructions. You can also identify them by the time they were committed: the base image's intermediate layers are 17 months old, whereas the other ones were committed just a few hours ago.
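The inspection commands used here, written out; the exact format strings are approximations of what appears on screen:

    $ docker image inspect ubuntu:latest
    $ docker inspect --format "{{.RepoTags}} : {{.RepoDigests}}" ubuntu:latest
    $ docker image inspect --format "{{json .Config}}" ubuntu > inspect_report_ubuntu.txt
    $ docker image history ubuntu
    $ docker image history img_apache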
23. Demo: Clean up Docker Images:

Having unnecessary images lying around our host can be quite a bother. Firstly, it consumes a lot of disk space, and having multiple versions of similar images can cause confusion. Nonetheless, let's list out our available images. Just take a look: the list is already exhaustive. Time to narrow it down a bit to keep things neat and tidy. First, let's use our rm, or remove, command. We will remove the image with the 1-alpine-perl tag. As you may remember, these images were pulled as a stack of layered intermediate images, so those will also be removed. Accordingly, all of the intermediate images, along with the resulting image, will be removed from our host. Just to verify how our command did, let's get another list of images, and we shouldn't find any image with the 1-alpine-perl tag.

Another way to write docker image rm is to simply write rmi and follow it with an image ID. When we use an image ID instead of an image tag, all images containing that ID will be removed. Here, the 1-alpine and alpine variants of the nginx image will be affected by this command. On the other hand, such an operation involving the ID of an image which is used more than once cannot be performed normally. That's why we're getting this error and the suggestion to remove them forcefully. Let's do so: we will use the same command with the force flag. As you may notice, all of the images with this ID will be freed from their tags, and they will be removed along with the intermediate images.
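As shell commands, the clean-up steps look like this; the image ID shown is a placeholder for illustration:

    $ docker image rm nginx:1-alpine-perl
    $ docker rmi 719cd2e3ed04          # removes every tag that points at this ID
    $ docker rmi -f 719cd2e3ed04       # force removal when the ID is shared by multiple tags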
24. A Container is born!:

We are done with both Dockerfiles and Docker images, so now it is time to pay our much-needed attention to the central point of this course: containers. We have already seen the formal definition of containers, but if we consider our updated knowledge, the simplest way to describe a container would be: a running instance of a Docker image. You can compare it to the analogy of a process and a program in Linux. Just like a process is a running instance of a program, a container is a running instance of an image. With the help of namespaces on the Linux host, containers provide isolation similar to VMs. Each container has its own file system, network driver, storage driver and administrative privileges as well. Despite all of this, any container would be at least a hundred times lighter than a VM hosting the same set of software. We have seen previously that Docker images are made of read-only layers, and the topmost layer is writable. Well, this top layer is provided while creating a container out of the image. With correct network configurations, containers can also talk to each other via IPs or DNS. A container also follows a copy-on-write policy to maintain the integrity of the Docker image, which we will explore soon. You may wonder what exactly we mean by running the image. Well, much to no one's surprise, run can be defined pretty simply in our context: it means allotting resources like compute, memory and storage.

25. Container Life-cycle:

A container's life cycle is pretty much similar to a process's life cycle in Linux, because, after all, a container is just a running process instance of a Docker image. We start with the created state, which can be a part of the docker run command or can be explicitly caused by the docker create command. If it is a part of the run command, it will automatically lead to the next stage, which is the running state. It means that the created container, or the scheduled process, is running, and resources are being actively used by it. Alternatively, if a container is explicitly in the created stage, it can be sent to the running state with the start command. Next is the paused stage, which won't occur on its own for the most part. You can strategically cause it with the docker container pause command and resume it similarly with the unpause command. The container process will go to a pending state and, once resumed, it will be back to being up and running. Next is the stopped stage, which means the process of the container is terminated, but the container ID still exists, so it can be re-scheduled without creating another container and registering its ID. This can happen for multiple reasons: it can be caused by an error, a restart policy, or simply the container having finished its run-to-completion task. We can manually stop and restart containers with the docker container stop and restart commands, respectively. Finally, we have the deleted stage, where the terminated container is removed and its ID is freed up. It will stop appearing in the list of containers.

To expand further on multiple containers from a single image, consider this diagram. The read-only layer is common, and the read-write layers are fetching data from it. This does not cause any data corruption, since the data of the read-only layer is not going to be modified in the first place, and the system just has to perform multiple read operations on the same data. This optimizes the storage of the Docker host, whereas the number of running containers from the same or different images on a single host will always depend on the host's architectural limitations, like memory and processing speed. Another important aspect of containers is their copy-on-write mechanism. What's that? Well, it's a pretty simple deal. By now we have seen that the writable layer of a container is mounted on the read-only layers of the Docker image. Well, that was true, but it has a little secret to it: the read-only layer files themselves are untouched. A copy of them is created, and the read-write layer is mounted on that copy, which makes it easier to recover the layers in case of any unauthorized host file system access or content damage.
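The life-cycle stages map one-to-one onto CLI commands; here is a quick sketch of the transitions, with an illustrative container name and a long-running command so the container stays alive:

    $ docker container create --name demo ubuntu sleep 600   # created
    $ docker container start demo                  # created  -> running
    $ docker container pause demo                  # running  -> paused
    $ docker container unpause demo                # paused   -> running
    $ docker container stop demo                   # running  -> stopped
    $ docker container start demo                  # stopped  -> running again
    $ docker container rm -f demo                  # any state -> deleted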
26. Demo: Container Run Vs Create:

Let's test out both of these commands with a busybox container. First, we will use the docker container create command. It is followed by the -it flags, which mean it will be interactive and teletype-enabled. We haven't given it the detached flag, since we don't need to. We're naming our container cc-busybox-A, and we're using the busybox image with the latest tag. When we run the command, since the content of the image is not available locally, it will be pulled from Docker Hub. Once it is pulled, what you see at the end is the unique container ID created by Docker. The ID is unique at least across the host, and across the cluster if you're running one. Now our container should be created. To list our containers, we have to run the command docker ps -a, and once we do so, we get a list of all the containers which are running, are about to run, or have finished running on this host. The output layout is fairly simple, and the topmost entry is our recently created container. It is not in the running state yet, which can also be verified from the status column. It is followed by quite a few other containers which have finished running and exited some time ago. Here, the resources are ready to be allotted to the container but haven't been allotted yet. Don't worry, we'll let this container enjoy its dream run as well. But before that, let's see what happens when we run a container instead.

You might find this command similar to what we have used in some of our initial demos. That is because this is the most mainstream way to run a container. This time we also put in the -d flag, so we don't have to dive into the container, and we have named it cc-busybox-B. Since we had already pulled the busybox image last time, Docker has cached the entirety of it and has simply returned a container ID. If you're wondering why we have an --rm flag tagging along: it instructs Docker to delete this container after it has finished running. Let's check out docker ps -a again, and what we see is our top entry replaced by the cc-busybox-B container. Unlike its counterpart cc-busybox-A, this one has been running for six seconds. In fact, there is also a three-second difference between its creation time and its running time. You can assume that Docker took that time to allocate the resources and register it as a process with its host. Since we have our containers running, we'll play with them a bit more in the next lecture.

27. Demo: Working with Containers | Start, Stop, Restart and Rename:

Let's start our demo where we ended the previous one. The list of containers is still the same; just the time duration has updated. In the previous demo, we had created the container called cc-busybox-A, but we did not run it. Now, to send it into the running state, let's use the docker container start command, followed by the name of the container. We don't have to provide flags like -it, since they have already been passed during the create command. Let's run it. We won't even get a container ID here, since that had been generated previously. All we will get is the name of the container as a nod to the success of the command, in typical Docker CLI style. Time to get repetitive and list out the containers again using docker ps -a, and we have an update: our created container cc-busybox-A is now finally in the running state.

Just like start, we also have a command to stop containers. Since A has just started running, let's stop cc-busybox-B instead. A confirmation signal is the name of the container, and if you want to verify it, let's list our containers again. And wait, where is our cc-busybox-B? Does that mean there is an error? Well, no. If you remember, we had applied a flag called --rm in our last demo with the docker run command on the cc-busybox-B container, which meant that the container would be deleted once it has stopped running. The usage is simple: if you want to reuse a container, keep it; if you don't want to reuse it, remove it and free up some resources. Next, we have the restart command. Let's restart our cc-busybox-A container. We'll also give it a buffer of five seconds. And when we verify, what we get is a freshly started container, up and running. Finally, I think all of us would agree that cc-busybox-A was not that great of a naming convention to follow. It's just lengthy, complicated and bland. If you have such thoughts about your containers, we have a command to rename them. Let's be a bit more casual and rename cc-busybox-A as my-busybox, and when we list the containers, we can see the change is reflected. By the way, notice that the container has just been renamed, not restarted, which means we can rename containers almost whenever we want, unless it affects some other containers. In the next lecture, we will do something more application-related with our containers.
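The command sequence from these two demos, in order, with the flags reconstructed from the narration:

    $ docker container create -it --name cc-busybox-A busybox:latest
    $ docker run -itd --rm --name cc-busybox-B busybox:latest
    $ docker ps -a
    $ docker container start cc-busybox-A
    $ docker container stop cc-busybox-B      # --rm removes it once stopped
    $ docker container restart -t 5 cc-busybox-A
    $ docker container rename cc-busybox-A my-busybox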
28. Demo: Working with Containers | Attach and Exec:

Just like in previous demos, we have a list of containers here. Now let's use the docker container attach command. It means that we are attaching the standard I/O and standard error of our container to the terminal of our Docker client. We are attaching the my-busybox container here, so let's hit enter. As you can see, we are now accessing the standard I/O, a terminal, of busybox from our Ubuntu terminal. If we hit ls, we will see a list of the available directories in the busybox root environment. We can play around a bit more and navigate to other directories as well. If we exit, we return to our Ubuntu host terminal, and there is an interesting aspect to the attach command: when we list the containers again, we can see that the my-busybox container is not running; it exited a few seconds ago. In other words, exiting an attached container causes it to be stopped.

An alternative to this is docker exec. It allows us to run any command we want, and it executes it in the container. But first, let's start our container again. Now we have used docker exec, which stands for execute, with the -it flags, and have directed it to run and print the result of the pwd command. Once it succeeds, we get a forward slash, which indicates the root of our busybox. Unlike attach, if we list the containers again, we'll find our container still up and running.
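Side by side, the two approaches look like this; note that exiting the attach session stops the container, while exec leaves it running:

    $ docker container attach my-busybox      # binds your terminal to the container's stdio
    $ docker container start my-busybox       # restart it after the attach session stopped it
    $ docker exec -it my-busybox pwd          # runs one command inside; prints /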
29. Demo: Inspect and Commit Container:

It is time to know our containers in greater depth. First, we have the list of containers. Just to avoid any confusion: we have run an ubuntu container after the context of the last demo. Let's get more information about it with the docker inspect command followed by the container name. What we get as output is the JSON description of the container. We don't need to be intimidated by the sheer amount of information; we'll interpret it one piece at a time. Starting from the top, we have the container ID provided by Docker, the timestamp of container creation, the path which the container is running, and no arguments, since we haven't provided any. In the State block of the container, we have indications that our container is in the running state, and not paused, restarting or dead, and that it has not been killed by going out of memory. Its process ID on Ubuntu is 694. Then we have information about the image in terms of the image digest, and we have various paths, such as the host path, log path and configuration path. Then we have another bunch of information where most of it is irrelevant to this particular container, so the values are either null or empty. The ones which do matter are the name of the container and the fact that it has not restarted yet. Following this, we also have network, volume and other information, which might be useful to you once we proceed further in this course. For now, we can focus on finding specific information from the inspect command, since even if you get completely familiar with all the attributes, reading them every time can be really daunting. Let's use the format flag with the inspect command and narrow down the results to just the IP address. We can do this by narrowing the range to networks and, under network settings, choosing the IP address field. There we go: we have the IP address of our container.

Next is the commit command. To effectively use this command, we need to make at least one change to the container state after it is created from the image. Just to remind you, this ubuntu container is created from the same image that we had pushed to our Docker Hub repo. Let's execute it with bash; you should already be used to this command by now. Let's verify by listing the directories. Yes, we are in the container. Now let's run an update. The purpose here is just to change the state of the container from when it was created. Once the update is complete, let's exit. Now let's use the docker commit command, followed by the container name, which is my-ubuntu, and the updated name in the format of a Docker Hub repo image. We have kept it as updated_ubuntu:1.0. Once we enter it, the update will be committed. As you may have guessed, it is essential for us to be logged in to our Docker Hub account to follow this demo. The updated container is committed as an image, as its read-write layer turns read-only and is stacked on top of the previous layers of the former image. So instead of containers, if we list out the images, we can find the updated one, which can be directly run as a container, and we won't have to run the update command again. This helps in maintaining versions of Docker images. In the next lecture, we will learn about port mapping.

30. Demo: Container Exposure | Container Port-mapping:

In this demo, we will map our host machine's port to a container port. The command is fairly simple, as we just have to extend the run command with a flag. We'll map our host's port 8080 to the container's port 80 on TCP by mentioning it after -p. Notice that the image used here is the one that we had created while working with the EXPOSE instruction. Now, when we run the container, we'll get the ports mentioned in the output. The output looks a bit messy, but the annotations should help here. Now we'll create another container from the same image, called con-nginx-A. Instead of providing ports and protocols like earlier, this time we'll just provide a capital -P and allow Docker to map the ports by itself. Here it will use the information provided by the EXPOSE instruction in the Dockerfile and tally the available ports from the host machine's network drivers. We can see that the new container has port 80 mapped from the container to port 32768 of the host. We can also view this information by running the docker container port command, followed by the container name. Finally, when we load localhost on port 8080 in our web browser, we can see the NGINX home page, which indicates that our port mapping was successful. When we do the same with the other container, it shows the same thing as well. In the next lecture, we'll clean up our workspace.
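The two publishing styles side by side; the detached flag and container names are reconstructed from the narration:

    $ docker run -d --name con_exposed -p 8080:80/tcp img_exposed   # explicit host:container mapping
    $ docker run -d --name con-nginx-A -P img_exposed               # auto-map every EXPOSEd port
    $ docker container port con-nginx-A
    80/tcp -> 0.0.0.0:32768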
31. Demo: Container clean-up | Prune and Remove:

In this demo, we will learn different ways of removing containers. Let's list out all of the containers. And yes, there are quite a lot of them. In fact, many of them are not even that significant at the moment and should be removed. First, we have the basic rm command, followed by a container's name. Here we have picked one of the stopped containers. Once it is removed, it will disappear from the list. Then we have the same rm command, but instead of providing names, we have provided the container IDs of the stopped containers, and the result is the same: they disappear from the list after being removed. The case will be a bit different with running containers. Just to make sure that we are not making any mistakes while deleting a running container, Docker asks us to provide the forced-demolition flag. I would say it is a kind gesture, since it avoids potential unforced errors. As we add the force flag, nothing can stop us from removing it. If we want to be kind to containers and want to kill them properly, we can send the SIGTERM signal using the docker container kill command with its signal option. But as you can see, we still have quite a few containers running, and we don't need the stopped ones for the most part. To remove the stopped containers, we have a command called docker container prune. It is a short and sweet command and doesn't require any names or IDs. It will simply remove all of the dangling containers and free up whatever resources it can. We had three such containers, which got removed, and we got 1.8 megabytes of free space. Finally, our list of containers only contains the live ones. In the next module, we will go deeper into networking.
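A compact summary of the clean-up commands; the names and IDs are illustrative:

    $ docker container rm quiet_wozniak       # remove a stopped container by name
    $ docker container rm 3f2a9c1b 77d01e4f   # or by ID, several at once
    $ docker container rm -f angry_bell       # force-remove a running container
    $ docker container kill --signal=SIGTERM angry_bell   # or signal it explicitly
    $ docker container prune                  # sweep away all stopped containers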
32. Multi-container Applications and Introduction to Networking in Docker:

Till now, we have played with single containers in our demos, and even when we did use more than one container, they were completely independent of each other. For example, one container might be enough to host a static landing page, but a smartphone app would definitely require more than one container, where each of them serves a specific purpose. In such a case, information exchange between containers becomes a crucial factor in the overall performance of the application. In other words, they need to talk. The communication can be one-to-one, one-to-many or many-to-many. In the case of Docker containers, these communications are managed by objects called network drivers. To define them simply, a Docker network driver is a piece of software which handles container networking. They can be created simply using the docker network command; no images or files are required. Speaking of networks, these can span from single-host instances to multi-host clusters. For now, we'll focus on single hosts, and we will visit cluster networking when we deal with Docker Swarm. Docker network drivers are quite reliable, since Docker itself uses them to communicate with other containers and the outside world. This also means that Docker itself provides some native network drivers, if we don't want the bother of creating ones ourselves; as a trade-off, that means less control over IP ranges and ports. Apart from the networks we create and the default ones, Docker also supports remote network drivers, which are developed by third parties and can be installed as plugins, although they are still in quite a growing state, and mostly they're useful for specific use cases, like enabling networking on a certain cloud provider. Apart from network drivers, Docker also provides the IPAM, or IP Address Management, driver, which handles IP address ranges and distribution if they are not specified by the admin. I know you have loads of questions, like: how do these networks work? Are there any types? Is there any structure which they follow? Well, we will explore all of these details in the next lectures, when we study the container networking model and the types of Docker networks.

33. Container Networking Model (CNM) of Docker:

Let's dig deep into the container networking model. First of all, we have the host network infrastructure. This includes both software and hardware infrastructure details, like using Ethernet or WiFi, and the host OS's kernel network stack, in our case the Linux network stack. On top of that, we have the Docker networking drivers, which include the network and IPAM drivers. We stated their functionality briefly in the last lecture. On top of these drivers, we have the Docker engine, which creates the individual network objects. As you might have guessed, user-defined and default container network objects sit on top of the Docker engine, since they're provisioned by it. These blocks are a part of Docker itself. On top of a container network, we have running containers, which are accompanied by at least one endpoint each. I said at least one because it is normal for a container to be connected to two or more networks and hence consist of more than one endpoint. Speaking of endpoints, they are the container-side representation of a virtual Ethernet connection, which is the common mechanism for networking across Docker. They contain networking information such as the IP address, the virtual physical address and ports. As mentioned earlier, if a container is connected to more than one network, it will have more than one corresponding endpoint, each containing a different IP. The scope of these IPs would typically be limited to the host in the case of a single-host implementation. Within the same scope, if two containers are connected to the same network, they can also communicate via DNS, where container names can be used instead of IPs. Container networks provide this information to the network and IPAM drivers; the drivers then translate these requests into host-network-supported packets and transmit them, making sure containers can communicate with the outside world. Because if that didn't happen, forget NGINX, you wouldn't even be able to execute an apt-get update command properly. So this is how the container networking model works. In the next lecture, we will look at the network driver types in detail.

34. Docker's Native Network Drivers:

Out of native and remote network drivers, we are going to work with native drivers. Native Docker network drivers are used in the creation of default and user-defined networks. Do you remember this diagram from the previous lecture? Let's shrink it a bit for convenience. Now let's consider the first type of network: the host network. The idea is pretty vanilla here. The network credentials of the host are directly reflected on the container endpoint, which means containers connected to this network will have the same IP as the host itself. This doesn't mean that containers will abandon their true nature, though. Getting a bit more practical, let's say we have two containers connected to the host network. In this case, both containers will communicate via virtual Ethernet, reflecting the capabilities and limitations of the host machine.

Moving on from host, we have the bridge network. It is also the default network for Docker containers: if we don't explicitly connect our containers to any network, they will be connected to the default bridge network. The name of this network helps a lot in defining its properties. It creates a virtual Ethernet bridge; all of the containers connected to this network are connected to this bridge via their container endpoints, and the bridge communicates with the host network. It means that the containers will be isolated from the host's network specifications: containers will have different IPs than the host. We can define the IP range and subnet mask for the bridge and subsequent networks, but if we choose to opt out of this decision, the IPAM drivers manage the task for us. We can thus address these containers using the IPs provided by the virtual bridge. Of course, the communication will pass through the host machine's network, meaning if it is down, the bridge won't be able to do much either. But this can help us hide the DNS name or IP of the host. In recent versions of Docker, 17 and above, we can also use container names to address them when we're communicating within the same Docker bridge network. We'll practically explore these networks more in the demo lectures.

Further, we have overlay networks. In the case of an overlay network, we do need to come out of the cocoon of single-host Docker infrastructure. In industrial usage of Docker Community or Enterprise Edition, you will most likely find a cluster, or clusters, of Docker hosts, which will run a single connected, or at least a related, set of containerized applications.
Such an arrangement is called Swarm mode in Docker. Swarm heavily relies on the overlay network provisioning of Docker. We are yet to cover Swarm in our course, but do not worry, this explanation will not flood you with unknown Swarm terminologies. In the case of the bridge network, all we had to worry about was the container's IP, since we had only one host. But with an overlay network, we will have multiple hosts running multiple containers, where any combination of communication might be necessary. So while establishing or performing container-to-container communication, our network driver can't get away with just keeping track of container IPs; it also needs to route its communication to the proper host. To solve this, the overlay network keeps two layers of information: the underlay network information, which contains data regarding the source and destination hosts' IPs, and the overlay information layer, which contains data about the source and destination containers' IPs. As a result, the communication packet header will consist of the IP addresses of both the source and destination hosts and containers. We will look into it practically when we introduce Swarm.

35. Demo: Create Docker Networks:

In this demo, we will create our first Docker network and understand it. We will do it by using the docker network create command and furnish it with the driver flag. Our driver for this demo is the bridge driver, so we will pass the argument bridge, and finally we'll give it a suitable name: my-bridge. What we get as a result is an ID for the network object which has been created. Now, before we dig deep into my-bridge, let's create another network called my-bridge-1. We'll provide a few more parameters with this one for better comparison. Apart from the previously provided driver flag and its value, bridge, we have also provided the subnet and IP range. Again, we receive another ID. Let's list these networks out. As you can see, my-bridge and my-bridge-1 are not the only available networks on the list. That is because Docker provides us a set of default, pre-created networks using different network drivers. Here they are: bridge, host and none. You can tell by the names that bridge and host are using the corresponding network drivers. None is a special case, though: it is used to indicate pure isolation and lack of connectivity. We can also filter the search by providing the filter tag. Let's apply a filter saying we only want bridge networks, so the driver field will be set to bridge, and here we have all the networks created with the bridge network driver.
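In command form, that demo is roughly as follows; the subnet and range values are placeholders, since the exact ones aren't legible here:

    $ docker network create --driver bridge my-bridge
    $ docker network create --driver bridge \
        --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 my-bridge-1
    $ docker network ls
    $ docker network ls --filter "driver=bridge"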
Inspect command after using inspect on my open to If you navigate to the networking fields off the output, you can see that we have description off bridge network, my bridge one attached to my open toe container. And it also has the alias, which is same as the one we had received after creation off tat bridge network. You can also notice the end point, which is described with an endpoint i d and the one next. Come on. Instead, off using a separate command to connect the Doctor Network. You will mention it along with the Run command using network flag. Here we are providing host network to the container name Kant. Underscore Engine X, which will be created from engine next image. Having the latest tag, notably if you run Docker Container Port Command with corn underscore engine X. You won't receive the port mapping information since no port mapping takes place with host network driver container communicates to Internet using port off host itself We can you more information about this horse network using inspect command on container. And as you can see, we can get network I D and endpoint details off the host network instance. Just like in previous container Here, too, you can notice a field named bridge under network settings. This field is empty. The reason is, if we do not provide any network manually, Dr Price, the default president, work to every container. No Let's inspect the default bridge network. It seems that it, too, has its end point. Submit, and I'd address range. Now, if we look at the containers field, we will find my open toe or, to be precise, only my open toe. The reason why corn underscore Engine X is not listed here is that it is connected to the host network, Dr The Next a container toe, one off the D Ford networks. And mostly the priority is bridge. Unless we mentioned otherwise. Explicitly, not the I P address off my open toe under D for Bridge Network, which is 172.17 dot zero dot to. Now let's inspect user defined bridge network in our case, my Bridge one network. It has similar parameters compared to default bridge. Apart from different endpoint, I be Range and I ds. It also has my open toe container connected to it. But the I P is different from the default bridge. In other words, my open toe container can be accessed from both the networks using corresponding eyepiece. We can also format the output off, inspect command like we used to do it. Previously, let's grab the value off scope field off the Fort Bridge network are we can grab a set off i D and name for the same as it is visible in the output. The first entry is the network I D. And the 2nd 1 followed by a Kahlan, is the network name. Now let's list are containers again to see what to do next. Well, we can see what happens when we disconnect a network from Container. Let's use Doctor Network Disconnect Command, followed by network name and container name, which are my bridge one and my open toe. In this case. Finally, if we inspect our network, we can see that container My open toe, which was previously mentioned. There is successfully out of sight. Similarly, if we inspect the container, we won't find the user defined network eater. 37. Demo: Ping one Container from another: in this demo, we will finally see the results off our doctor. Networking Hustle. Starting off, Let's follow our standard practice off. Getting a list off. Doctor networks were quite clean. All we have our default host bridge and neural networks not discreet. A bridge network called Net Bridge and provided sub Net and I p. 
37. Demo: Ping one Container from another

In this demo, we will finally see the results of our Docker networking hustle. Starting off, let's follow our standard practice of getting a list of Docker networks. We're quite clean: all we have are the default host, bridge, and none networks. Now let's create a bridge network called net-bridge and provide the subnet and IP ranges as mentioned in the command. Once that is done, run a container called cont_database from the Redis image and connect it to the net-bridge network. Let's fetch its IP, since we will be using it later in this demo; the IP of this container is 172.20.240.1. Let's run another container from the BusyBox image and call it server-A. This one is also connected to the net-bridge network, just like the previous one. Now let's inspect our net-bridge network to find out which containers are connected to it. There we go: both cont_database and server-A are connected, just as we had expected. Furthermore, server-A's IP is 172.20.240.2, following the range which we had provided. Run a third container, also from the BusyBox image, and call it server-B. Notice that we have not mentioned any network whatsoever, which means it will be connected to the default bridge network. We can verify it by inspecting its network information, and while we're at it, let's note its IP as well, which is 172.17.0.3.

Now let's switch the view a little. We have three terminals, which we will be using for three different containers. If you don't want to go through all this trouble, you can use multiple terminals and keep switching between them, or run them on multiple displays; however you feel comfortable. Let's exec into the cont_database container with the bash command. Once we have navigated to the root of the container, let's start to ping Google. Oops! It seems like ping is not installed in the base image of Redis, so let's go ahead and fix that. Run a generic update and install the ping utility (iputils-ping) with this command. Once the installation is complete, let's resume where we had paused the flow of this tutorial: ping Google. I love saying this: ping Google, ping Google. That should be enough; let's block it with Ctrl+C. And what we see is a successful ping with no packet loss. Now, if you remember, we have noted the IPs of all of the containers. server-A's IP was 172.20.240.2; let's ping that. It was a success. It means two of our containers just talked to each other without any sort of packet loss, since they're connected to the same bridge network. This communication was more or less IPC, or inter-process communication, within the Linux host; but considering the isolation they operate under, it can be treated like the two ends of an application communicating. Going further, let's go to another terminal, exec into the server-A container, and ping Google and the cont_database container from it. Both will be successful, since a bridge network allows containers to communicate with the external world using virtual Ethernet, and containers connected to the same network can talk to each other using their endpoints. Lastly, let's work with the server-B container, which is connected to the default bridge network, not the user-defined net-bridge. If we try to ping Google, it is a success; but if we try to ping the other containers, we will fail, since they are not connected to the default bridge at the moment. On the other hand, if we use DNS names of the containers instead of their IPs, containers connected to the same user-defined network will face no trouble at all while pinging each other. This explains and demonstrates the capacities and limitations of bridge networks.
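Here's a compact sketch of this demo, assuming a subnet that matches the IPs quoted above (the exact CIDR isn't given in the narration):

    # network with a custom address range
    docker network create --driver bridge \
      --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 net-bridge

    docker run -itd --name cont_database --network net-bridge redis
    docker run -itd --name server-A --network net-bridge busybox
    docker run -itd --name server-B busybox    # lands on the default bridge

    # inside cont_database: install ping, then test reachability
    docker exec -it cont_database bash
    apt-get update && apt-get install -y iputils-ping
    ping google.com        # external world: works
    ping 172.20.240.2      # server-A on the same user-defined network: works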
38. Never lose a "bit" of your data!

From a third person's point of view, this may seem like a funny story, but it can potentially cost you your job. That's the prime reason why we need efficient storage solutions with containers. The logic is pretty simple: a container's data needs to be backed up somewhere as permanent storage. A quick question that will come up in your mind would be: what, and which, data should be backed up? To answer that, we need to look back at the layered structure of Docker image and container data. If you remember, we have two types of layers: read-only layers, which hold permanent data and are never modified, due to the copy-on-write policy; and read-write layers, which hold temporary or volatile data. If a container stops or dies, the volatile data vanishes. So now we have our answer: we need to back up the important data from the volatile read-write layer of the container. Now, the next question is where to store the data. Well, just anywhere! Do you want to store it on the same machine which hosts Docker? Go ahead. Do you want to store it on another server? Go ahead. Do you want to store it on a cloud? Go ahead as well. And the last genuine question which comes to my mind: is there any dedicated type of storage object? Yes, there is. The most commonly used storage object type is called a Docker volume. In a volume, the container storage is completely isolated from the host file system; although the data of a volume is stored in a specific directory of the host, volumes are controlled and managed by the Docker command line. Compared to the other options of storage, which we will visit soon enough, volumes are more secure to ship and more reliable to operate.

Let's understand volumes. Volumes are storage objects of Docker which are mounted to containers. In terms of implementation, volumes are dedicated directories on the host's file system. If a containerized app is shipped along with a volume, people apart from the developer himself using the app will end up creating such a directory on their own Docker hosts. The container provides data to the Docker engine, and the user provides commands to store the data in the volume or to manage that data. All the container knows is the name of the volume, not the path on the host; the translation takes place on the Docker machine, and so external applications having access to containers will have no means to access volumes directly. This isolation maintains the integrity and security of hosts and containers. The second option is bind mounts. The exchange of information is pretty similar, apart from the fact that instead of creating a directory inspired by the name of the volume, bind mounts allow us to use any directory on the Docker host to store the data. While this might be convenient in some cases, it also exposes the storage location of the container, which can make dents in the overall security of the application and the host itself. Apart from that, other users, apart from the developer himself, may not have such a path on their host, and creating one may not be within their privileges or comfort. Finally, we have tmpfs, or temporary file system. Volumes and bind mounts let you share files between the host machine and the container, so that you can persist the data even after the container is stopped. If you're running Docker on Linux, you have a third option: tmpfs mounts. When you create a container with a tmpfs mount, the container can create files outside the container's writable layer. As opposed to volumes and bind mounts, a tmpfs mount is temporary and only persists in the host memory, not in storage. When the container stops, the tmpfs mount is removed, and the files written there won't be persisted. The only sensible use case which comes to my mind for tmpfs is to store sensitive files which you don't want to persist once the application gets deleted; something like the browsing history, which gets deleted if we use an incognito tab. tmpfs mounts have their limitations: they can't be shared between containers, and they won't work on non-Linux environments like Docker on Windows.
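A side-by-side sketch of the three mount types described here (the paths and names are illustrative):

    # named volume: Docker manages the backing directory on the host
    docker run -itd --name c1 -v my-vol:/app/data ubuntu

    # bind mount: you choose the exact host path yourself
    docker run -itd --name c2 -v "$HOME/data":/app/data ubuntu

    # tmpfs mount (Linux only): kept in host memory, gone when the container stops
    docker run -itd --name c3 --tmpfs /app/cache ubuntu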
39. Demo: Working with Volumes | Create, List and Remove

In this demo, we are going to create a volume using the Docker command line. Let's type the command docker volume create, followed by the name of the volume. Here we are naming the volume vol-busybox. Once the command succeeds, we get the name of the volume as the note of it being created. Before we do anything with this created volume, let's create another one, but this time in a bit different way. Here, we're going to run a container using the Ubuntu image, and we're going to mount the volume vol-ubuntu on the container's /tmp, or temp, directory. Again, we will not do anything with this volume, since this demo primarily focuses on the creation of volumes. Now let's list the volumes to see what we have created. Let's type docker volume ls, and as you can see, we have four volumes here. Two of them are created by us, whereas two of them are created by Docker using the local volume driver. Just like every other object which we have created previously, like images, networks, or containers, we can also filter the output of the ls command. Let's type docker volume ls and put in the filter dangling=true. It means that it will list the volumes which are not mounted to any container. Here, vol-busybox has not been mounted to any container; similarly, the one above it, which was provisioned by Docker, is not mounted or used currently. Also, we can inspect our volume, just like every other object, by using docker volume inspect followed by the volume name. And as you can see, we get the creation timestamp, the driver type, labels (which are null here), the mount point, the name of the volume, and the scope, which is local. Now let's try to remove one of the volumes which we have created. Type the command docker volume rm, followed by the volume name; here we are using the volume vol-ubuntu. As you can see, we get an error response from the Docker daemon. It says that this volume cannot be removed because it is in use, which means it has been mounted to a container; so if we remove the volume, the container and its performance will be affected. Let's get a list of containers to see which container is blocking our action of removing the volume. And as you can see, the tender_noyce container (an auto-generated name), which was built from the Ubuntu image just two minutes ago, has been mounted with the volume vol-ubuntu. Although it is not mentioned here explicitly, you can guess it, since all the other containers have been up for more than an hour. Let's type the command docker container rm followed by its name, and tender_noyce is removed. Now let's rerun the command docker volume rm vol-ubuntu. This time we don't see any error, and the volume should have been removed. Let's verify it by listing the volumes again. And yes, vol-ubuntu is no longer visible.
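The volume lifecycle from this demo, condensed into a sketch (the blocking container's name is whatever auto-generated name docker ps shows):

    docker volume create vol-busybox
    docker run -itd -v vol-ubuntu:/tmp ubuntu    # creates vol-ubuntu implicitly

    docker volume ls
    docker volume ls --filter dangling=true   # volumes not mounted to any container
    docker volume inspect vol-busybox

    docker volume rm vol-ubuntu               # fails: the volume is in use
    docker container rm -f <container-name>   # remove the blocking container first
    docker volume rm vol-ubuntu               # now succeeds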
40. Demo: When Containers meet Volumes

In this demo, we're going to demonstrate the use of volumes which we have discussed in the theory. Let's start by creating the volume which we had deleted in the last demo, which is vol-ubuntu. We'll do it by running a container from the Ubuntu image called cont-ubuntu. Let's see if both the volume and the container are available again. To remind you, we can always check the container using the docker container inspect command and find the information about the volume by formatting its output. As you can see, the container called cont-ubuntu has the volume vol-ubuntu attached to it. Now let's exec into the container and run a bash command on it. You can notice that we are not executing it as a daemon container, which means that once this command succeeds, we will jump right into the terminal of our container. Right now, this container is in its default state, which means that even if we delete it and spin it up again, nothing will change. So let's make a few changes to it, which will be reflected in its read-write, topmost layer; if we then delete the container, the changes we have made would be lost. The action can be pretty simple here; we don't need to do something heavy. Even a simple act of just updating the OS can create enough changes to be recognized. So let's update this Ubuntu by typing the apt-get upgrade command. Once it is updated, let's change our working directory to /var/log. As you may have guessed, this is the directory where Ubuntu keeps its logs. Let's list out the available files; we have a lot of log files here. The purpose of doing so is to make sure that once we stop the container, we should be able to see the same files as a backup on our host machine. And the reason for that is that when we created this container, we mounted this directory to our host using the volume vol-ubuntu. Let's exit the process and stop the container. Now let's get root privileges on our host machine. As you can see, we are in the same working directory, just with root privileges. Now, as we have seen in the theory section on volumes, Docker stores the backup of volume data under the /var/lib/docker/volumes directory, so let's navigate into it and list the contents of this directory. As you can see, we have directories for all of the volumes created by the local volume driver. Now let's navigate into vol-ubuntu to see if the changes in the log files are reflected. Once we are in the vol-ubuntu directory, let's see its contents; what we have is a _data directory. Once we navigate into that and list its contents, what we see is a long list of log files, which means that the mounting of the volume with the container was successful. So this is how we mount a volume to a container and create a backup of its data to the host using the local volume driver.
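Condensed, the demo looks like this; the /var/log mount point is inferred from where the narration navigates, so treat it as an assumption:

    # run a container with vol-ubuntu mounted at /var/log
    docker run -itd --name cont-ubuntu -v vol-ubuntu:/var/log ubuntu
    docker container inspect --format '{{json .Mounts}}' cont-ubuntu

    # make some changes inside, then stop the container
    docker exec -it cont-ubuntu bash
    apt-get update && apt-get upgrade -y
    ls /var/log
    exit
    docker stop cont-ubuntu

    # the same files persist on the host under the volume's _data directory
    sudo ls /var/lib/docker/volumes/vol-ubuntu/_data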
41. Demo: Working with Bind Mounts

In this demo, we will test bind mounts. Let's create a directory called bind-data in our Docker host's home directory. Now run a container called bind-ubuntu from the ubuntu:latest image and bind its /tmp, or temp, directory to the newly created bind-data directory using a bind mount. As usual, let's see if the container is running. Yes, it is. Now it is time to inspect the bind mount information, and we have the mount type, which is bind, along with the source and destination paths, which are just as we had provided them. Furthermore, we have the read-write permission set to true, which means changes in the files will reflect on both sides. It is probably the least secure way to mount container data to persistent storage, but for now it works. Lastly, we have bind propagation. It is an interesting aspect. Bind propagation is a policy which determines the bilateral access to the directories created within the mount point's source and destination. In other words, it decides whether sub-mounts created within the mount will be associated with the mount or not. rprivate is the default value, which means that new mounts created within either the source or the destination will not be reflected on the other side. Let's exec into the bind-ubuntu container with the bash command and create a file called foo.txt. We are creating it within the container's /tmp directory, which is the mount destination. Once we are done, let's exit the container. Now let's access the source of the mount point, which is within the home directory of the Docker host. We can see the bind-data directory reflecting here; let's open it, and there we go: foo.txt is present. Now let's try making changes the other way around. We have seen the destination's update reflecting on the source; now let's update the source to see if the destination reflects the changes as well. Mind well that our container is shut down at the moment, and we are creating a new file called hello.txt. Let's go back to the terminal and exec into the container again so that we can navigate to its /tmp, or temp, directory. Hit ls to see the list of files, and there we go: we had stopped the container with one file, but now it has two of them. Our bind mount is working successfully.

42. Demo: Hosting Containerized 2048 game!

We are going to host the containerized official open-source 2048 game on our Docker host, live. To do so, the first step is to get the files. We will clone this Git repo into our home directory; if you don't have Git installed, please go through the previous article. Once the repo is cloned, let's navigate into it and get the list of files. We have a bunch of files, including index.html, which we will be using soon enough. Now run a container called 2048 from the nginx:latest image and use a bind mount to mount our cloned 2048 directory to the html directory of the NGINX image. In other words, we are replacing the index.html file and providing the necessary support for the new index.html. As always, we are exposing the container's port 80 to the host's port 8080. The container is up and running. Now let's open our browser and navigate to localhost, port 8080. There we go: we have our favorite 2048 in our web browser, and that too containerized. Let's see if it works properly. It does, and it was an awesome experience. Go ahead, try it yourself!
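The 2048 demo doubles as a practical bind-mount example. A minimal sketch, with the repository URL left as a placeholder since it is only shown on screen in the lesson (/usr/share/nginx/html is the html directory of the official NGINX image):

    # clone the game files into the home directory
    git clone <2048-repo-url> ~/2048
    cd ~/2048 && ls    # index.html and friends

    # serve them with NGINX via a bind mount, on host port 8080
    docker run -itd --name 2048 \
      -v ~/2048:/usr/share/nginx/html \
      -p 8080:80 nginx:latest

    # then browse to http://localhost:8080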
43. Introduction to Docker Compose

Till now we have been studying the objects of Docker Engine, but as we had mentioned earlier, the Docker ecosystem has more than one major component. Another such component is Docker Compose. Compose is a tool for defining and running complex applications with Docker. In the case of working simply with Docker Engine, we need multiple Dockerfiles for the multiple parts or containers of a full-fledged application; for example, we may have to create separate files for the front end, the back end, and other containerized blocks, which can be daunting to manage. With Compose, you can define a multi-container application in a single file, then spin up your application with a single command which does everything that needs to be done to get the app running. You can define and integrate multiple Docker objects, such as containers, networks, services, etcetera, in a single file as blocks, and Compose will translate them to Docker Engine for you. In the next lectures, we will have hands-on experience with Docker Compose.

44. Demo: Installing Docker Compose on Linux

As the title of this demo suggests, we're going to install Docker Compose. We will do so by fetching the binary of Docker Compose from its official GitHub release, and we will store this binary as docker-compose under /usr/local/bin on our host machine. We'll do it with the curl utility. Once the download is complete, we'll make the binary executable, and the installation process will be complete. Let's see if the installation is successful by running the docker-compose version command. Well, the installation is successful, and Docker Compose version 1.22.0 is currently installed on our host. This is the latest version at the time this course is being created.
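For reference, the GitHub-release install for that version looks like this (a sketch of the standard documented method, matching the 1.22.0 release used in the course):

    # download the docker-compose binary from its GitHub release
    sudo curl -L \
      "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" \
      -o /usr/local/bin/docker-compose

    # make it executable and verify
    sudo chmod +x /usr/local/bin/docker-compose
    docker-compose version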
45. Demo: Structure of Docker Compose file

To work with Compose files, just like in the second module, we are again shifting back to commands and files together. Now, just to make sure, let's see what our present working directory is; as you can see, it is cc_docker. Just to remind you again, cc_docker has eight directories in total, and each one of them stands for a separate module. Currently we're working on the sixth directory, so let's navigate there. And as you can see, there is a file called docker-compose.yml. We'll open this file. As we have studied in the theory portion, the Compose file, or the docker-compose file, is a YAML file which defines multiple objects like services, networks, and volumes. It is important to know that the default path for the Compose file is always the present directory. Now, before we dig deeper into the docker-compose file itself, it is important to know a few bits and pieces about YAML files in general, where YAML stands for "YAML Ain't Markup Language". It has three basic data types: one is scalars, like strings and numbers; second is sequences, which are arrays or lists; and third is mappings, which are hashes or dictionaries and can be represented using key-value pairs. The nesting of objects in a YAML file is determined by indentation. You can find more information about YAML files in the link below.

Now, since we have got that covered, let's dig deeper into this docker-compose file. First of all, let's mention the version of the Compose syntax that we're using, which is 3.3 in this case. Next, we have services. services is the parent object for the containers that we are going to create; if we are going to create a multi-container application, we're supposed to use services. Let's create our first service, called db; it stands for database. Now, just like we have been creating containers using commands, here too we need to mention a few parameters in terms of key-value pairs. First of all, let's mention the image. We're using the MySQL 5.7 version, so we will write image as the key and mysql:5.7 as the value. Then we have container_name, which is again a key, and mysql_database is the value here. The volumes key acts as the parent key, and the volume name and mount path act as the children. Notice the indentation between all the fields: the parent field is services, then we have further indentation for the services that we create, db (or database) in this case. Let's go ahead and mention the restart policy; we'll make the restart policy always, so that we don't have to worry about the container being shut down. environment stands for environment variables; just like in a Dockerfile, here too you can provide environment variables as key-value pairs by indenting them a bit further. We're providing MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD for the WordPress instance that will be created in the next service. Here MYSQL_DATABASE, which is going to be called wordpress, will be used as the name of the MySQL instance; its root password will be word@press; and MYSQL_USER and MYSQL_PASSWORD, the latter two keys, are used to grant WordPress access to the MySQL instance.

Next, let's create another service in the same file, called wordpress. Now look at the first key-value pair, or look at the first field: it says depends_on. It creates an interdependency relationship between containers, which means that the db container needs to be created first, and wordpress will follow it later on. It is useful for creating stateful applications like this one; here, the wordpress service depends on the db service. Once that is clear, let us mention all the necessary fields for the WordPress container. We're going to use the wordpress image and will name the container wd_frontend. We're going to use the volume called wordpress_files, and we're mounting the /var/www/html directory to this volume. We are also mapping port 8000 to 80, and we're mentioning the restart policy as always, just like in the previous service. Here, too, we're using environment variables: the database host is db:3306, the WordPress DB user is wordpress, and the password is abc@123. You can use any username or password you like, but for learning purposes this will do. Finally, we'll mention objects which are outside the boundaries of a service, or which are not children of the services field. Such objects are volumes and networks. We haven't created any user-defined network here, and neither have we used any, so we don't need to declare them. But we definitely have used user-defined volumes, so we need to declare them here using the volumes key, and the values will be wordpress_files and db_data. A quick revision of what we have done with this docker-compose file: we have used two key fields, services and volumes, and have declared the volumes which are used in the services. In the services field, we have created two services, database and WordPress, and we have mentioned the container fields for both services, which include the container name, container image, environment variables, and volume mount information. In the next demo, we'll execute this Compose file and see how the application works.
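Putting that walkthrough together, the file should look roughly like this; it is a reconstruction from the narration, so treat the exact passwords and mount paths as illustrative:

    version: '3.3'

    services:
      db:
        image: mysql:5.7
        container_name: mysql_database
        volumes:
          - db_data:/var/lib/mysql
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: word@press
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: abc@123

      wordpress:
        depends_on:
          - db
        image: wordpress
        container_name: wd_frontend
        volumes:
          - wordpress_files:/var/www/html
        ports:
          - "8000:80"
        restart: always
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: abc@123

    volumes:
      wordpress_files:
      db_data: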
46. Demo: Wordpress on Compose

In this demo, we will execute the docker-compose file which we created in the previous demo. Now, if you are in the present working directory, and if your directory contains only one docker-compose.yml file, all you need to write is docker-compose up, followed by the -d tag. Of course, the -d tag is optional. As you can see, it is creating objects one by one. And if you notice, even though we didn't provide any network information in our previous demo, first of all it is creating a default network with the default network driver; it will be a bridge network. Then it is creating the volumes wordpress_files and db_data with the default driver, so their scopes will be local. Then it is creating the services. If you notice, the db service is created before the wordpress service, because wordpress is dependent on db. Now let's get a list of running containers to see if our services have created both the containers. And as you can see, mysql_database and wd_frontend, both of the containers, are up and running for more than 30 seconds. If you look further, in the case of the wd_frontend container, even the port mapping information is available, where port 8000 is mapped to port 80. You may wonder how this happened, since we did not provide any information regarding any network. If you remember, when we used the docker-compose up command, Compose first of all created a default network. This network was created to make sure all the network requirements of the preceding services would be fulfilled by it in terms of a bridge network, which means that both of these containers are connected to the same default bridge network, so they can talk to the outside world and they can talk to each other.

Now let's go to our web browser and see what is being hosted on our localhost. As we can see, the localhost is hosting the default page of a WordPress installation, which means that the WordPress installation and hosting were successful. Now let's play a bit more with this WordPress and see what we can do with it. Well, look at that: we have added a lot of content to a dummy post, and now it says that the post has been published. If we click on the View Post button, we should be able to see how our post looks, so let's do that. The post looks neat, tidy, and well structured. It means that the WordPress installation was not only successful; it is working just smoothly. Now let's work with MySQL. This may not look as exciting and riveting as the WordPress webpage, but we're back to our good old terminal. Let's again get a list of running containers. We have already worked with wd_frontend, so now it's time to work with the mysql_database container. Let's run the docker exec -it command and run a bash command on it. We are in the container with root privileges, so let's list out the directories. Let's navigate to the /var/lib/mysql directory to see its contents further. And as you can see, the information about the WordPress user has already been added to this container, which means that the linking of these containers was successful and the information was exchanged successfully as well. Let's run another instance of a MySQL container, but this time as a client. As you can see, we're linking this container with our previous mysql_database container, and we're also providing information about the communication port and the root user credentials; it may have connected to the default bridge network. As you can see, the client side of MySQL is now active, and we can see what is being hosted on the MySQL database server. When we run the query show databases;, apart from the system-provided or default databases like information_schema, mysql, performance_schema, or sys itself, we also have a fifth database, wordpress, which has been derived from the service of the WordPress front end. To go further into this, let's use the query use wordpress; so that we can dig deeper into that database. Now our database has changed. Let's take a look at the tables inside the wordpress database: type show tables; with the semicolon and hit Enter. And here we are: all the required tables for a successful WordPress instance. Although we didn't need to doubt whether this was working properly or not, because WordPress was already established and working so smoothly, this gives us an even stronger belief in and understanding of how linked services work with Docker Compose.
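A recap of the commands this demo runs, as a sketch (the exact flags for the throwaway MySQL client container are only shown on screen, so they are omitted here):

    # bring the application up and check the containers
    docker-compose up -d
    docker ps        # mysql_database and wd_frontend, port 8000->80

    # inspect the database container from the inside
    docker exec -it mysql_database bash
    ls /var/lib/mysql

    # from a MySQL client session against that server:
    show databases;
    use wordpress;
    show tables;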
47. Demo: Introduction to Docker Compose CLI

Now that we're done with the docker-compose YAML file and its execution, let's switch to the Docker Compose command line. Our first command in the series of Docker Compose commands is docker-compose config. This command is used to view the Compose YAML file on the terminal screen. As you can see, it provides all of the information about both the services and the volumes which we had mentioned in the previous YAML file. We can also extract specific information from the YAML file, like the services. The next command is docker-compose images. This command is used to list out all the images used to create containers for the services in the Compose file. As you can see, both the images are available here, which were used in the previous services of the docker-compose YAML file. Our next command is docker-compose logs. As you might have guessed, this command is used to fetch the log output from the services. Since we have a lot of logs, let's narrow them down a bit using docker-compose logs --tail=10. The tail flag allows the last 10 log lines of both the services to be printed on the stdout, or terminal. As you can see, we have the last 10 logs of both the services, or containers: MySQL and WordPress. Just like docker ps, we have docker-compose ps, where we can see both the containers running, along with other information such as the state, which is Up, the port mapping information, and the entrypoint commands. Our next command is docker-compose top, which is used to display all the running processes inside all of the containers. Which means that in both the containers, mysql_database and the WordPress front end, these are the processes which are running; each process has an individual process ID and a parent process ID. The structure of the processes and the parent-child relationships depend on the base image used in the creation of these images. And finally, we have docker-compose down. You can consider it a cleanup command, or the contrary command to docker-compose up: when we hit Enter, it stops both of the services, removes the containers, and removes additional resources like networks. In the next module, we will have a look at probably the most exhaustive feature of Docker, which is Docker Swarm.
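The whole CLI tour in one sketch:

    docker-compose config              # render and validate the YAML file
    docker-compose config --services   # just the service names
    docker-compose images              # images used by the services
    docker-compose logs --tail=10      # last 10 log lines per service
    docker-compose ps                  # containers, state, ports
    docker-compose top                 # processes inside each container
    docker-compose down                # stop and remove containers and the network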
48. Introduction to Container Orchestration and Docker Swarm

Till now we have been revolving around containers on a single host. A single host would generally mean one machine or one VM. They definitely have limited resources, and that is totally fine as long as your purpose is to solve something not so resource-heavy, like a static landing page or a blog; one person would be more than sufficient to manage it as well. But that's not the only application for which we use containers. There are giants like Google and PayPal who have millions of users a day. In their case, the number of containers would be staggeringly high, and they all may have to communicate in any topology at any given point in time. In fact, even if we don't focus on such large applications, a dynamic website keeping track of visitors and collecting data from their actions would also need way more containers than a usual blog. Let's say we did manage to deploy all of these containers on the same host somehow; we might run out of resources at any time, and due to that, the performance may be affected severely. Plus, if the host goes down, our site is doomed for sure. What should we do, then? Well, a simple solution would be to deploy them on more than one host and get them managed by more than one DevOps engineer. Sounds fancy, but they would all be eternally scattered, and to make sure they remain in sync, we may have to run another set of microservices in the back end. Plus, hiring more people for doing the same task would also be less economic, and none of the individuals would get the opportunities and growth they deserve. So what to do, then? Well, it seems like we need someone who can make all of these hosts collaborate and allow us to manage them simultaneously from a single instance; kind of like a cluster. In fact, exactly like a cluster of Docker hosts. This way our containers will be in sync, the performance won't be reduced due to resource scarcity, they can be managed from a single endpoint, and we can even think of replicas and backups of our containers for the cases where one or some of our hosts may go down. And life will be happy. But who is that someone? A container orchestrator is a tool used to provision, schedule, and manage containers at large scale over one or more clusters of multiple hosts. As we have mentioned before, while the Docker ecosystem has many offerings, some of them being less significant than the others, it has three major tools which should be learned by every container enthusiast. We have already seen Docker Engine and Docker Compose. The next stop on our journey of learning containers is the orchestrator developed and provided by Docker, called Docker Swarm.

The idea and implementation are pretty simple. Here we take a set of Docker hosts and connect them using swarm mode. One of these hosts manually initializes the cluster and becomes the manager of the cluster. The manager provides a key which can be used by other nodes to join the cluster; once they join the manager, they become worker nodes. The analogy is pretty self-explanatory: here, we as users communicate with the manager, and the manager communicates with the workers, quite like the management hierarchy of an industry, actually. Just like with Docker Compose, we demand our actions in the form of a service, which the manager translates into smaller tasks and provides to the workers to get handled. To do all of this, the manager is equipped with a set of useful tools, such as: the HTTP API endpoint, which makes it capable of serving our service requests and creating objects out of those services; the orchestrator, which passes tasks translated from services to workers; the allocator, which allocates internal cluster IPs to the workers and the manager itself; the dispatcher, which decides which node will serve which task and gives this information to the orchestrator; and finally, the scheduler. The tasks provided by the orchestrator are idle; they don't run as soon as they get allocated. The scheduler signals workers to run the tasks which they have received, and so it also decides which task will run first and which won't. As for the workers, they're pretty simple compared to the manager. They have two key components in total: the worker, which connects to the dispatcher of the master to check if it has any task to receive from the orchestrator, and the executor, which literally does what its name suggests.
It executes the tasks, which means it creates containers, volumes, and networks, and runs them. You may have noticed that Docker hasn't been the most creative firm as far as naming the tools is concerned, since Swarm is an orchestrator which has a component called orchestrator running on its manager, and a worker has a component called worker. We can't change these names, but we can make sure that we don't get confused by them. So in this course, whenever we refer to the orchestrator and workers, we will mean the orchestrating tool in general and the worker nodes; if we want to address the internal components instead, we will call them out specifically. Just like every other topic, we also have a bunch of hands-on demos for Swarm. But to understand how deploying containers on a cluster is different from deploying them on a single host, take this example. Let's say we have a service which needs three replicas of NGINX containers hosting the same content. Once we provide the service to the manager, it divides this into three smaller tasks and allocates one task to each worker. So all of the workers would be hosting one instance of an NGINX web server container. With that said, by now you might even be wondering about what would happen if Swarm faces failure; in other words, what if one or more nodes go down? You know the answer: let's get to the next lecture.

49. Can Swarm handle failure?

Can Swarm handle failure? The one-word answer is yes, it can; but the more interesting part is how. Let's take the previous example of the service running three replicas of NGINX, each hosted on one worker or the master. All our workers are healthy and running. What if one of the workers goes down? Let's say, in this case, worker 3 went down. If that happens, task 3 will be rescheduled on one of the other workers. Once worker 3 is back to its running state, task 3 might get moved back to it; or, if it's not causing any overload on worker 2, it may just stay there, and worker 3 might be ready to host other tasks when they arrive in the future. In a nutshell, if one of the nodes goes down, the other nodes can handle its load. If the master goes down, though, the workers perform a mutual election where one of the workers gets promoted, and the cluster starts working again. The next question would be: how many nodes can go down without affecting Swarm? Well, to make sure that the Swarm cluster functions properly, more than half of the nodes should be working. The minimum number of required working nodes for a happy Swarm cluster is equal to the number of total nodes divided by two, plus one, which again means more than half.
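To make that arithmetic concrete: with integer division, the minimum is floor(N/2) + 1. In a three-node cluster, that is 3/2 + 1 = 2 nodes, so one node can go down safely; in a five-node cluster, it is 5/2 + 1 = 3 nodes, so the cluster can tolerate two failures.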
50. Demo: VirtualBox installation

Let's start setting up a Docker Swarm cluster by installing a hypervisor on our host machine. If you're wondering what a hypervisor is, it is a piece of software which allows us to create virtual machines. First of all, here is the sources.list file, and as you can see, there are a lot of links already available; most of them are for updates regarding Ubuntu or other software. We have added the line; let's save the file. Now let's get the GPG key for our VirtualBox and run the sudo apt-get update command. As you can see, just beneath Sublime Text, we can see that VirtualBox has also been updated. Now that the application is added to the list of the APT package manager, let's install it: type sudo apt-get install, followed by the version of VirtualBox. Here, we're going to install VirtualBox 5.2. Once the process is complete, let's see if we can find VirtualBox in our list of software. And here we are: Oracle VirtualBox has been installed successfully. It is up and running.

51. Demo: Docker Machine Installation

Now let's install a tool called Docker Machine. It will set up multiple hosts for us, which will act as individual nodes of a swarm cluster. We will install Docker Machine from its official GitHub repo. First of all, we will curl it, and then we'll install it under the /usr/local/bin directory. Once the installation is complete, let's verify it by typing docker-machine version. Docker Machine has been installed successfully, with version 0.14.

52. Demo: Setting up the Swarm Cluster

Let's create our first node using the docker-machine create command. We're using virtualbox as the driver, and we're naming our node manager. While the node is being created, you can see that Docker Machine is using a custom OS called boot2docker, and it is using its ISO image to install it on a virtual machine. For your information, boot2docker is a minimal Linux OS customized for containers to run smoothly while being lightweight at the same time. Let's see if the node has been created: use the docker-machine ls command, and manager has been created. It is running Docker version 18.06, and it also has its dedicated IP, which is 192.168.99.100. Similarly, we can also create a couple more nodes, named worker1 and worker2. Once we're done with their creation, we can run docker-machine ls again to see if both are running perfectly. And here they are. Let's stop the manager node using the docker-machine stop manager command. When we list our nodes, manager exists, but it is stopped. We can start it again using the docker-machine start manager command. If we want to find specific information about a node, we can use docker-machine ip manager, which provides the IP of the manager node. Similarly, we can get the IPs of the worker1 and worker2 nodes. Just like every other object in the Docker ecosystem, we can use the inspect command with a Docker Machine node as well: use the docker-machine inspect command followed by the name of the node, which is manager here. As you can see, the inspect command provides a lot of information about the manager node, including its machine name, IP address, SSH user and port, SSH key path, and some other useful information. Finally, let's SSH into the manager node using the docker-machine ssh command followed by the name of the node, which again here is manager. We have navigated to the shell of the manager node.
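The node-provisioning commands from these demos, condensed into one sketch:

    # one manager and two workers, all on the virtualbox driver
    docker-machine create --driver virtualbox manager
    docker-machine create --driver virtualbox worker1
    docker-machine create --driver virtualbox worker2

    docker-machine ls                 # state, driver, Docker version, URL
    docker-machine stop manager
    docker-machine start manager
    docker-machine ip manager         # 192.168.99.100 in this course
    docker-machine inspect manager
    docker-machine ssh manager        # drop into the node's shell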
53. Demo: Initialising Swarm Cluster

In this demo, we have three terminals, one for each node. First of all, let's get a list of nodes with the docker-machine ls command. As you can see, we have manager, worker1, and worker2. Now let's SSH into manager, just like we did in the last demo. Since we want to make this manager node the manager, which is quite literally the name for its role, let's initialize our swarm using the docker swarm init command and advertise its IP address to the other nodes. Once we hit Enter, the swarm mode gets initialized, and the current node, which is the manager node, becomes the manager. Now, if we want to add workers to this manager node, we can run the docker swarm join command from the respective worker nodes, along with the token which is generated by this manager node. This token is a unique ID which can be used by other nodes to join our manager as a part of its cluster. In case we have lost this command or token, we can get it back by typing docker swarm join-token worker; but this command will only work if the manager has been initialized with swarm mode. We will use this docker swarm join command along with its token from both worker1 and worker2, to make sure that both of them join this cluster as workers while the current node remains the manager. As you can see, the command has worked successfully from the worker1 node, and it has joined the swarm cluster as a worker. Similarly, the command was successful on worker2 as well, and we got a similar confirmation.

54. Demo: Working with Swarm nodes | List and Inspect

Now that both the nodes have joined the cluster as workers, let's verify it using the docker node ls command. Take note that this is part of Docker Swarm's command line. Once we hit Enter, we get all three nodes along with their hostnames. All of them have their status as Ready and availability as Active. And if you notice, manager also has the manager status of Leader. This is applicable when we have a cluster with more than one manager, in which case one of the managers will act as the leader. There is no confusion here, since we only have one manager and two worker nodes, so our manager will be the leader by default. Now we can inspect our manager and worker nodes from the manager's shell itself. Let's type docker node inspect followed by self, along with the --pretty flag. We're mentioning self because the manager wants to inspect itself. And as you can see, what we get is the node ID, its hostname, joining timestamp, status, and some other information like the platform, resources, the engine version, which is the Docker Engine version (here it is 18.06.1-ce), and some security certificates. We can hit the command for worker1 and worker2 as well, and we get the respective information about both of them. As you can see, all of these three nodes have different IPs, but the rest of the things are pretty much the same. Of course, their roles are different, which will be explored in further demos.
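A sketch of the cluster bootstrap, using the manager IP from this course; the join token is unique per cluster, so it stays a placeholder:

    # on the manager node
    docker swarm init --advertise-addr 192.168.99.100

    # reprint the worker join command if the token is lost
    docker swarm join-token worker

    # on worker1 and worker2, paste the printed command, e.g.:
    docker swarm join --token <worker-token> 192.168.99.100:2377

    # back on the manager: list and inspect the nodes
    docker node ls
    docker node inspect self --pretty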
55. Demo: Creating a Service on Swarm

Let's use Docker Swarm for the reason it is designed: to have multiple replicas of a container, or to run services with multiple containers themselves. We'll create a service called web-server from the latest NGINX image and have three replicas of it. We have also mentioned the port mapping information with the -p flag. Once we hit Enter, you can see that our service has been divided into three tasks, and each task has been carried out individually. Once the tasks are complete, the service is verified. And once the service creation is complete, we can list it using the docker service ls command. First of all, we have the service ID; then we have the name of the service, which is the same as we had provided with the command: web-server. Then we have the mode of the service. It is replicated, which means the same image has been replicated more than once, and multiple instances, or multiple containers, are created from the same image; we have three replicas in particular. The image which has been used is the latest NGINX, and we also have the port mapping information for TCP. If we want to have a look at the containers running inside the service, the command is pretty simple: just write docker service ps, followed by the name of the service. Here we have three containers, and the naming convention is pretty simple: their names are web-server.1, web-server.2, and web-server.3. They're up and running for about the same time, and all of them share the common nginx:latest image. Just like we had done with Docker Compose, let's inspect our service. As we go further, along with the generic information, we also get some additional information, like the mode of the service, which is replicated, and details regarding all of the hosts, or all of the machines, where each and every container of the service is provisioned. Unlike docker service ps, if we regularly run the docker ps -a command on any of the nodes, we will get to know that each node is only running one container. That is because the service has been deployed across the cluster, which means that the load was divided evenly. Since we had three replicas, each of these containers was scheduled on an individual node: web-server.1 was scheduled on the manager node, web-server.2 was scheduled on the worker1 node, and web-server.3 was scheduled on the worker2 node. Just like a regular container, we can inspect this web-server.1 as well. Now, this means that all of these three nodes are running at least one instance of the NGINX web server, so all of them should be serving the NGINX default webpage on their respective IP addresses on their port 8080. Let's go to the browser and check this fact with our manager node. See that we're navigating to the IP of the manager, which is 192.168.99.100, and then mentioning the port 8080. It seems like a success. Now let's do the same with worker1 and worker2. This means that the service is running successfully, and Docker Swarm is hosting the NGINX web server on all of these three nodes.
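The service commands from this demo, as one sketch:

    # three NGINX replicas; port 8080 on every node routes to container port 80
    docker service create --name web-server --replicas 3 -p 8080:80 nginx:latest

    docker service ls                # ID, name, mode, replicas, image, ports
    docker service ps web-server    # web-server.1/.2/.3 and their nodes
    docker service inspect --pretty web-server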
56. Demo: Making a node leave your Swarm

Now that we have deployed our NGINX service across the swarm cluster successfully, let's think of some more innovative use cases. For example, what if I want to take down one of my nodes for maintenance? Or what if one of my nodes actually goes down? Here we will test it out. The safe way to make a node leave the cluster is to drain it. We can do it with the docker node update --availability command, followed by the action and the name of the node. Here the command will look like docker node update --availability drain worker2. And what we get is the name of the node as the confirmation that it has been drained. Still, we can verify it by typing docker node ls, and we can see that the status of the worker2 node is still Ready, but its availability is Drain, which means that the node is up, but no containers can be scheduled on it. When we drain the node, the container of the task scheduled on that node gets transferred, or rescheduled, to one of the other nodes. Let's verify it using docker service ps web-server. And as you can see, the web-server.3 container has been shifted from worker2 to manager, and it has been running for 42 seconds, which is about the time when worker2 was drained. On the other hand, if we use docker ps on worker2 now, which has been drained, we will see that the container has exited the node and is now in a quite dead state. Now let's try to remove this node from the cluster altogether. When we try to do that, we get an error from the Docker daemon. The reason behind it is that the node might be in the drained condition, but it is still up; Docker is still serving its API. So we need to make sure that it leaves the swarm cluster first; then it can be removed from the master's list. Let's use the docker swarm leave command from the worker2 node. Once we do so, we get a pretty clear note that the node has left the swarm. If we now run the remove command again on the manager node, we'll see that the worker2 node has been removed successfully. We can verify it by listing the nodes again, and what we will find is our cluster made of just two nodes: manager and worker1.

57. Demo: Scaling and updating with Swarm

In this demo, we will perform a few more orchestration-related tasks. If you remember clearly, our service web-server had three replicas of the latest NGINX image. Let's scale our service and increase its number of replicas to six. We can do it with the docker service scale command, followed by the name of the service and the number of replicas. Once Swarm has verified the scaling, we can verify it as well, using docker service ps followed by the service name. And as you can see, instead of three, we now have six containers running on the nginx:latest image. Three of them are scheduled on manager, and three of them are scheduled on worker1. All six are in the running state, and three of them seem to be quite new. As you might have expected, if we run docker ps -a on both manager and worker1, we'll see three containers running on each of them; worker1 has two new containers, and manager has one new container. Furthermore, we can even roll out some updates on all of these six containers. As you know, all of these containers are running on the nginx:latest image; we can change it to nginx:alpine. If you're wondering what the difference is: well, the latest version of NGINX is built on top of a Debian base image, whereas the Alpine version is based on top of the minimal Alpine Linux image. Let's use the docker service update command, followed by what kind of field we want to update; we want to update the image of the service. Once we hit Enter, all of these tasks get updated, one at a time. Once the update process is complete, we can verify it with the docker service inspect command, and let's make sure that the result of the inspect command is pretty-printed. As you can see, the service mode is still replicated, and the number of replicas is six. If we go to the container specifications, instead of showing nginx:latest, it shows nginx:alpine, which means all of the containers have been switched from the latest to the Alpine image. Finally, we can remove our service using the docker service rm command followed by the name of the service, and as a notification, we get the name of the service. Let's type docker ps -a, and as you can see, every container is being taken down one by one. If we do the same on the worker1 node, you will see that all of the containers are taken down and also removed. Let's wait for a while and use the same command on manager as well. Well, now even manager is empty. Finally, let's clean up our cluster by making sure that the worker node also leaves the cluster, just like we did with worker2. We'll make worker1 leave the cluster voluntarily using the docker swarm leave command. We're back to having one Docker host, which is manager.
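The node-maintenance and rollout commands from these two demos, condensed:

    # drain a node, watch its task get rescheduled, then remove it
    docker node update --availability drain worker2
    docker service ps web-server
    docker swarm leave              # run this on worker2 itself
    docker node rm worker2          # back on the manager

    # scale out, roll a new image across the tasks, then clean up
    docker service scale web-server=6
    docker service update --image nginx:alpine web-server
    docker service inspect --pretty web-server
    docker service rm web-server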
58. What about the more popular one?

Docker Swarm is pretty useful, but whenever we talk about container orchestration, one name dominates the conversation, which is Kubernetes. You might wonder: aren't we already done with orchestration? Well, not yet. Docker Swarm and Kubernetes both coexist in the market, and even Docker itself has acknowledged it, since it brought support for Kubernetes with its Enterprise Edition. Plus, as far as my knowledge goes, there is no such thing as managed Docker Swarm on any of the popular public cloud platforms, whereas managed Kubernetes is one of the salient features of Google Cloud. That's not all: Azure and AWS are catching up pretty fast, too. These are more than enough reasons to learn Kubernetes alongside Swarm, but we should know the advantages and challenges of both. Let's start with their nature. Swarm is a part of Docker's ecosystem, so all of its features act as an extension of Docker's own capabilities, whereas Kubernetes is an entirely different product, managed by the CNCF, which stands for Cloud Native Computing Foundation. Since Swarm belongs to the Docker ecosystem, you didn't face any trouble adapting to its terminologies or concepts, because most of them were in line with what you could already do with Docker; so Swarm is easier to set up and adopt. Nothing is ever too difficult once you get the hang of it, but setting up and adopting Kubernetes introduces more new concepts compared to Swarm, so you can definitely call it relatively difficult. Plus, Kubernetes introduces a whole new command line, whereas Swarm's command line is pretty similar to the Docker CLI itself. As far as the utilities go, Docker Swarm dives less deep into the field of orchestration, whereas Kubernetes provides you a lot more exhaustive orchestration functionalities. Monitoring Swarm can be tricky, since it either involves third-party tools or paid services by Docker Enterprise, whereas Kubernetes provides native support for logging and monitoring. Moreover, Docker Swarm doesn't only have fewer functionalities compared to Kubernetes; it also becomes difficult to manage after having more than 15 hosts or so, because you may not have sufficient control over scheduling certain containers on certain nodes, which can be mind-boggling to manage. Whereas in the case of Kubernetes, we have a lot more freedom and fault tolerance: the finer control allows us to group nodes as we want and abstain our containers from being scheduled on certain nodes. In fact, Kubernetes has shown promising performance even in the case of more than 1,000 nodes. Due to all of this, even though Docker Swarm has good community support and feature updates, Kubernetes has huge support and has turned into a complete DevOps buzzword, by all positive means. All in all, it means the bigger your application, the more likely you are to use Kubernetes rather than Swarm. Of course, not everyone targets an audience of millions and ever-scaling clusters; for them, Swarm might be enough. But for you as a learner, the journey must not end before learning the exciting aspects of Kubernetes.

59. Kubernetes: An origin Story

Before we learn Kubernetes, let's take a look at its popular origin story. Long ago, there was a little search engine called Google. It was initially developed by Mr. Page and Mr. Brin during their PhD studies, in a not-so-fancy work area. The infrastructure was minimal and the users were limited, but the idea was game-changing. So soon it turned into a fancy tech company with greater technical infrastructure and an increasing number of users. But that, too, was just the beginning. Google turned out to be one of the biggest tech giants, with billions, say it again, billions of users across the globe.
Google became a noun, googling became a hobby, and Google stock became one of the prime investments. All of this involved endless efforts by passionate engineers, and what turned out to be a forest of servers. Back then, there was no Docker, so Google engineers couldn't just go to Udemy and take up a course; they had to delve deep into the roots of computing history. Then they came up with the realization that Linux already had a solution called containers, which could be set up using namespaces and cgroups. Containers are an abstraction at the application layer which packages code and dependencies together. So they started using them. But they also needed someone who could orchestrate their containers on a large scale for them, and that someone was Kubernetes. This is how Kubernetes came into existence, and the rest is history.

60. Kubernetes: Architecture

From a bird's-eye view, the architecture of a Kubernetes cluster would look pretty simple. We have two types of instances, master and nodes; both serve different purposes in Kubernetes, just like manager and worker in Swarm. Let's have a deeper look inside master. Master acts as a controlling node, and while working with Kubernetes, we communicate with master for the most part. It runs a set of applications, which include kube-apiserver, which serves all of the REST requests provided by the user and obtains responses from other nodes; you can consider it a central serving unit, a front end of the cluster. Then we have kube-controller-manager, which serves as a parent or managing process for a number of controller processes. These controller processes manage controller objects like the ReplicaSet controller or the Deployment controller, which we will study soon enough. Next, we have kube-scheduler, which schedules our containers under a supervisory sandbox environment called a Pod. kube-scheduler also decides which of the nodes will be serving which set of containers. The API requests from kube-controller-manager and kube-scheduler are served by kube-apiserver. Finally, we have etcd, which is a distributed key-value data store. etcd stores the data obtained from all other components in key-value pairs. This may include our cluster configuration, the input desired state, the actual cluster state, event logs, object details, anything and everything. etcd only communicates with kube-apiserver, for security reasons. So, in a nutshell: kube-controller-manager controls objects, kube-scheduler schedules containers, and the API requests are served by kube-apiserver, which stores all of their data as key-value pairs in etcd and fetches the data from the same place as well. The simple yet robust architecture of master is one of the reasons for the thriving success of Kubernetes.

Now let's talk about nodes. They're pretty simple compared to master; they only run two components, to be precise. One is Mr. Talk-talk, and one is Mr. Do-do. kubelet is Mr. Do-do, as it performs the actions suggested by master components like kube-scheduler, kube-apiserver, or the controller manager. Master and nodes are virtually or physically different machines, which means kubelet acts as a supervisory process on the node to allocate resources and create containers or Pod processes. kube-proxy is Mr. Talk-talk. It manages the node's communication with other nodes, with master, and with the world outside the cluster. To be precise, it is kube-apiserver of master which talks to kube-proxy of the node. So kube-apiserver gets data from etcd.
It gets requests from the controller manager and the scheduler and passes them to the nodes, where kube-proxy receives them and passes them on to the kubelet, which in return provides the responses to these requests, which are again passed to the master via kube-proxy and stored in etcd. But if the cluster is hosted on a Kubernetes-supported cloud platform, kube-controller-manager talks to kube-proxy via the cloud VPC or other such relevant infrastructure, since it has a component called cloud-controller-manager.

Now let's focus on how we as users interact with Kubernetes. Users talk to the master via commands. Let's say we command the master to create an object. The master passes this instruction on as an API request. Once the kubelet performs this request, it returns the state of the node as a response, which the master stores in its etcd and passes to us. Objects can be of multiple types, such as workloads, config objects, connectivity objects, or storage objects, whereas states are of two types: desired state and current state. Kubernetes always keeps checking whether the desired state and current state match. If they don't match, Kubernetes tries its best to make sure that they do, and if they do match, it keeps on checking again and again to make sure this harmony stays intact. This endless loop is called the reconciliation loop, and it makes sure that our cluster is in the most desired state as much as possible. All in all, this is how the Kubernetes infrastructure functions. Next, we will go through the objects of Kubernetes and learn how to use them, and while we do so, you will get a broader sense and a deeper idea of how this infrastructure is used while creating and using objects.

61. Demo: Bootstrapping Kubernetes Cluster on Google Cloud Platform

Open your favorite web browser and go to this link: console.cloud.google.com. This is the link for the Google Cloud Platform dashboard, or GCP dashboard. But before we can go there, we need to sign in to our Google account. Enter your ID and password and hit Next. We get a pop-up which asks us to confirm the terms and services of GCP and also provide Google our residential details. I'm putting India; you can put your own country. And then we have a choice of whether we want any feature updates or survey emails from Google or not. Well, since I don't want to receive them, I will click No, click on Agree and Continue, and the prompt will be gone. What you see in front of your screen is the getting-started view of the GCP dashboard. We have a bunch of the most used products, like Compute Engine, which we will be using to create virtual machines; Cloud Storage, which is Google's affordable block storage; and Cloud SQL, which is managed MySQL or PostgreSQL from Google. But before we can use any of these, we need to set up something called billing, which means we need to initiate our free trial of a Google Cloud account, by doing which we will receive $300 of credit, which can be spent within a year. Click on Try for Free. It seems like it is a two-step payment setup process. Google is explicitly stating that we will get the $300 credit for free for starting the trial account, and even once the credits are finished, we won't be charged unless we agree to be billed. Step one is pretty much similar to what we have done previously: in the prompt which appears, we need to agree to the terms and services of Google, and we need to tell them whether we want emails or not, and click on Agree and Continue.
Step two involves personal information like account type, which can be either business or individual; tax information, which can be registered or unregistered individual; billing name; billing address; etcetera. Once you have filled in all of these details and you scroll down, we get to the payment methods. Currently, the available option is monthly automatic payments, and to enable them we need to provide credit or debit card details. If you live in a country like India, where electronic transactions are protected by one-time passwords or 3-D PINs, your debit card will not be accepted and you will have to use a credit card. The bottom line is, whichever card you use should have the auto-payments feature. Once you enter your details, hit the Start My Free Trial button, and the next screen says that Google is creating a project for us, and this may take a few moments. It seems like our free trial is set up now. We have $300 of credit on our Google Cloud Platform billing account, and we can get started using GCP services.

So what do we want to try first? Well, I want to try computing and applications, so let's click on that. These services are the provisions from Google that fall under the category of computing services. Now, if you take a look at the left-hand side pane, we have multiple options here. Currently we are on the Getting Started tab, but the other tabs are Billing; Marketplace; APIs and Services; Support, which provides consumer- and business-level support; and IAM and Admin, which is useful for setting up permissions and roles, security, etcetera. Let's click on Billing. This is the overview page of our billing account, and it says that we have $300, about 22,183 rupees, remaining in our credit. Also, the tenure remaining for the credit is 365 days, or a year, because we have just started using GCP. If you see below it, we have a project linked to this billing account, which is My First Project. In the case of Google Cloud Platform, resources, services, provisions, etcetera are managed under projects, which means that one GCP account can have multiple projects for multiple purposes. We have a default created project, which is called My First Project. GCP has provided it to us, and if you remember, we had previously seen a screen which said "creating your first project". Well, it was this project. We will be using this project throughout this course. Go to the upper pane of our dashboard and click on the drop-down menu of projects, which appears right after "Google Cloud Platform". Now let's click on the Home button and select My First Project as our project. Once we have selected the project, the view of our dashboard changes, and instead of the getting-started view, we have our project-specific view, where the information is divided into multiple cards. The first card is Project Info, which gives information about the project name, project ID, and project number, which are unique across the globe. Then we have the Resources card. Right now we don't have any resources provisioned, so it says this project has no resources. And we have the APIs card. The more we use GCP APIs, the more fluctuation we will see in the graph on this card. Currently we haven't used much of the APIs, so the graph is pretty much flat, apart from one spike, which might have been generated when we activated our free trial. Then we have the Google Cloud Platform services status, and it says all services are normal. Next up is the Error card.
We have no signs of any errors, which makes sense because we haven't used any resources in the first place. Then we have some miscellaneous cards like News, Documentation, Getting Started, etcetera. Let's click on the navigation menu icon of the GCP dashboard, which is also called the hamburger icon, or three horizontal lines, in the top-left corner of our dashboard view. Go to the Compute Engine section and click on VM Instances. Since we don't have any VM instances created whatsoever, we're getting this response. We have three options: first, to take a quick-start tour; second, to import some VMs; or third, to create a VM, or virtual machine, by ourselves. Well, let's create a virtual machine. Now we're guided towards the virtual machine creation page, where Google has filled in default data for a standard virtual machine, but we'll modify it a bit. Let's change our instance name to master. Then we have two location-related choices, which include region and zone. A region indicates the overall place, whereas a zone indicates a particular data center within that region. Let's change our region to asia-south1, which corresponds to Mumbai, and accordingly we're choosing asia-south1-c. You can choose your closest region and zone accordingly. In this course, the choice of region and zone will not matter that much, but if you are making some performance-intensive applications where you might require a certain type of resource, like GPUs, you may have to choose regions and zones which provide those resources. Having said that, next up we have machine type. The default value for this is one vCPU, which means one virtual CPU, and 3.75 GB of memory. It means that our virtual machine will have one virtual core of CPU assigned to it, along with 3.75 GB of RAM. Let's increase both of these provisions to two vCPUs and 7.5 GB of memory. Next up, we have an optional choice to make about whether we want to deploy a container image to this VM instance or not. Well, we don't want to deploy a container image, because we will be doing all of those things by ourselves. Next is the boot disk, which determines which operating system will be used on this VM. The default is Debian Linux 9, but we will change it to Ubuntu 16.04. We can also choose between an SSD persistent disk and a standard persistent disk, and both of their limits are 65,536 gigabytes. We will stick to a standard persistent disk but increase the size to 20 GB. Let's hit Select. We will keep our service account as the Compute Engine default service account, and we will allow full access to all Cloud APIs. Although we won't be using most of the APIs, having access just avoids potential errors. Finally, we have firewall settings, where we're going to allow all HTTP and HTTPS traffic. Let's hit the Create button. We're redirected to the VM Instances page, and our master instance has been created. If we click on it, we can see the information that we provided earlier. On top of that, you get another bunch of information, such as the CPU platform, which is Intel Skylake, the creation timestamp, network interface details, firewall details, boot disk preferences, etcetera. Let's go back to the VM Instances page. If we click on the checkbox right beside the master instance, we see a few buttons light up. They respectively allow us to stop, restart, or delete the VM instance, but we won't be doing any of that, because we want to keep this instance and work on it.
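By the way, if you prefer the command line over the web console, the same instance could also be created with the gcloud CLI. This is just an optional sketch, not what we do in the demo; the flag values mirror the choices we made above, and the http-server and https-server tags are what the console's HTTP/HTTPS firewall checkboxes translate to.

    gcloud compute instances create master \
        --zone=asia-south1-c \
        --machine-type=n1-standard-2 \
        --image-family=ubuntu-1604-lts \
        --image-project=ubuntu-os-cloud \
        --boot-disk-size=20GB \
        --tags=http-server,https-server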
In fact, we'll create two more such VM instances, and we will name them node-1 and node-2. It is recommended that you create all of these instances in the same region. There we are: our two other instances are created. You might be wondering, the instances are created, meaning the VMs are ready, but how do we use them? Well, the simplest option to connect to one would be to SSH into it. And the moment I said SSH, I know your sight got stuck on the SSH button. But before we click on that, take a look at the internal and external IPs of all of our VMs. Let's connect. We have multiple options, but we will choose the first one, which is to open in a browser window. There we are. We are connecting to the master VM instance of GCP Compute Engine. The connection seems successful. Let's clear the screen.

Now we want to bootstrap a Kubernetes cluster on these instances, so let's start by getting root privileges: run the command sudo su. Next, let's run a standard update using apt-get update. Once the update is finished, let's install Docker using apt-get install docker.io, and provide the flag -y for a default yes. Let's check whether Docker is installed properly. Run docker version, and it says that we are running Docker 17.03 Community Edition, which is perfectly fine, because that's what we wanted to run. If you're wondering why we are running Docker: well, Kubernetes is just an orchestrator; it still needs a containerization platform, so we are installing Docker. Oops, looks like I closed the window. Well, let's open it again. Now let's install some basic dependencies of Kubernetes, like apt-transport-https and curl. The installation seems successful. Now let's get the GPG, or GNU Privacy Guard, key for Kubernetes and add it to the system. We get the response OK, which means that the key was added successfully. Next, we're adding a line starting with deb, which stands for Debian, followed by the link, which is http://apt.kubernetes.io/ kubernetes-xenial main, to our sources list. We're doing this so that our APT package manager can access the Kubernetes packages whenever it is performing updates. Let's verify that the step was successful: run apt-get update again, and as you can see, our last Get entry includes an update received from the Kubernetes URL. Now let's install all of the components of Kubernetes, which include kubelet, kubeadm, and kubectl. Run apt-get install kubelet kubeadm kubectl, accompanied by the flag -y. Looks like the installation is complete. Let's exit our session and log in again. Run the sysctl command and set net.bridge.bridge-nf-call-iptables to 1. This is a prerequisite for installing the pod network which we will be using while setting up the Kubernetes cluster. Now let's initialize our Kubernetes cluster using the kubeadm init command. It seems like the cluster initialization is in progress, and once the preflight checks are complete, we're getting a lot of certificates generated. Once the initialization is complete, we are provided a few suggestions. First of all, we have a confirmation that our Kubernetes master has been initialized successfully. Next up, we have a bunch of commands which should be used if we want to use this cluster as a regular user and not just the root user. I recommend you copy all three of these commands to a safe place, because we will be using them later on. Next is a suggested command to deploy a pod network on the cluster.
But we don't need to copy that. And finally, we have a kubeadm join command, followed by the token generated by our master and a 64-character certificate hash, which we must copy and save somewhere, because this command is extremely crucial and will be used by all the other nodes to join our master. Once you have copied all of this, let's clear the terminal. Before we proceed any further, make sure you don't include any unnecessary whitespace when you copy the commands. Now let's run the kubectl apply command, followed by the URL of our pod network configuration. We're using Weave Net, so the URL starts with cloud.weave.works, but you can use any pod network you like, such as Flannel, Calico, etcetera, and the details of the other pod networks can be found in the Kubernetes documentation. It seems like our pod network is set up. Let's check whether our cluster is working: run kubectl get pods, followed by the flag --all-namespaces. You don't need to dig too deep into this command, because we will be going through the whole Kubernetes command line step by step. All you have to notice are the familiar names, such as etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, etcetera. All of these are components of the Kubernetes architecture which we have studied in theory, and now they're deployed on your Google Cloud VM instance. Of course, these are the components of a Kubernetes master; on the node instances we will have different components. Now let's grant regular-user access to our master. Run, one by one, the three commands which we had copied earlier, and to see whether Kubernetes is working for the regular user or not, let's run the same kubectl get pods command again. It seems like all of the pods are up and running, and the Kubernetes master is accessible from the regular user as well.

Now let's get back to our GCP VMs page and SSH into node-1. Let's get root user access again, and now run the kubeadm join command. If you remember, we had run kubeadm init on the master, and we had received a token from it. Now we're using kubeadm join from the node instances to join the master as members of the cluster. The token which we are providing is the same one that we had received when the master was initialized. Hit Enter, and there we go. Once the joining process is complete, we get a suggestion that we should run kubectl get nodes on the master to see if the node has joined the cluster. Well, we'll do it, but after making node-2 join the cluster. Back to the GCP VMs page; let's SSH into node-2. Nothing too complicated: exactly the same steps which we had performed on node-1. Get root user access and run the kubeadm join command with the same token. Once that is done, let's follow the suggestion and head back to the master. We have already set up non-root kubectl access on the master, so we don't need to run sudo su again. Simply run kubectl get nodes, and there we go: we have all three nodes listed. But if you notice, node-1 is not Ready yet. Nothing much to worry about; let's give it some time and run the command again. Bingo! All of the nodes have joined the cluster successfully and are ready to work on. Now that our Kubernetes cluster is properly set up, we are ready to explore different aspects of Kubernetes, like workloads, the kubectl command line, etcetera. The sketch below recaps the whole bootstrap procedure in one place. See you in the next lecture.
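Here is that condensed sketch. The repository line and the Weave Net URL are the ones used at the time of recording, so check the Kubernetes and Weave documentation for current versions; the join token and hash are placeholders for the values printed by your own kubeadm init.

    # On the master, as root:
    apt-get update && apt-get install -y docker.io
    apt-get install -y apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list
    apt-get update && apt-get install -y kubelet kubeadm kubectl
    sysctl net.bridge.bridge-nf-call-iptables=1
    kubeadm init

    # As the regular user, the three commands suggested by kubeadm init:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Deploy the Weave Net pod network and verify the control plane:
    kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
    kubectl get pods --all-namespaces

    # On node-1 and node-2, as root, paste the join command printed by kubeadm init:
    kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>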
62. What are Pods?

Pods. Till now, I have been avoiding using this term while explaining the architecture as much as possible. But trust me, Kubernetes is all about Pods. So what are Pods? If we keep the Docker setup architecture in mind, where containers run on top of Docker, this is where Kubernetes stands: right between Docker and the containers. But Kubernetes doesn't host containers as they are; it encapsulates them in an environment, or object, called a Pod. A Pod can have one or more containers inside it, but mostly you will find one container per Pod. Pods fall under the category of workload objects. Here are a few things about Pods which you should remember. They're the smallest unit of orchestration in Kubernetes, and everything revolves around them. The only way to interact with containers in Kubernetes is through Pods, so they're quite essential. As we mentioned earlier, each Pod runs at least one container. It can have more than one, but one is a must, and one is also the standard practice. Now, this is what makes Pods special: Kubernetes is designed with the fact in mind that containers die. Their failure is natural, and so the restart policy of containers hosted by Pods is set to Always by default. Just like Swarm performs orchestration on containers, the higher-level objects of Kubernetes perform orchestration on Pods. Now, since we know a little bit about Pods, let's get to work with them.

63. How to operate Kubernetes? Imperative vs Declarative

Working with Kubernetes is fun because it has two distinctive ways of accepting requests. In other words, there are two ways to manage objects in Kubernetes, or to work with Kubernetes. The ways are imperative and declarative. The imperative way demands that we provide all sorts of specific information to Kubernetes explicitly: for example, create something, update something, scale something. All of these are specific commands where the action of creation or update is mentioned clearly. This means that we have more control over what we want Kubernetes to do, but it also means that we have to spend more time and effort while doing it. On the other hand, the declarative way lets Kubernetes figure things out on its own: we provide a simple file and ask it to apply it. If the objects mentioned in the file don't exist, Kubernetes creates them, and if they do exist, it scales or updates them. Such an approach might sound absurd, but it becomes quite useful for batch processing, where we can control multiple objects with a single instruction. There are two ways to communicate imperatively: through files and through commands. Either we can provide files with YAML specs, or commands with a bunch of flags. The more preferred way is using files, since it eases troubleshooting later on. As mentioned earlier, there's only one way to communicate declaratively: it is through files. Here the input can be a file or a whole directory containing a set of files, which makes batch processing faster. In the next demo, we'll see how to work imperatively and declaratively; the sketch below previews the two styles.
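As a quick preview, assuming an object described in a file called pod.yaml (a hypothetical name), the two styles boil down to these two commands:

    kubectl create -f pod.yaml   # imperative: we explicitly tell Kubernetes to create
    kubectl apply -f pod.yaml    # declarative: Kubernetes decides whether to create or update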
64. Demo: Working with Pods: Create, analyse and delete (Imperative and Declarative)

Now that we know what a Pod is and how it works, let's create one by ourselves. We have seen previously that there are two ways to create any object in Kubernetes: imperative and declarative. To make sure that we cover both of these ways, we have two terminals open side by side. In one terminal we will create a Pod imperatively, whereas in the other terminal we will create a Pod declaratively. We have these terminals side by side so that we can compare the two Pods once both of them are created. Let's start with the imperative one. For creating a Pod imperatively, we need to provide all of the specifications to either a command or a YAML file. We will choose YAML this time. Let's write a file called imperative-pod.yaml. We're using nano as our text editor, but you can use any text editor you want. The basics of the YAML file remain the same as with Docker Compose; the only difference is the fields, which are indicated as key-value pairs. With that said, let's get started. Our first field, or first key-value pair, is apiVersion. This field is used to let Kubernetes know which version of the API is being used to create this object. For more information about the API versions and which version to use for which object, you can follow the official Kubernetes documentation. Next up, we have kind. Kind specifies which kind, or which type, of object is to be created using this file. We want to create a Pod, so our kind field, or kind key, will have the value Pod. Next up, we have metadata. It does what its name suggests: it is data about the object which is going to be created. Typically, metadata contains fields like name, labels, and so on. The primary use of metadata in Kubernetes is for us, and for Kubernetes itself, to identify, group, and sort the Pods. We want to name the Pod imp-pod, and we want to give it a label which says app equals my-app. You may have noticed that labels are key-value pairs. For now, let's not dig too deep into labels; let's go further. Next up, we have the spec field, which stands for specifications. You can consider spec the most important field of this file. And why is that? Well, the reason is quite obvious: specs are used to provide object configuration information, which means that here, the spec field will provide information and configuration about the Pod itself. Our first spec is containers. Unlike with Docker, containers here are just a specification, a field of the parent object, which is the Pod. Specifications may vary with objects, which means that different objects may have different specifications and different fields to provide them. Our next entry under the containers spec is the name of the container. It is different from the name of the Pod. In theory, you can keep both of them the same, but keeping them different makes things simpler. Next up, we have the image field. The image field describes the image which is going to be used to run this container. By default, Kubernetes uses images from Docker Hub, but if we want to use other registries, we need to provide a specific URL; we'll get into that later. Next up, we have command. This one is quite simple to comprehend: we are asking our container to run a shell command which echoes a string, "Welcome to Container Masterclass by Cerulean Canvas", and sleeps for 60 seconds. We have mentioned all of the required specifications to create this Pod, so let's save our file and exit the text editor. In parallel, we're also writing another file, called declarative-pod.yaml, and as you can see, we're providing similar fields in this one as in the previous file, such as apiVersion, kind, and metadata. To distinguish this Pod from the previous Pod, we're giving it a different name, but both of the Pods will carry the same label. Next up, we have the specifications again. The name of the container changes, but the image remains the same, and this time we're asking it to print the same string but sleep for 60 more seconds. A sketch of the imperative file follows below.
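Reconstructed from the narration, imperative-pod.yaml would look roughly like this; the base image is not named in the lecture, so ubuntu here is an assumption, and declarative-pod.yaml differs only in the Pod name, the container name, and a sleep of 120 seconds.

    apiVersion: v1
    kind: Pod
    metadata:
      name: imp-pod
      labels:
        app: my-app
    spec:
      containers:
      - name: imp-container
        image: ubuntu    # assumed; the narration does not name the image
        command: ["/bin/bash", "-c", "echo 'Welcome to Container Masterclass by Cerulean Canvas' && sleep 60"]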
Let's save this one and exit as well. Let's go back to our left-hand terminal and write the command kubectl create -f imperative-pod.yaml. We are asking kubectl to create an object from this particular file, and noting the success of this command, we receive the notification of imp-pod having been created. Let's go back to the right-hand terminal. Unlike the imperative way, we'll write the command kubectl apply, and mention the file using the -f flag, and that Pod is created as well. In this case, even if we had wanted to delete or scale the Pod, the command would have been the same; Kubernetes, or kubectl, would have figured out by itself what we wanted to convey through the file, whereas in the case of the imperative command, we specifically had to tell Kubernetes to create an object. In any case, both our imperative and declarative Pods are created, so let's see whether they're running or not. Write kubectl get pods. We will be using this command a lot in future demos. It gives a well-arranged list of Pods, along with a few more attributes, like how many of the listed Pods are ready, what the status of each is, whether there were any restarts during the runtime of the Pod, and how long the Pod has been running. We can see both the imperative and declarative Pods having been created. Now let's dig deeper into both of the Pods by writing the command kubectl describe pods, followed by the name of the Pod, which in this case is imp-pod. We'll also run the same command on the right-hand, or declarative, terminal. Now we have descriptions of both of the Pods, so we can make a fair comparison. Let's start from the top. First of all, we have the names of both of the Pods, which are unique. Then we can see that both of the Pods are allotted to the same namespace, which is the default namespace. Our imperative Pod is scheduled on node-2, whereas the declarative Pod is scheduled on node-1. We also have their starting timestamps and their labels, which are common. As for the differences, the imperative Pod doesn't have any annotations, whereas the declarative Pod has quite a few of them. The reason behind that is that in the case of the imperative Pod, kubectl has used the configuration which was provided by us to create the Pod, whereas in the case of the declarative Pod, it has used a specified Pod template and has just filled in or replaced the information which we provided. Moving further, we have IPs for both of the Pods, but we'll get into that later. Next up, we have container information. As you can see, both of the containers have different names and different container IDs, but the container image and the image IDs are the same. We also have the command which is going to be executed by both of the containers, and it has the slight difference which we mentioned. Moving further, we have the state of the container, which is Running in both of the cases, and we also have the starting timestamp of the container, which means that this is the point where the container went from the created state to the started state. Our first container, the imperative or imp-container, has already exited, or terminated, because it was completed, whereas the same is not the case with the other one, because its sleeping period was a bit longer. Next up, we have mount and volume information, but we don't need to dig so deep into that right now; we will look into them when we study volumes for Kubernetes.
My personal favorite part of the description of the containers is the events. This is different from how we used to inspect our containers using Docker. Kubernetes gives us a short and sweet summary of the events which are really important. We can see that both of the containers went through a bunch of events, including their scheduling, the pulling of an image, the containers having been created, and finally being started. So this is how we can create and distinguish imperative and declarative Pods.

65. Life-cycle of a Pod

Just like containers, Pods have their life cycles as well. First of all, a Pod is in the Pending state. It means its configuration is approved by kube-controller-manager and the API server, but it is yet to be scheduled on a node. Once it gets the green signal from the kubelet and the scheduler, it is in the Running state. It means at least one of the Pod's containers is definitely running. Sometimes the containers are programmed to exit after performing a certain task. In such a case, the Pod goes to the Succeeded state, where all of its containers have exited successfully, or you can say gracefully. If one or more containers fail in between, or a container dies due to being out of memory, the Pod goes to the Failed state from the Running state. It can be rescheduled after troubleshooting, and in that case it goes back to the Pending and then the Running state. Lastly, we have the Unknown state, where the Pod is not running but the reason for it has not been determined yet. And this is the life cycle of the Pod.

66. Demo: Managing Pod's lifespan with Life-cycle Handlers

Kubernetes provides container lifecycle hooks to trigger commands on container lifecycle events. If we recall, the container lifecycle had five stages: created, running, paused, stopped, and deleted. Out of these five, Kubernetes provides lifecycle hooks for two of the states, which are created and stopped. Let's explore both of these using the lifecycle-pod.yaml file. This is a standard nginx Pod named lc-pod, and under the container spec we have two lifecycle hooks, called postStart and preStop. These hooks' functionality is pretty much what their names suggest. Both of them have handlers attached to them, which are executable commands. The postStart hook's handler will echo a welcome message to a file called postStart-msg, and it will trigger after the container enters the created state. This is the state where resources for the read-write layer are set up, but the container is not running yet; in other words, the latest CMD or ENTRYPOINT instruction is yet to be executed. The hook works concurrently with the Pod's container creation process, which means that if, for some reason, the handler of the hook hangs or fails to execute, the Pod will remain in the container-created state and won't go into the running state. To brief things up: first of all, the container will be created, then the postStart hook will be handled and the message will be printed, and then the container will start running by executing the CMD or ENTRYPOINT command. A general use of this hook is for better debugging, just like a try-and-catch clause in programming, but it also brings the burden of stalling the container if the hook doesn't get handled properly. So if Pod events and logs are sufficient for your debugging, you might want to skip using this hook. Lastly, we have the preStop hook, which triggers before termination of the container. We're simply quitting the nginx process before terminating the container. But if you want to strongly verify this hook, you can send a signal to one of the container's crucial processes, and you will find the container exited with the respective exit code. A sketch of such a Pod follows below.
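Here is a minimal sketch of what lifecycle-pod.yaml might look like, assuming both handlers use exec with shell commands; the file name postStart-msg and the exact messages are reconstructed from the narration rather than copied from the course files.

    apiVersion: v1
    kind: Pod
    metadata:
      name: lc-pod
    spec:
      containers:
      - name: lc-container
        image: nginx
        lifecycle:
          postStart:
            exec:
              # runs right after the container is created
              command: ["/bin/sh", "-c", "echo Welcome > postStart-msg"]
          preStop:
            exec:
              # runs just before the container is terminated
              command: ["/bin/sh", "-c", "nginx -s quit"]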
Let's exit the file and create the Pod. Thirty seconds down, and the Pod is ready. I know we have sold the benefits of containers a lot, but it is always amusing to see such a level of managed isolation being created with so little effort and within such a short time. Now let's exec into the Pod with the kubectl exec command and run bash in it. Cat the file postStart-msg, and bingo, the hook was executed successfully. The message is loud and clear. Well, not that loud, but it is quite clear. In the next lecture, we will see how to replace a container's CMD command.

67. Demo: Adding Container's Command and Arguments to Pods

Let's start this demo by printing a list of the available Pods using kubectl get pods. We only have one Pod, the lifecycle Pod, which is from the previous demo, because we have deleted the imperative and declarative Pods. Don't worry, we will go through how to delete Pods as well. But for now, let's go to the file command-pod.yaml. The YAML file looks pretty similar to the previous two demos', so let's focus on the changes here. First of all, the names of the Pod and container have changed: the Pod is named cmd-pod and the container is named cmd-container. Makes sense. Then, in the spec field, after the name and image of the container, we have the command field. The command field corresponds to the ENTRYPOINT command of the Docker image. If we do not provide any command or value to the command field, Kubernetes uses the default ENTRYPOINT of the Docker image, but we can change it by providing a command and its arguments. Instead of keeping the container up by running a loop of a bash command, we're just asking it to print a couple of environment variables, so the command is printenv, and its arguments are HOSTNAME and KUBERNETES_PORT. You may notice that the command and arguments are written between double quotes and encapsulated by square brackets, and the arguments are separated by a comma. Let's exit the file and make the Pod: run kubectl create -f command-pod.yaml. The Pod should have been created; let's test it with kubectl get pods. Here we go. But check it out: this Pod is not in the Running state; it is in the Completed state. The reason is that we have not provided any endless-loop command like bash. We had just asked it to print a couple of environment variables, which it did successfully, within a few milliseconds maybe. So by the time we ran the command kubectl get pods, the container had already finished its task, and the Pod was in the Completed state. Let's have a description of this Pod using kubectl describe pod cmd-pod. Here is our long, well-structured description. I'm pretty sure you can comprehend most of the parts easily, so let's directly jump to the command and arguments section. The command is the same one we provided, which is printenv, and its arguments are HOSTNAME and KUBERNETES_PORT. Now, if we jump to the events, we can also see that the container started 35 seconds ago, whereas it finished 34 seconds ago, so within one second all of the commands were performed. We can also verify this by looking at the logs of the Pod. Simply write kubectl logs and then the Pod name, which is cmd-pod, and there you go: we have our HOSTNAME and KUBERNETES_PORT both printed. A sketch of this command-pod.yaml follows below.
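Based on the fields described in this demo, command-pod.yaml would look roughly like this; the base image is not named in the narration, so ubuntu is an assumption, and the lecture may also pass the arguments inside a single command list, which is equivalent here.

    apiVersion: v1
    kind: Pod
    metadata:
      name: cmd-pod
    spec:
      containers:
      - name: cmd-container
        image: ubuntu    # assumed; the narration does not name the image
        command: ["printenv"]                    # overrides the image's ENTRYPOINT
        args: ["HOSTNAME", "KUBERNETES_PORT"]    # arguments passed to printenv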
68. Demo: Configuring Container's Environment Variables with Pods

Hello, everyone. As usual, let's start this demo with a list of the available Pods. We have two Pods, the cmd and lc pods; one of them is completed and the other one is still running. Now let's open the YAML file env-pod.yaml with nano. Again, the file is pretty similar to the previous demos', so we should focus on the changes. The names of the Pod and container are env-pod and env-container, just like our usual naming convention. If you take a look at the image, we have not simply provided a name with a tag; we have the whole path, or URL, of the image. We have done this because this time we don't want to use Docker's image registry; we want to use Google Container Registry, which is another place to find container images. In this demo, we're using one of the sample Google images, called node-hello. This node-hello is more or less the hello-world of the Docker image industry, and this one is built on top of the Alpine base image. With that said, let's get to the cream of this demo, which is the env field, which is used to provide environment variables to the container. If the container does not have the environment variables provided within this field, it adds them along with its default environment variables, and if an environment variable with the same name has already been set up by the Docker image, the running container replaces it with the value we provide. So take this example: let's say we have a Docker image, and we provided environment variables A, B, and C equal to P, Q, and R respectively. If we provide the same environment variables with different values using the Kubernetes YAML file, then, just like with Docker, only the running container will reflect the changed values, which means that the values will be reflected on a copy of the image, and the original image will remain unchanged. So the original image's environment variables would still be A, B, C equal to P, Q, and R, but a copy of it will have them as S, T, and U, or anything else which we provide. In this case, we're providing two environment variables, POD_GREETING and POD_FAREWELL, and their values suit the names as well: the Pod greeting is "Welcome", and the Pod farewell is "We don't want you to go", with a sad smiley. With that said, let's save and exit our file. Let's create the Pod using the kubectl create -f command, and let's see whether it is running or not. The Pod seems to be running. Now let's get a description of this Pod using kubectl describe pod, followed by its name, env-pod. Here is the description of this Pod. Let's straightaway jump to the environment section, and we have two entries in this field, POD_GREETING and POD_FAREWELL, exactly the ones which we set up. Let's clear the screen and exec into this Pod using kubectl exec -it, followed by the Pod name, followed by the command which we want to run. You may notice that this command is pretty similar to what Docker provides for executing into a container as well. Now we're in the root directory of our container. Let's print our environment variables, and there we go: we have a long list of environment variables. This answers more than one question. First of all, what about the environment variables that we had set up? Well, here they are; both POD_GREETING and POD_FAREWELL are present. And second, when we executed into the container, why did we get root@env-pod and not root@env-container? Well, the reason is that we're still in the root directory of the container itself, but the hostname is env-pod, which you can see in this environment variable. With that out of the way, let's exit this container and get back to our terminal. A sketch of env-pod.yaml from this demo follows below.
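A rough reconstruction of env-pod.yaml; the image path points at the public node-hello sample in Google Container Registry, and the variable names and values are as I understood them from the narration.

    apiVersion: v1
    kind: Pod
    metadata:
      name: env-pod
    spec:
      containers:
      - name: env-container
        image: gcr.io/google-samples/node-hello:1.0   # pulled from Google Container Registry, not Docker Hub
        env:
        - name: POD_GREETING
          value: "Welcome"
        - name: POD_FAREWELL
          value: "We don't want you to go :("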
69. Labels, Selectors and Namespaces

This might be the point where you start feeling that Kubernetes digs deeper into orchestration than Swarm. Let's say we have four Pods, named pink-light, pink-dark, blue-light, and blue-dark. We can label them to provide a logical grouping of Pods: here, both the light and dark pink Pods are labeled P for pink, and the rest are labeled B for blue. A label is a tag. It is metadata which allows us to group our Pods logically for efficient sorting. Labels are also available with Docker, but they're pretty much useless if we can't do much with them. To complete the functionality of labels, we have selectors. We can use selectors to search for Pods with one or more particular labels. Here, we want Pods with the label P, so all we get is the two pink Pods. We can play around with labels and selectors for all sorts of things. You can also have more elaborate labels and selectors to pick one particular Pod, like pink-light. Now, you may wonder: we can have two Pods with the same labels, but can we have two Pods with the same name? The straight-up answer is no, but there's a catch: we can have two different namespaces. Just like in programming, a namespace in Kubernetes is also a way to isolate Pods logically and willingly. It means we can have two Pods with the same name in two different namespaces. In the next demo, we'll play with labels, selectors, and namespaces; the sketch below previews the commands involved.
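For a feel of the syntax, these are the kinds of commands the next demo builds on. The label key and value (color=pink) follow the pink/blue example above and are illustrative only.

    kubectl get pods -l color=pink           # equality-based selector: only the pink Pods
    kubectl get pods -l 'color in (pink)'    # set-based selector doing the same job
    kubectl create namespace my-namespace    # the same Pod name can exist in two namespaces
    kubectl get pods -n my-namespace         # list Pods in a specific namespace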
70. Demo: Working with Namespaces

Namespaces are a logical partitioning mechanism of Kubernetes which allows its cluster to be used by multiple users, teams of users, or a single user with multiple applications, without any worries or concerns about undesired interactions. Each user, team of users, or application may exist within its namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster. With that out of the way, let's see how many namespaces we have on our cluster. It seems that we have three namespaces at this point in time. Mind well, none of these namespaces was created by us. These are the namespaces provided by Kubernetes, and if you look at their age, all of them have been up for 80 minutes; that is about the time when we first bootstrapped our cluster. We have the default, kube-public, and kube-system namespaces. Default, as its name suggests, is the default namespace for every Pod that we create within Kubernetes. kube-system is used by Kubernetes itself to isolate its Pods from the default and kube-public namespaces. With that said, let's run one of our most standard commands, just kubectl get pods, and we get what we expected: the three Pods which we had created in the previous demos. Now let's add a twist to it: provide a flag called --all-namespaces and see if we get any more Pods. And we have a long list of Pods, which means that all this time, Kubernetes was not just running two or three Pods; it was running all of these Pods. First, let's see the Pods within the default namespace. They are the same ones which we had created, the cmd, env, and lc pods, which means the original Pods we created fell straight into the default namespace, and all the other Pods are in the kube-system namespace. These Pods are implementations of the different blocks of the Kubernetes architecture. If you remember, we have already studied etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy. We had also installed Weave Net, which is the pod network of our Kubernetes cluster, and all of these Pods are running under the kube-system namespace, so they're isolated from whatever we are doing in our default namespace. Let's create a new namespace with the kubectl create namespace command, followed by the name of the namespace which we want to create, which in this case is my-namespace. Now the namespace is created. Next, let's create the same imperative Pod which we had created in our first demo, but this time put it in my-namespace instead of default, with the -n flag. Let's get our Pods. As you can see, the list of Pods under the default namespace is still unchanged; we have the same old three Pods, and the imperative Pod is nowhere visible. Let's get the Pods from my-namespace, and there we go: we have our imperative Pod, running for almost 20 seconds. And we can always verify it by listing out the Pods of all of the namespaces; check out the last entry: it is the imperative Pod.

71. Demo: Pod Resource management

When you specify a Pod, you can optionally specify how much CPU and memory, or RAM, each container needs. When containers have resource requests specified, the scheduler can make better decisions about which nodes to place Pods on, and when containers have their limits specified, we can make sure that the nodes don't crash. Let's start out by getting a list of Pods. Let's open the file resource-pod.yaml, and there we go. The file seems larger than the previous Pod YAMLs that we have used, but don't worry: instead of one, we have two containers this time. One is a MySQL database container, whereas the other is a frontend WordPress container. The Pod's name is frontend. First of all, let's go through the obvious things, like the names of the containers, the images being used, the environment variables set up, and the metadata of the Pod. Once all of those are out of the way, we have the resources field in both of the containers. This field is used to provide limits per container and requests per container; the resources are memory and CPU. As you can see, we have provided a pretty small amount of resources to both of the containers, where the resource limit is 128 megabytes and the request is just 64 megabytes. A sketch of this resources field follows below.
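On each container, the resources field would look roughly like this; the values are the ones narrated (a 64-megabyte request and a 128-megabyte limit), and only memory is discussed in this demo.

    resources:
      requests:
        memory: "64Mi"     # what the scheduler reserves for this container
      limits:
        memory: "128Mi"    # beyond this, the container gets OOM-killed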
Let's see what happens when we try to create such a Pod. Let's save and exit this file. As usual, run kubectl create -f, followed by the file name, and the Pod is created. Let's list the Pods out. It seems like the Pod is still in the ContainerCreating state. Let's give it a bit of time. Well, it seems like the containers are still being created, or, in other words, they have not been created yet. Why is that? Let's take a look at the description for a bit. All right, so the Pod is not ready because the containers are still being created. As you can see, our Pod is following the resource limitations quite strictly. Let's list the Pods again. Come on; only one out of the two containers is ready, and the Pod is in CrashLoopBackOff status. Let's see what the problem is here. When we run the kubectl describe command again, we can clearly see that the state of the database container is Terminated, and the reason for that is OOMKilled, which stands for out-of-memory killed. Troubleshooting this isn't very difficult. It clearly suggests that the resource allocation limits that we have provided are just not sufficient for this container to run. On the other hand, the WordPress container is running properly. Even when we look at the events, all of the events regarding the WordPress container seem to have gone well, but in the case of the MySQL database container, the image was pulled successfully, but the container could not start because the resources were just not enough. And if you notice, both of these containers are scheduled on the same node, because they are in the same Pod. So when we're running more than one container in a Pod, they will be scheduled on the same node. But let's not get distracted from our main objective: we need to figure out a way to make sure that both of these containers run smoothly in this Pod. For now, let's delete our frontend Pod using the kubectl delete pods command, followed by the name of the Pod. There can be one or more Pods that we want to delete, but in this case we just want to delete frontend, and it seems to be deleted. Let's get back to the YAML file of our frontend Pod and increase the resource limits for our containers: instead of 128 MB, we're changing it to one gigabyte. And while we're at it, let's do the same with the WordPress container as well. Let's save the file, exit nano, and try to create the Pod again. And when we list the Pods, voila! It didn't even take 11 seconds, and our Pod, along with both of its containers, is in the Running state. When we describe it using kubectl describe, we can clearly see that the resource limits have changed, and all of the events regarding both of the containers of our Pod went smoothly.

72. Kubernetes Controllers | Concept and Types

Controllers are a type of workload object, just like Pods. A controller acts as a parent, or supervisory object, to a Pod, and it manages the Pod's behaviour in certain ways. How a controller will deal with the Pod depends on which controller it is. For example, a ReplicaSet will create multiple replicas of a running Pod; a Deployment controller may perform replication, updates, or service exposure on Pods; StatefulSets will arrange the order of execution for Pods and will make sure that none of the Pods breaks the queue; whereas Jobs will create Pods that terminate after execution. In the next lectures, we will work with the different controllers and understand them.

73. Introduction to Replicasets

Let's understand the controller objects one by one. We will start with ReplicaSets. ReplicaSets are a higher unit of orchestration compared to Pods, which means they supervise the Pods. The purpose is pretty obvious, as mentioned earlier: to scale Pods; they manage the number of replicas of a Pod. We can increase or decrease the number of replicas of a Pod using a ReplicaSet. Pods are given labels, and ReplicaSets are given selectors, to keep track of which Pods to supervise. It is also possible to provide the Pod definition along with the ReplicaSet. That would mean that the creation of those Pods will also be managed by the ReplicaSet. If you do so, you need to provide the Pod specs as a Pod template in the YAML file of the ReplicaSet. While they're quite useful, standard practice doesn't involve using ReplicaSets directly; they're used under the supervision of Deployments, which we will learn about soon enough.

74. Demo: Working with Replicasets

Let's start out, as usual, by getting a list of Pods. These are the Pods from our previous section.
Since we don't need any of them right now, let's delete all of them, and we're back to square one. Let's open our file replica-pod.yaml using nano. This is the YAML file of a ReplicaSet; let's parse it one by one. First of all, we have the apiVersion, and if you notice, our apiVersion is different from what we used with Pods. Pods used to have apiVersion v1, whereas ReplicaSets use apiVersion apps/v1. Next, we have kind. Obviously, since we're creating a ReplicaSet, our object kind is ReplicaSet. Next is metadata; we have name and labels. We're naming our ReplicaSet replicaset-guestbook, and the labels are app: guestbook and tier: frontend. These labels apply to the ReplicaSet itself; it does not mean that the Pods created under this ReplicaSet will carry the same labels. Next up, we have the spec field. Just like in a Pod's YAML file, in the case of a ReplicaSet, too, spec is the most important field. Our first spec is replicas, or the number of replicas, which in this case is three, which means that this ReplicaSet will create three Pods. If you provide five, it will create five Pods, and if you provide 50, it will create 50 Pods, if your nodes have enough resources. Next up, we have selectors. Selectors are the mechanism used by the ReplicaSet to determine which Pods fall under this ReplicaSet. We have two ways to provide the selectors: matchLabels or matchExpressions. Under matchLabels, we have provided a key-value pair, tier: frontend, which means that every Pod having the label tier equals frontend will directly fall under this ReplicaSet, provided that it is in the same namespace. And our matchExpressions selector says that the Pods having the key tier, with its value being frontend, will fall into this ReplicaSet. Essentially, both of these selectors are doing the same thing in this YAML, but we have written both out so that you know there are two ways to mention your selectors. Next up, we have the template. This template is a Pod template, just like we discussed earlier in the theory; it provides data about the Pods which will be created under this ReplicaSet. Our ReplicaSet will use this template to create the number of Pods mentioned under the replicas spec. Let's start with the metadata of the Pods. We have not provided a name or names here, so the ReplicaSet will title the Pods by itself, but we do have labels, and they're quite essential. The reason is that these labels make sure that the Pods match the condition of the ReplicaSet's selector. Next up, we have the Pod specifications, where we are straight up mentioning the containers. The container name will be php-redis, and the image will be the guestbook frontend, version 3, from Google's container registry. We have also mentioned the port information, which means that if we expose these containers, the container's port 80 will be mapped to a port on the host. Now let's save and exit this file; a sketch of it follows below.
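Reconstructed from the narration, replica-pod.yaml would look roughly like this; it closely matches the well-known guestbook frontend ReplicaSet example from the Kubernetes documentation, which this demo appears to follow.

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: replicaset-guestbook
      labels:
        app: guestbook
        tier: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          tier: frontend
        matchExpressions:
        - {key: tier, operator: In, values: [frontend]}   # same condition, set-based form
      template:
        metadata:
          labels:
            tier: frontend    # must satisfy the selector above
        spec:
          containers:
          - name: php-redis
            image: gcr.io/google_samples/gb-frontend:v3
            ports:
            - containerPort: 80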
It's time to create our ReplicaSet using the kubectl create -f command. We're creating our ReplicaSet imperatively, but you can create it declaratively as well. Now let's see how many Pods we have, using kubectl get pods. And there you go: we created three Pods simultaneously using a ReplicaSet. The Pod names were given automatically by Kubernetes, and all of them were created 6.5 minutes ago. Now let's check out the description of one of the Pods, to see whether these Pods are different from the Pods we created using individual YAML files. Starting with name and namespace, we don't have many differences, apart from the fact that we have a new field this time, called Controlled By. This means that these Pods have a parent object which is controlling them, and in this case, this object is the guestbook ReplicaSet, which we have just created. Apart from that, most of the description is similar to a regular Pod's. Just like Pods, we can also list out our ReplicaSets, using kubectl get rs; rs is the abbreviation for ReplicaSets. And there we go: we have one ReplicaSet, and it says that this ReplicaSet has three desired Pods and three currently ready Pods, which means that the ReplicaSet is working just perfectly. Now let's check out the description of the guestbook ReplicaSet using kubectl describe rs replicaset-guestbook. Till now, we have only used the kubectl describe command with Pods, but now you get the general format of this command: kubectl describe is followed by the type of the object that we want to describe, followed by the name of the object, which in this case is replicaset-guestbook. The description is shorter compared to a Pod's. We have general information like metadata, Pod status, the Pod description, and three events, where each of the events indicates the creation of one of the three ReplicaSet Pods. In fact, there is another aspect to ReplicaSets. Let's try to delete one of the three Pods that we have here, using kubectl delete pods followed by the name of the Pod, and the Pod is deleted. If we try to get a list of Pods now, will it have two Pods or three Pods? Let's check. Well, it has three Pods. The one which we deleted is gone forever, but our ReplicaSet has spun up another Pod, with a new name but the same configuration, and you can see that the newest Pod has been running for 10 seconds. It means that even if the Pods which are under this ReplicaSet die, crash, or are deleted, the ReplicaSet will just spin up new Pods by itself, which saves us a lot of effort.

75. Introduction to Deployments

Deployments stand even higher than ReplicaSets in terms of supervisory nature. It means Deployments are capable of creating their own ReplicaSets, which in turn create the Pods accordingly. Deployments are kind of all-rounder objects, which can be used for a lot of things, like creating Pods, managing replicas, rolling updates on Pods, exposing Pods, etcetera. Just like ReplicaSets, they also use labels and selectors for Pod identification. By now, you may have started realizing that labels are a lot more than mere metadata for Pods. All of these aspects make Deployments a perfect choice for hosting stateless applications, where the order of Pod creation may not be that crucial. And as mentioned multiple times, they're the most widely used container orchestration objects. In the next demo, we will be working with Deployments.

76. Demo: Working with Deployments

To avoid any confusion, let's start with the list of Pods by running kubectl get pods. We have three Pods from the previous ReplicaSet. Let them be where they are, and let's open our deployment.yaml file. Let's start from the top. Just like ReplicaSets, Deployments also use apiVersion apps/v1, and their kind, or object type, is obviously Deployment. We have given it the name deploy-nginx.
Let's go to the spec field. We're using the matchLabels strategy for the selectors, and we will be looking for Pods with the label app equals nginx. Deployments are higher-level orchestration objects compared to ReplicaSets, so if we are creating a Deployment, the Deployment itself is capable of creating the ReplicaSet it needs. By providing the replicas field, we can instruct the resulting ReplicaSet to create a certain number of Pods. Now let's go to the Pod template and fill out the data. We will provide the label app equals nginx, to avoid any conflicts, and we will provide the container information, which includes the name, which is deploy-container, and the container image, which is nginx 1.7.9. We're back to using images from Docker's registry because, well, they are just simpler. Right after mentioning the port, let's save and exit this file; a sketch of it follows below.
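Putting the narrated fields together, deployment.yaml would look roughly like this; the replica count of two matches the description we read later in the demo, and port 80 is assumed for nginx since the narration mentions a port without naming it.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deploy-nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: deploy-container
            image: nginx:1.7.9
            ports:
            - containerPort: 80   # assumed; the narration does not state the number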
77. Introduction to Jobs:

Moving on from deployments, we have jobs. You might have guessed that they are also higher-level units than pods; well, because almost every controller is higher than pods. To define them simply, jobs mean pods whose containers won't run for eternity; once their purpose is fulfilled, they exit. In more technical terms, the commands provided to the containers are time- and iteration-limited. Once they get executed, the container gracefully stops and gives the resources back to the host. If you list the pods which are maintained by jobs, them not being in the Running state will not be much of a big issue. They will remain in the Completed state once the containers exit, and it is totally fine. Jobs are used for batch or parallel processing. Cron jobs, which are periodic, repetitive jobs, are used for checks or scheduled iterations of a certain task: tasks like checking a database for pending updates on a schedule of every five minutes, etcetera. In the next demo, we will be working with jobs.

78. Demo: Working with Jobs:

We have five pods from the previous two lectures. Let's open the jobs.yaml file using nano. As we have seen in the theory, jobs are run-to-completion type orchestration objects, which means that the command that we provide under the container spec won't just be an endless loop command. Going from the top, we have a different API version compared to replica sets and deployments, which is batch/v1, and our object kind is Job. The name of the job is job-pi. We have named it that way because this job is going to print the value of pi with 2000 decimal points. Going further, we have the pod template, where under the spec field the container details include the name of the container, which is job-container, the image, which is the Perl image from Docker's registry, and the command. In this command, we are running a Perl script to print the value of pi to 2000 decimal points. And another aspect, or another specification, of the job is its backoff limit. Since a job is a run-to-completion type of object, we can't have it lingering around forever. This job will try to make sure that the command of this container works, but if it doesn't for some reason, if the container fails, then the job will make four repeated attempts at running the container. After four attempts, if the container is not running, the job will back off and it will fail. Having said that, let's save and exit the file.
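Here is a minimal sketch of what this jobs.yaml might look like, modeled on the narration; the exact Perl command and the restartPolicy line are assumptions (the command shown is the standard Kubernetes pi example, and job pods need a restart policy of Never or OnFailure):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: job-pi
    spec:
      backoffLimit: 4            # after four failed attempts, the job backs off and is marked failed
      template:
        spec:
          containers:
          - name: job-container
            image: perl          # the Perl image from Docker's registry
            # assumed command: prints pi to 2000 decimal places, then exits
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never   # assumed: run-to-completion pods should not restart endlessly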
Let's create the job using kubectl create -f, and our job is created. Let's verify it by getting the list of pods, and there we have our job-pi pod, which is eight seconds old. If we describe the pod, we can see that it is controlled by the job called job-pi, and its status is Succeeded, which is different from the other pods that we have seen recently. Going further, we can also get a list of jobs, and the job can also be described using kubectl describe jobs job-pi. Just like regular orchestration objects, our job also has description fields like name, namespace, selectors and labels. It also has a start timestamp, a completion timestamp and the duration for which the job was running. Finally, we have the pods statuses, where one pod succeeded, which was our desired state, and zero failed. It has only one event, of the successful creation of the pod. Since we had used the command to print the value of pi, let's see if the output is available using the logs of the pod created by the job. Run the command kubectl logs followed by the pod name, and there we go; try to memorize this value.

79. Introduction to Services and Service Types:

All right, huge disclaimer. Since all of you have already studied Docker Swarm, service will be a term with which you are already familiar, but both of these services are different. In the case of Swarm, a service acted like a deployment, where you could declare all of the desired objects and the manager would convert them into tasks. But here in Kubernetes, services are merely networking objects for pods. Since both of the platforms chose to have different interpretations of the same term, it becomes our job not to get confused by it. With that out of the way, let's dig deep into Kubernetes services.

Firstly, services are also objects, just like pods or controllers, but they fall under the category of connectivity. To understand how services work, let's stick to the dummy pods blue-dot and pink-dot. We want these pods to be able to talk to the external world, or simply to each other. Here, services are connectivity objects which serve as a stack of network configurations that can allow pods to communicate. Just like deployments or replica sets, services also use labels and selectors to determine which pods will be connected to them. Our pods have the labels db and web, while our service has a selector looking for db, so pink-dot won't affiliate with the service. Blue-dot's connectivity will now be handled by this service, and it can potentially talk to the outside world as well. Remember the words "potentially talk", not necessarily.

Let's brief up the services. The first two points are already familiar to you, but they are important to list out. You might be surprised that Kubernetes itself uses these services to perform all sorts of in-cluster and global communications. A cluster IP is generally provided to each pod, which allows it to talk within the cluster, but if we choose to abstain from such a practice, we can create a headless service. And finally, Kubernetes also provides native support for multiple types of services and cloud load balancers.

We recently mentioned that services can make pods potentially talk to the outside world, but why potentially? Well, services also have types. The first of them is ClusterIP, which only exposes the service within the cluster. It means that the outside world cannot access it, but pods within the cluster connected to the service can talk to each other. The second type is NodePort, which exposes the service on the external IPs of all of the nodes of the cluster, including the master. This will, by default, also create a ClusterIP, to which the NodePort will be routed eventually. If we have defined a cloud-provided load balancer, we can use the LoadBalancer type service, which not only exposes the service on all nodes but also provides dedicated external IPs to the pods connected to the service. And finally, we have ExternalName, which allows us to use a DNS address to communicate with the pods connected to the service. In the next lecture, we will take a look at ClusterIP and NodePort services, whereas we will visit load balancers when we deploy Kubernetes on a managed cloud provider.

80. Demo: Working with ClusterIP services:

After performing the cascaded deletion in the last section, our cluster seems to be pretty neat and clean. We have no pods, no replica sets, no deployments and no jobs lingering around. With that said, let's open the file deploy-nginx.yaml. This is a regular YAML file for a deployment called deploy-nginx, which is going to run a couple of containers with the nginx image. Let's not go too deep into that, because I'm pretty sure you already understand it by now, and exit it. Now let's open svc-nginx.yaml. This is something new: this is the YAML file of a Kubernetes service. As always, starting from the top, we have the API version; just like with the replica set and the pod, the file starts with the API version, and for a service it is v1. The object kind is Service, its name is service-nginx, and its label is run equals my-nginx. Going forward with the spec of the service, we have the ports information. The port information suggests that the containers' port 80 is supposed to be exposed using this service. And finally, we have the selector. As we have seen in the theory, the service will use the selector to identify which pods to expose, and here the selector is run equals my-nginx, which also happens to be the label of the pods being created by our deployment. Let's save and exit this file.
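Based purely on the fields the narration walks through, a minimal sketch of this svc-nginx.yaml could look like the following; note that there is no type field, because Kubernetes defaults a service to ClusterIP, which is exactly what we will observe in a moment:

    apiVersion: v1
    kind: Service
    metadata:
      name: service-nginx
      labels:
        run: my-nginx
    spec:
      ports:
      - port: 80          # the service listens on port 80 and forwards to the pods' port 80
        protocol: TCP
      selector:
        run: my-nginx     # matches the label on the pods created by deploy-nginx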
Let's create both our deployment and our service. Our deployment is ready, and both of our pods are up and running. Now let's get a list of our services. We have two services lying around here. One is created by Kubernetes itself, and the other is service-nginx, which was created by us almost 25 seconds ago. If you notice, the type of both of the services is ClusterIP, and if you remember, ClusterIP allows containers to talk within the cluster, which means that the nginx containers of our nginx deployment are exposed within the cluster on port 80, and we are accessing the Kubernetes service within the cluster using port 443.

Let's describe our service using kubectl describe svc, which is the abbreviation of services, followed by service-nginx, which is the name of the service. The description is pretty short. We have basic information like name, namespace, labels, annotations and the selector, which is run equals my-nginx. Then we have the type of the service, which is ClusterIP. Next we have the target port, which is 80 on the TCP protocol, and we also have endpoints for both of our containers. If you remember the Docker sections, endpoints are the mechanism to enable communication with Docker containers. We have said that our containers are accessible and exposed within the cluster, which means that the home page of the nginx web server should be hosted on these IPs, but the scope should be limited to our cluster. Well, let's try it by running the curl command, followed by http://, the IP of our service, a colon and the port, which is 80. And there we go: this is the HTML of the nginx web server's welcome home page, which means that our service is up and running.

81. Demo: Working with NodePort Services:

We don't need to create a separate deployment for this demo; we'll just use the one we created in the previous demo. Let's list them out once more.
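Since this demo reuses the service pattern from above, here is a hedged sketch of what a NodePort variant of our service could look like; this is not the course's actual file, the name and the nodePort value are made up for illustration, and if nodePort is omitted, Kubernetes picks one from the 30000-32767 range by itself:

    apiVersion: v1
    kind: Service
    metadata:
      name: service-nginx-nodeport   # hypothetical name for illustration
      labels:
        run: my-nginx
    spec:
      type: NodePort                 # exposes the service on every node's external IP
      ports:
      - port: 80                     # in-cluster port; a ClusterIP is still created and routed to
        nodePort: 30080              # assumed example value from the 30000-32767 default range
      selector:
        run: my-nginx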