Kubernetes and Docker: The Container Masterclass

Taught by Cerulean Canvas: Learn, Express, Paint your dreams!


Lessons in This Class

  • 1. CMC Promo (3:31)
  • 2. Course Outline (1:53)
  • 3. How to make a web application? (4:21)
  • 4. Demo: Simple Web Application (2:28)
  • 5. A forest of VMs! (2:08)
  • 6. Hello Containers! (5:08)
  • 7. Hello Docker! (1:34)
  • 8. Demo: Installing Docker on Linux (3:45)
  • 9. Demo: Containerizing Simple Web Application (2:25)
  • 10. Stages of Containerization (0:53)
  • 11. How does Docker Work? (3:51)
  • 12. A quick look at the format of Dockerfile (2:25)
  • 13. Demo: Fundamental Instructions of Dockerfile (5:48)
  • 14. Demo: Configuration Instructions of Dockerfile (5:29)
  • 15. Demo: Execution Instructions of Dockerfile (4:31)
  • 16. Demo: Expose Instructions of Dockerfile (4:15)
  • 17. Demo: Miscellaneous Instructions of Dockerfile (Part 1) (4:07)
  • 18. Demo: Miscellaneous Instructions of Dockerfile (Part 2) (9:26)
  • 19. Demo: Docker Hub Walk-through (4:06)
  • 20. Understanding Docker Images (3:01)
  • 21. Demo: Working with Docker Images | Search, List, Push, Pull and Tag (11:37)
  • 22. Demo: Know your Docker Image | Inspect and History (5:31)
  • 23. Demo: Clean up Docker Images (1:48)
  • 24. A Container is born! (1:52)
  • 25. Container Life-cycle (2:54)
  • 26. Demo: Container Run Vs Create (2:52)
  • 27. Demo: Working with Containers | Start, Stop, Restart and Rename (2:58)
  • 28. Demo: Working with Containers | Attach and Exec (1:44)
  • 29. Demo: Inspect and Commit Container (3:29)
  • 30. Demo: Container Exposure | Container Port-mapping (1:52)
  • 31. Demo: Container clean-up | Prune and Remove (2:01)
  • 32. Multi-container Applications and Introduction to Networking in Docker (2:41)
  • 33. Container Networking Model (CNM) of Docker (2:28)
  • 34. Docker's Native Network Drivers (4:05)
  • 35. Demo: Create Docker Networks (1:41)
  • 36. Demo: Working with Docker Networks | Connect, Disconnect, Inspect & Clean (5:01)
  • 37. Demo: Ping one Container from another (4:19)
  • 38. Never lose a "bit" of your data! (5:26)
  • 39. Demo: Working with Volumes | Create, List and Remove (3:33)
  • 40. Demo: When Containers meet Volumes (3:45)
  • 41. Demo: Working with Bind Mounts (2:35)
  • 42. Demo: Hosting Containerized 2048 game! (3:08)
  • 43. Introduction to Docker Compose (1:09)
  • 44. Demo: Installing Docker Compose on Linux (0:53)
  • 45. Demo: Structure of Docker Compose file (6:57)
  • 46. Demo: Wordpress on Compose (7:20)
  • 47. Demo: Introduction to Docker Compose CLI (2:51)
  • 48. Introduction to Container Orchestration and Docker Swarm (6:47)
  • 49. Can Swarm handle failure? (1:31)
  • 50. Demo: VirtualBox installation (1:29)
  • 51. Demo: Docker Machine Installation (0:37)
  • 52. Demo: Setting up the Swarm Cluster (2:22)
  • 53. Demo: Initialising Swarm Cluster (1:54)
  • 54. Demo: Working with Swarm nodes | List and Inspect (1:44)
  • 55. Demo: Creating a Service on Swarm (3:45)
  • 56. Demo: Making a node leave your Swarm (2:47)
  • 57. Demo: Scaling and updating with Swarm (3:25)
  • 58. What about the more popular one? (3:30)
  • 59. Kubernetes: An origin Story (1:49)
  • 60. Kubernetes: Architecture (5:30)
  • 61. Demo: Bootstrapping Kubernetes Cluster on Google Cloud Platform (19:35)
  • 62. What are Pods? (1:51)
  • 63. How to operate Kubernetes? Imperative vs Declarative (1:57)
  • 64. Demo: Working with Pods: Create, analyse and delete (Imperative and Declarative) (9:41)
  • 65. Life-cycle of a Pod (1:15)
  • 66. Demo: Managing Pod's lifespan with Life-cycle Handlers (3:04)
  • 67. Demo: Adding Container's Command and Arguments to Pods (3:27)
  • 68. Demo: Configuring Container's Environment Variables with Pods (4:33)
  • 69. Labels, Selectors and Namespaces (1:50)
  • 70. Demo: Working with Namespaces (3:47)
  • 71. Demo: Pod Resource management (4:34)
  • 72. Kubernetes Controllers | Concept and Types (0:54)
  • 73. Introduction to Replicasets (1:08)
  • 74. Demo: Working with Replicasets (6:41)
  • 75. Introduction to Deployments (1:05)
  • 76. Demo: Working with Deployments (4:37)
  • 77. Introduction to Jobs (1:15)
  • 78. Demo: Working with Jobs (3:02)
  • 79. Introduction to Services and Service Types (3:40)
  • 80. Demo: Working with ClusterIP services (3:45)
  • 81. Demo: Working with NodePort Services (3:34)
  • 82. Introduction to Storage in Kubernetes (2:33)
  • 83. Demo: Mounting Volume to a Pod (4:47)
  • 84. Demo: Mounting Projected Volume to a Pod | Secrets (4:01)
  • 85. Demo: Good old MySQL Wordpress combination with Kubernetes (7:47)
  • 86. Blackrock Case Study (1:34)
  • 87. Node eviction from a Kubernetes Cluster (2:33)
  • 88. Demo: Rolling Updates | Rollout, Pause, Status Check (3:52)
  • 89. Introduction to Taints and Tolerations (2:22)
  • 90. Demo: Scheduling the Pods using Taints (8:48)
  • 91. Demo: Autoscaling Kubernetes Cluster using HPA (3:33)
  • 92. Demo: Deploying Apache Zookeeper using Kubernetes (18:47)
  • 93. Pokemon Go Case study (2:40)
  • 94. On-premise Kubernetes or Managed Kubernetes on Cloud? Make a choice! (2:46)
  • 95. Demo: Setting up Google Kubernetes Engine Cluster (5:39)
  • 96. Demo: Accessing GKE Cluster (4:08)
  • 97. Demo: Persistent Volume and Load Balancing on GKE (6:49)
  • 98. Demo: Kubernetes on Microsoft Azure Cloud (11:55)
  • 99. Demo: Extra - Docker UI with Kitematic (8:37)
  • 100. Demo: Extra - Minikube Series | Installing Minikube (2:15)
  • 101. Demo: Extra - Minikube Series | Getting started with Minikube (10:20)
  • 102. Introduction to Serverless Kubernetes (2:42)
  • 103. Activating Cloud Run API on GCP (3:35)
  • 104. Your 1st Service on Cloud Run (5:17)
  • 105. Conclusion (0:50)


2,417 students

About This Class

Update 2021!!

  • Introduction to Serverless Container Platforms.
  • Getting started with Cloud Run and running your 1st container service on Cloud Run.

Containers

Containers are like that smart chef who can feed a whole family with just a bowl full of rice, and that's not an exaggeration at all! Containers are empowering businesses to scale fearlessly and manage their web apps hassle-free. They are a prime reason why micro and small enterprises are migrating to the cloud. All of this has undoubtedly led to an enormous demand for professionals with containerization skills.

Which skills do you need?

  1. A platform to Create, Run and Ship Containers... like Docker.

  2. A strong tool to Control/Manage/Orchestrate your containers... like Kubernetes!

This course takes you on a wonderful journey of learning containers using key components of Docker and Kubernetes. All you need is very basic knowledge of Linux fundamentals, like files and processes, along with a bit of the Linux command line.

The Containerization Journey with Docker:

Calling Docker the most widely used containerization platform would be an understatement; it has literally become synonymous with containers! The following topics covered in this course will solidify the logical basis of this statement.

  • You can only love technology if you know how it works, and that's exactly why you will be learning Docker architecture and how its Components work.

  • At first glance, a Dockerfile might seem like just another file describing app specifications. That's because it is probably the simplest yet most efficient way to build an app from scratch.

  • Docker CLI is intuitive and is inspired by your friendly Linux CLI, so adopting it is a piece of cake!

  • Docker images and containers are the most portable and reliable way to ship your microservice or web application without worrying about questions like "will it work on their infrastructure?"

  • Once you are fairly familiar with containers, Docker Networks and Volumes will open a whole new world of opportunities. Your containerization will become more reliable and will start serving its true purpose.

  • Docker Compose will combine all of the learning and take it to the next level with inter-dependent multi-container applications.

Once you have learned all of this, you will be craving to know what else you can do with containers and how you can take your containerization skills to the next stage!

The Orchestration Journey with Swarm and Kubernetes:

"With Great Power, Comes Great Responsibility"

Similarly, with a great number of containers comes a greater amount of orchestration!

  • You want to deploy 4 nodes on your cluster but can only afford to have one SSD node. And you gotta make sure that it only hosts containers which demand SSD explicitly. What to do?

  • You don't want idle containers chilling around your nodes, not serving even 10% of their capacity, but you also want to make sure that your customers don't hit a 404 when traffic is at its peak. On top of that, you don't have the time or manpower to keep your number of web-server replicas in check. What to do?

  • You are a pro at on-premise Kubernetes, but your next project happens to be hosted on a public cloud platform like GCP or Azure. You're not scared, but a little push would help you a lot! What to do?

This course is a one-stop answer to all of these questions. It covers both Kubernetes and Docker Swarm, and makes sure that you are confident and capable of making your call when the time comes!

Even though a container orchestrator is nothing without containers themselves, Kubernetes seems to be the biggest breakthrough in the world of DevOps. This course explains Kubernetes from the start. No, I mean LITERALLY from the start (Origin! It's an interesting story). It covers all of these important topics with examples, so that when you finish this course, you can use and appreciate containers as well as we do!

  • Kubernetes Architecture (Components, States, Nodes, Interactions)

  • Kubernetes Objects (Pods, Handlers, Workloads, Controllers, Services, Volumes)

  • Operations (Sorting, Configuration, Scheduling, Scaling, Deploying, Updating, Restricting)

  • Application Examples (All-time favorite Nginx web server, Custom Landing Page, Stdout Logs, WordPress blog with MySQL, Apache ZooKeeper, etc.)

  • Kubernetes as a service (GCP, Azure)

  • Case studies (Blackrock, Niantic)

With that said, see you in the course!

NOTE: Course codes can be downloaded from this link.

Happy Learning!

Meet Your Teacher


Cerulean Canvas

Learn, Express, Paint your dreams!

Teacher
Level: Beginner



Transcripts

1. CMC Promo: Hi. Welcome to this Container Masterclass. Are you looking for a new or better job in DevOps? Are you interested in making a long-term career as a DevOps engineer? Do you think that containers, Docker and Kubernetes are the best skills to pick up? Well, we must say your choice is great. Containers are one of the most game-changing advances in technology. Industries all over the world are making their app development and deployment processes faster, cheaper and more reliable. At the same time, even small startups are not hesitating to scale, since the financial risk and resource costs have lowered significantly. With such large-scale acceptance across the globe, containers have genuinely become a movement. As you might have guessed, this has also resulted in significantly increased demand and opportunities for professionals and certified experts with containerization skills like Docker and Kubernetes. That's why, if you look at Google Trends, you can easily tell that these technologies are showing no signs of stopping. So if you want to learn containers from the basics and take your skills to a professional level, you're at the right place, in the right hands. We are a group of experienced engineers, educators and certified experts on Docker and Kubernetes, and we have crafted this course to make sure that with just basic knowledge of Linux, you can proudly and peacefully learn the whole content. Speaking of the content of the course: Docker is the most popular containerization platform and Kubernetes is the most popular orchestrator, so it only makes sense for a masterclass to cover both. Starting from setups and Dockerfiles, this course covers everything, including Docker images, containers, networks, storage, Docker Compose and Docker Swarm. Once you have solidified your concepts of containers, you learn about the power of orchestration with Kubernetes without rushing at all. You learn Kubernetes architecture, workloads, services, volumes and a lot of orchestration tasks with interesting examples. You will feel a sense of accomplishment when you bring your web servers, a WordPress blog, your favorite game or even an Apache ZooKeeper cluster into tiny containers. You'll feel connected to the industry with real case studies of popular companies and products which use containers. In recent times, when everything is moving to the cloud, how can containers stay behind? You will learn how to take your knowledge to hosted Kubernetes on public cloud platforms like Google Cloud and Microsoft Azure. That's not all: quizzes will make sure you don't mix up syntax and semantics, cheat sheets will make command revision fun and quicker, and certification guidelines will help you choose proper exams and determine practice directions. We also acknowledge that containers are a growing technology, so both Docker and Kubernetes are sure to have feature updates and new topics to learn. We will keep this course up to date to make sure you grow with containers as well. So what are you waiting for? Let's start our wonderful journey with the Container Masterclass. 2. Course Outline: Let's talk about the outline of the course. We will start off with an introductory section where we will cover the basics of web applications, containers and Docker. Then we will take a deeper look into the architecture of Docker and learn how to write Dockerfiles. At the end of this section, you will receive your first cheat sheet of this course.
Then we will understand and work with Docker images and containers using the Docker command line. After understanding the container networking model and how containers communicate in different situations, we will implement different Docker networks and play around with them. Then we will take a look at different storage objects of Docker and create something using them, which will be both informative and fun. Once we're familiar with most of the Docker objects, we will take them to the next step, where we can create multiple resources from a single file using Docker Compose. Then we will understand what orchestration means and do some basic orchestration with Docker Swarm. We will make a close comparison between Docker Swarm and Kubernetes, and when you are capable enough to make your choice between both orchestrators, we will move to Kubernetes architecture and understand how it works. Then we will take a look at Pods and other workloads of Kubernetes and perform a lot of orchestration for different applications. We'll also take a look at one of the most significant case studies of Kubernetes. We will see how to set up and use hosted Kubernetes on cloud with demos and a really unique case study, and finally we will conclude the course with insight on certification exams, what these learnings mean for you, and what kind of professional prospects will potentially open up for you. But that won't be the end of it: there will be a lot of upgrades and bonuses coming up regularly. Oh, and by the way, you can find all of the code, like YAML files and Dockerfiles, in the Resources section of this lecture. With that in mind, let's start learning. 3. How to make a web application?: Before we begin to understand and work with containers in general, it is useful to take a quick look at how we make web applications. Some of you might even ask: what is a web application? The term is quite widely used, but it is quite superficially explored. Just take a look at some of these examples: productivity tools like G Suite, social media giants like Facebook, video chatting applications like Skype, entertainment platforms like Netflix, payment services like PayPal, or even a learning platform like Udemy itself, are all web applications in one way or another, which means you're using a web application interface at this very moment. If we have to define it, a web or web-based application is any program that is accessed over a network connection using HTTP, rather than existing within a device's memory. Of course, the definition is flexible, and you may choose to use one protocol or another, but on a broader perspective, it is all about not using your device, like a PC, tablet or mobile, for the computing purpose. Instead, we let those mighty, costly and reliable servers do the heavy lifting, and we just access the result of our requested data from some web interface like HTTP. This has so many advantages which just can't be overlooked. First of all, the performance of the applications will not be determined or limited by the hardware they run on. It also means that we can almost say goodbye to those long lists of hardware requirements that we used to check before trying any new software. The requirements are still there, but they're quite standard. Web apps also improve speed. Now, you might think that speed is just another performance parameter, but here speed can refer to non-laggy performance, faster updates and overall faster growth of the organization.
In general, the speed is also representative of a shorter product development cycle, since the rollout of updates will be faster and user feedback can be taken and addressed quickly. As we just mentioned, since the hardware requirements to access such apps are fairly generic, like basic consumer devices with web browsing capability, these applications can be accessed by a wider range of devices and by more and more consumers. In fact, many of the popular social media and utility apps also serve wearable devices. The policy of not owning but accessing the data also improves the overall security of both consumers and hosts. And all of it leads to a better IT economy. It is not just about apps becoming cheaper; after the rise of web apps, many revenue models, like freemium, pay-as-you-go and ad-based revenue generation, have grown significantly. Not only that, the transactions have become more transparent on all ends: businesses, consumers and even governments. Finally, the nightmare of business makers, which used to haunt them for decades, has become quite a Disneyland. Yes, we're talking about scaling. Companies don't have to invest in underutilized hardware; they can scale as they grow. Now that we have a fair idea of what web apps are and why we use them, let's get straight to business. There are three steps to the process of making web apps. First, make it, or build it, on a suitable environment. Second, wrap or package it with the necessary support and instructions to ship or deliver it to the intended client or consumer. And finally, run it on your machine or host it on your server for others to access it. In the next lecture, we will get started with creating web applications. 4. Demo: Simple Web Application: Let's install the Nginx web server on our local machine. The Nginx web server is the most vanilla example of a web application. For your information, we are running Ubuntu 16.04 on this machine. And now let's start by switching to root user privileges. As you can see, we have moved to root privileges. Now we can start our installation by first of all downloading the PGP, or Pretty Good Privacy, key for Nginx. The purpose of doing so is to make sure that when we install Nginx, the binaries are verified. The key has been downloaded. Now let's switch to the /etc/apt directory. With the ls command, let's list out the contents. We have a bunch of files here, but what we need is the sources.list file, so let's open sources.list with the Nano text editor. You can use any text editor you like, but in this course we will mostly stick to Nano. As you can see, this file contains a lot of links. These links are sources for Ubuntu to find updates. At the end of the file, paste these two lines. These lines indicate the update path for the Nginx application when it gets installed and when we update it further in the future. Save the file and exit Nano. Just to make sure we don't have any dangling Nginx installation, run apt-get remove nginx. This command will make sure that any previously installed instance of Nginx is completely removed. Now let's run apt-get update to reflect the changes we have made in the sources.list file. Use the cd command twice to go back to where we started. Now let's install Nginx using the apt-get install nginx command. Once the installation is complete, we can verify it by going to the web browser and opening localhost on port 80. Well, the installation was successful; Nginx is running properly.
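Condensed into one place, the commands from this demo look roughly like the following. This is a sketch, assuming Ubuntu 16.04 (xenial) and the nginx.org package repository; the two repository lines follow Nginx's published install instructions:

    sudo -i                                  # switch to root privileges
    wget https://nginx.org/keys/nginx_signing.key
    apt-key add nginx_signing.key            # so installed binaries are verified
    cd /etc/apt
    nano sources.list                        # append the two repository lines:
    #   deb https://nginx.org/packages/ubuntu/ xenial nginx
    #   deb-src https://nginx.org/packages/ubuntu/ xenial nginx
    apt-get remove nginx                     # clear any dangling installation
    apt-get update
    apt-get install nginx
    service nginx start                      # then browse to http://localhost:80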
This was an example of installing and running the most simple and vanilla web application: the Nginx web server. 5. A forest of VMs!: We have seen the advantages of web apps and how great they are, but it doesn't mean that this coin doesn't have a flip side. There are just so many web apps available on the marketplaces. There are so many clones of some really good ideas, and also many clickbait applications which turn out to be nothing but endless ad boards. And unfortunately, even that market is showing no signs of stopping at all. And while the liberty of choosing the app is still in the consumer's hands, all of these apps are being hosted, and they're generating traffic, occupying physical memory and storage in some of the data centers. While working with VMs, it is pretty common to have issues where the application was working smoothly on the developer environment, but it was a train wreck on the office machine, or, even worse, it crashes on the client machine. Since we have transitioned from waterfall to Agile, and from Agile to DevOps models, updates are rolling out faster than ever. And if you are unaware of these models, just ask yourself this: how often did you receive updates for software 10 years ago, and how often a year do you update the Facebook app on your mobile? While faster updates are good for businesses and consumers, they bring huge responsibilities on sysadmins to make sure none of the updates compromises the stability of the app, and to reduce downtime as much as possible. We end up using even more VMs. All of these internet-enabled applications and the rise of data science are generating huge amounts of data and populating thousands of servers every day with databases, so overall the usage of VMs has just increased significantly. Due to the adoption of web apps and microservice models, and as you might have imagined, it has resulted in nothing but forests of servers all around the globe. 6. Hello Containers!: Containers are an abstraction at the application layer which packages code and dependencies together. Let's reiterate and expand this definition further. Containers are an abstraction at the application layer which packages code and dependencies together. It means that instead of just shipping the applications, containers ship the application and the runtime environment as well, and they still manage to remain small in size. How? Let's compare them architecturally with VMs. In a traditional VM architecture, we have a hypervisor, like Hyper-V or KVM, on top of the hardware infrastructure. These are also called Type 1 hypervisors, since they don't need a host operating system. The guest OSes are provisioned on top of the hypervisor, and they acquire their isolated virtual environments. In some cases, we get a Type 2 hypervisor, like Oracle's VirtualBox, where we do need a host operating system, and the rest of the stack lays out pretty much the same. And this is how VMs function, in a very broad sense. Coming back to containers, the biggest difference compared to VMs is that they don't have guest operating systems; a container runtime environment is used instead of a hypervisor. What is it, you may ask? For now, let's say it is software which manages and runs containers. Containers contain the application code and the dependencies, as we have just seen. The dependencies don't only mean external or third-party libraries; they also mean OS-level dependencies.
The logic behind such an implementation is that all of the Linux variants share the same Linux kernel, well, more or less. So there is no point in duplicating the same set of OS files over and over in multiple VMs if all containers can just access them in their own isolated environments. With that said, what about the files which are uncommon, or, to be precise, the files which are specific to the OS? Well, containers will contain them along with the application. And since the processes of making the containers and running them are done by the same container runtime environment, there will be no conflict of environments. If this information is too sudden for you, don't worry. The intention of mentioning all of this is just to let you know how containers can attain the same level of isolation as VMs, while sharing the resources with the host OS instead of duplicating them. And what happens because of that? Well, containers consume less storage and memory. Without stretching the facts at all, gigabytes literally turn into megabytes this way. Shipping them is easier as well. We don't ship whole VMs or a long list of instructions; we just ship ready-to-run containers. And since all of the necessary dependencies are also packed with the containers, if it worked on the developer's environment, it will work on your machine as well. Since we have reduced the resources, scaling becomes easier and cheaper. Even if you need to create 10 more replicas of a backend container, you probably won't have to spend money on buying or renting a new server. In fact, if you need to roll out updates, you can still keep your applications running by extending your number of replicated containers, and you may achieve zero downtime. All of this sounds attractive and groundbreaking, but let's relate it to industries who are actually using containers. Google pioneered using orchestrated containers years ago when they started facing overwhelming amounts of data. These days, companies like Expedia, PayPal and GlaxoSmithKline are voluntarily providing themselves as references and case studies. Apart from them, educational institutions like Cornell University, and gaming giants like Niantic, which became a huge success after Pokémon GO, are all using containers. Companies are gradually migrating to containers, and as many of you might already know, DevOps jobs are increasing rapidly, and containers are an essential part of the whole DevOps movement. In the next lecture, we will finally introduce ourselves to Docker and get started with learning it. 7. Hello Docker!: It is time that we get started with the key player of our course: Docker. Docker is an open platform for developers and sysadmins to build, ship and run containerized applications. In other words, it is a containerization platform. Is Docker the only platform of its kind? Well, no. Certainly there are others, like rkt, but Docker is definitely the dominant one by the time this course is being created. Docker is tried and tested, and it is unanimously a top choice of the industry. It means that if you want to sharpen your containerization skills, Docker is potentially the best choice for various reasons, such as: more industries are using it, so it can land you more relevant jobs; it is open source and has huge community support; a lot of third-party applications are available to support Docker; and although it is built for Linux, it can be used on Windows and macOS.
For those who just don't have any other choice, that is. There are other aspects as well, but there is no point in flooding your heads with information which you might not be able to relate to; we will get into those later in this course. In the next lecture, we will install Docker on a Linux machine. 8. Demo: Installing Docker on Linux: In this demo, we will install Docker on Ubuntu 16.04, or Ubuntu Xenial. Let's start off by running a standard apt-get update command. Once we're done with that, let's install some of the prerequisites: apt-transport-https and ca-certificates, to make sure that our machine can communicate over HTTPS with verified certificates, along with curl and software-properties-common, which provides the add-apt-repository utility we will use shortly. And the installation is successful. Now let's download the GPG key for Docker and add it to our machine. And to make sure that we don't get a long list of the processes which happen in the background, let's use the -fsSL flags to keep our output as small as an OK. And it shows OK, which means we got our GPG key. Let's verify this key using the sudo apt-key fingerprint command. We can verify that we have received the correct key by searching for the last eight characters of the fingerprint, which should be 0EBFCD88. This information is provided by Docker itself, so there is not much for you to figure out. And yes, our key does have those characters as its last eight digits. Now run this command to add a repository called stable and add the content of download.docker.com/linux/ubuntu to it. We have provided the lsb_release -cs substitution to make sure that Docker provides the correct files, which means files for Ubuntu Xenial, or Ubuntu 16.04, to our stable repository. Let's run the update again to reflect the changes, then sudo apt-get install docker-ce to finally install Docker. The -ce suffix stands for Community Edition, which is one of the two editions provided by Docker. The other one is called Enterprise Edition, which is not free, so we won't be including it in this course. The process has ended, and we have successfully installed Docker CE, or Docker Community Edition. We verify that our installation is successful by running the sudo docker run hello-world command. This will run a container called hello-world, which would only be possible if the Docker installation was successful. You don't have to pay much attention to the processes which are going on, because we will be exploring all of them in sufficient depth in further modules. As it says, our installation appears to be working correctly. You may have noticed that we have been using root privileges over and over. To make sure that you can run Docker from your regular user as well, let's perform a few more steps. First, let's add a group called docker using sudo groupadd docker. Now let's add our user to this docker group and provide it root privileges. Now let's try to run the hello-world container without root privileges, with just the docker run hello-world command, and we get the same result.
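For reference, here is the whole install sequence in one place. A sketch assuming Ubuntu 16.04 (xenial) and the stable channel; the commands mirror the narration and Docker's documented CE install steps:

    sudo apt-get update
    sudo apt-get install apt-transport-https ca-certificates curl \
        software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo apt-key fingerprint 0EBFCD88        # last 8 characters must match
    sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) stable"
    sudo apt-get update
    sudo apt-get install docker-ce
    sudo docker run hello-world              # smoke test
    sudo groupadd docker                     # to run Docker without sudo:
    sudo usermod -aG docker $USER            # log out and back in to apply
    docker run hello-world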
9. Demo: Containerizing Simple Web Application: In the first demo, we had installed and run Nginx on Ubuntu 16.04 locally. In the demo after that, we installed Docker. You might find a pattern here, and you might have been able to figure out that in this demo we are going to run Nginx as a Docker container. Unlike the hello-world container, we will do this in a bit more elaborate way. Let's start with pulling an image called nginx:latest from Docker Hub's Nginx repository by running the command docker image pull nginx:latest. This will download, or pull, an image called nginx with the latest tag, which can later be run as a container. Let's see if we have got our image. Run the docker images command to show the list of images, and here we go: we have two images. First is hello-world, which we used in the last demo, and second is nginx, which we are using in this demo. Both of them have a tag called latest, and they have different sizes. Now let's run this image as a container using the docker container run command, followed by the -itd flags, and name our container web-server-nginx. With the -p option, we are mapping port 8080 of our local machine to the container's port 80. And finally, we're mentioning the image name, nginx:latest, which we have just pulled. What we got is the container ID of the Nginx container. I know all of this terminology sounds pretty new and pretty abrupt, but don't worry: in this demo, our only purpose is to run Nginx successfully. We will go through all of these terms in sufficient detail when the time arrives. Let's verify that our container is running with the command docker ps -a. And as you can see, the web-server-nginx container is running, built upon the image called nginx:latest. Finally, let's see the output of this container by going to the web browser and opening our localhost port 8080, and it works successfully.
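Gathered as a sketch, with the names used in the narration:

    docker image pull nginx:latest       # pull the image from Docker Hub
    docker images                        # list local images
    docker container run -itd --name web-server-nginx -p 8080:80 nginx:latest
    docker ps -a                         # verify the container is "Up"
    curl http://localhost:8080           # or open it in a web browser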
10. Stages of Containerization: In the previous module, we got introduced to containers and ran an instance of one. In this section, we will dig deeper into the process of containerization with reference to Docker. Before understanding Docker in detail, it will be efficient to visit a few terms briefly: Dockerfiles get built, Docker images get shipped, and containers are run. You can consider a Dockerfile the blueprint of a Docker image. If you remember well, we have already come across a Docker image and a Docker container in our Nginx container demo. So now that you know all of these three formats, definitely not in detail, but at least vaguely, we can move on to the architecture of Docker and come back to these files later. 11. How does Docker Work?: Now, the natural progression of the talk would be: how does Docker work? The Docker ecosystem has a number of offerings, where some of them are more useful than the others. We will begin with Docker Engine, also known as Docker in general, and we'll look at other important ones as we move further with this course. Let's take a look at the architecture of Docker. Docker and the whole process of containerization revolve around three main components: the Docker client, the Docker host and the Docker registry. The Docker client is the machine or medium through which we as users interact with Docker. The two basic ways of interaction are the Docker CLI, which stands for command line interface, and the Docker APIs, which again stands for application program interface. Commands can be used directly from the client terminal, whereas APIs can be used to make applications talk to Docker. As we have seen in our earlier demo, both docker pull and docker run are commands covered under the Docker CLI. We'll explore more such commands as we cover further topics. The Docker host is the machine which actually performs the task of containerization. It runs a program, or piece of software, called the Docker daemon, which listens to and performs actions asked by the Docker client. The Docker daemon builds a Dockerfile and turns it into a Docker image. Dockerfiles and Docker images can directly communicate with the Docker daemon. Images can either be built from a Dockerfile, or they can be pushed to or pulled from Docker Hub. In any case, the task is performed by the Docker host using the Docker daemon. Docker images can also be run as containers. Containers can communicate with the Docker daemon via Docker images; in other words, any changes made to the container are also reflected on the Docker image temporarily. We'll explore these parts individually soon enough. It's possible that the Docker client and the Docker host are actually the same machine as well, but the function of the Docker client as a piece of software is limited to passing the user input and displaying the output provided by the Docker host. You may find the Docker registry the simplest component of the Docker architecture. It serves as a place to store Docker images and to make them available to others. The Nginx image which we used earlier in our demo was pulled from the Docker registry. The Docker client talks to the Docker daemon bidirectionally, where it passes requests and receives results, whereas the Docker daemon and the Docker registry talk bidirectionally to push and pull images. Let's sum up all three components of the Docker architecture. First of all, we have the Docker client, which passes requests through the Docker CLI and APIs and receives results to be displayed. Then we have the Docker host, which runs the Docker daemon and works with Docker images and containers. Finally, we have the Docker registry, which acts as a universal place to access available Docker images. Now we can go back to those three formats which we saw earlier: Dockerfiles, Docker images and containers, which respectively represent build, ship and run. In the next lecture, we will take a detailed look at how Dockerfiles work.
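To make the client/daemon split concrete, here is a small sketch of the two interaction paths mentioned above, assuming a local daemon and a curl build with Unix-socket support (7.40 or newer):

    # 1. Docker CLI: the client passes the request to the daemon, prints the result
    docker version

    # 2. Docker Engine API: the same information over the daemon's Unix socket,
    #    which is what the CLI itself uses under the hood
    curl --unix-socket /var/run/docker.sock http://localhost/version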
12. A quick look at the format of Dockerfile: We can now go back to the three formats which we saw earlier, Dockerfiles, Docker images and containers, which respectively represent build, ship and run. First, let's focus on the Dockerfile. It is a sequential set of instructions intended to be processed by the Docker daemon. The availability of such a format replaces a bunch of commands intended for the build-up of a particular image, and it helps keep things organized. With time, it has also turned out to be the primary way of interacting with Docker and migrating to containers in general. As for its working, each sequential instruction of the Dockerfile is processed individually, and it results in a file which acts as a layer of the final Docker image which will be built. A stack of such sequential layers, managed by a file system, becomes a Docker image. The purpose behind this is to enable caching and ease up troubleshooting. If two Dockerfiles are going to use the same layer at some stage, the Docker daemon can just reuse the pre-created layer for such purposes. Now let's look at the structure used for writing Dockerfiles. Firstly, it is a file with no extension at all, and a general rule of thumb is to name the file Dockerfile, with a capital D and no extension. You can use any text editor to create the file; just make sure you don't put an extension. The purpose behind doing so is to make the file compatible to parse for auto-builders used by Docker to build the images, although it is not an ironclad rule, and you can name the Dockerfile according to your convenience as well, which we will look at in future demos. What you see inside the Dockerfile are instructions to be parsed, and the instructions can be generally divided into three categories: fundamental, configuration and execution instructions. In the next lectures, we will write our first Dockerfile and understand these instructions one by one. 13. Demo: Fundamental Instructions of Dockerfile: Let's write our first Dockerfile and understand its fundamental instructions. Let's see what our current working directory is. We are in the directory named after the user, under the home directory. It is quite likely that you would also be in a similar location. Once you have downloaded the material provided in the course and lecture notes and unzipped it, you should also have a directory called CC_Docker, where C, C and D are capital. We're only looking one level deep with tree in our present directory, and if tree is not available on your machine for some reason, you can verify the CC_Docker directory simply using the ls command. Now let's navigate to the CC_Docker directory. Just to get you familiar with the structure of the directory: you will find one directory for each segment or module, and subdirectories for the respective demos. If you don't intend to write the files by yourself while learning, you can simply use the appropriate files for each demo and run the results. Let's go further to the S2 directory, which contains all of the required code and files for this segment. We are in S2 at the moment. Finally, let's navigate to the directory named D1 and verify that we are at the right place. Now let's create an empty Dockerfile with the touch command. I'm creating this file because I want to show you, step by step, how to write a Dockerfile, but you will find a pre-written Dockerfile in the directory. We're using Nano as the text editor, but again, you're free to choose the one you might be comfortable with. And with this, let's open the empty Dockerfile and start writing it. The first instruction that we're providing is ARG. The ARG instruction is used to define arguments used by the FROM instruction. Although it is not necessary to use ARG, and not using it does not cause any harm to the resulting image directly, sometimes it helps keep parameters such as versions under control. Here we have defined the argument CODE_VERSION=16.04, which means that we are going to use something which will have the code version 16.04. In a very rough sense, you can treat it like a declarative directive in general programming, such as macros. But again, this argument will only be relevant for the FROM instruction. And next is the FROM instruction. FROM is used to specify the base image for the resulting Docker image that we intend to create. In any case, the FROM instruction must be there in any Dockerfile, and the only instruction that can be written before it is ARG, which we just saw. Generally, FROM is followed by an operating system image or an application image which is publicly available on Docker Hub. Here we want to have Ubuntu as our base operating system image, with code version, or OS version, 16.04. So the name of the image is followed by a colon, and the argument is mentioned in curly braces, preceded by a dollar sign. As we have already mentioned in our ARG instruction, our CODE_VERSION is 16.04, so it will be passed as an argument, and the base image for this Dockerfile will be considered as Ubuntu 16.04. To add a little more substance to the image, we're also including a set of RUN and CMD instructions, but we will explore their meanings and applications in the next demos. For now, let's just save this file. Again, it is important to remember that we must not give any extension to the Dockerfile, and should mostly name it Dockerfile itself.
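The file we just wrote looks roughly like this. A sketch: the ARG and FROM lines follow the narration, while the RUN and CMD lines are representative placeholders, since the lecture defers their explanation to the next demos:

    # Dockerfile
    ARG CODE_VERSION=16.04           # argument consumed by FROM
    FROM ubuntu:${CODE_VERSION}      # base image resolves to ubuntu:16.04

    RUN apt-get update -y            # explored in the next demo
    CMD ["bash"]                     # explored in a later demo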
It is time to build the Dockerfile and turn it into an image. Let's do it with the docker build command. The -t option is used to tag the image, or, in other words, name the image to make it easily recognizable. We'll tag the image as img_from, and the dot in the end directs Docker to the Dockerfile stored in the present directory. As you can see, the image is being built up step by step. Let's understand each of these steps. The first step of storing the argument was fairly simple, so it finished quickly. The second step involves setting up the base image, and it does so by pulling multiple file system layers from Docker Hub and stacking them in the proper hierarchy. Once it is complete, it moves to the third step, which is to update the OS, and we have already provided the permission with the -y flag, where y stands for yes. Once these steps are done, our image is built. We can verify that the image is built via the docker images command. As you can see, we have four Docker images, among which img_from is the one which we created recently, meaning 11 seconds ago, while the others were previously created or pulled. 14. Demo: Configuration Instructions of Dockerfile: In this demo, we'll go a step forward with writing a Dockerfile and explore configuration instructions. Again, we're in the S2 directory, which contains an individual directory for every demo. Let's navigate to the directory called D2. There we go. As you can see, there is a Dockerfile already present in this directory. Let's open it with Nano. As you can see, this Dockerfile also has a base image of Ubuntu 16.04, mentioned using the FROM instruction as described in the previous demo, but this time we have skipped using the ARG instruction and directly provided the version number. Now we have RUN and ENV, which are configuration instructions. They are not the only entries in the list of configuration instructions, but these are the ones that we will cover in this demo. Let's go through them one by one. RUN asks Docker to execute the command mentioned with it on top of the base image, and the results are committed as a separate layer on top of the base image layer. Here we have more than one mention of RUN, and each one creates its own separate layer. With the first RUN instruction, we have provided commands to update the OS, install curl, and clean up afterwards, whereas the second RUN simply makes a directory named codes under the home directory. Don't confuse it with our host machine's home directory, though; here we're talking about the base image OS's home directory, and the codes directory will be created on that base image, not on our host machine. Then we have used ENV, which is another configuration instruction. It does what its name suggests: it sets up environment variables. We have used it three times, to set the USER, SHELL and LOGNAME environment variables. Just like the previous demo, we have CMD, but we'll go into that later. Again, we will use the docker build command to build this image.
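The file being described would look roughly like this. A sketch reconstructed from the narration; the exact package list and variable values are assumptions:

    # Dockerfile
    FROM ubuntu:16.04

    # Each RUN commits its result as a separate layer on top of the base image.
    RUN apt-get update && apt-get install -y curl && apt-get clean
    RUN mkdir -p /home/codes         # created inside the image, not on the host

    # ENV sets environment variables inside the image.
    ENV USER student
    ENV SHELL /bin/bash
    ENV LOGNAME student

    CMD ["bash"]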
But this time we will tag it as img_run-env, to separate it from the previous image. As you can see, in this build the first step directly involves setting up the base image, since we have skipped using the ARG instruction. Step two will perform all of the commands used in the first RUN instruction, and step three will perform the command of the second RUN instruction, which is making a directory. Steps four, five and six will set environment variables as mentioned in the Dockerfile, and the super fast step seven will get our image ready to run. Let's list out our available images with the docker images command. These images are the ones currently available on the host; our top image is the img_run-env image. Now let's go one step further and run this image as a container with the docker run -itd command. The i, t and d represent interactive, TTY-enabled (teletype) and detached, respectively. We're naming the to-be-running container cont_run-env, and the target image is img_run-env, which we have just created. The command was successful, and we have just received the unique container ID provided by Docker for our container. Here we have two containers running, among which the first is the one which we ran recently. It is up, meaning running, for five seconds, and it is running the bash command. Now let's execute our container's bash command. Here, the bash command and the process were running in the background due to the detach flag set while running the container; now we are bringing it to the foreground. As you can see, we're now in the root directory of our cont_run-env container. Let's list out the directories here. Yes, the structure looks similar to a regular Linux instance. Now let's verify the environment variables which we had set with the ENV instruction while writing the Dockerfile. As you can see, the USER, SHELL and LOGNAME variables are just as we had set them up. Now let's navigate to the home directory. As we list it out, we can also verify the creation of the codes directory, which was supposed to be created by our RUN instruction in the Dockerfile. Finally, we can get back to our host environment by exiting the container using a simple exit command. 15. Demo: Execution Instructions of Dockerfile: We are back in our S2 directory. Let's navigate to directory D5 and list out its contents. We have a Dockerfile for this demo stored in here. Open it in a text editor; it has multiple new instructions. Let's start with the most basic yet important instruction: FROM will set ubuntu:trusty as the base image for this Docker image. LABEL is a key-value pair which adds metadata to the image. We have added two labels as key-value pairs in a multi-line argument for the LABEL instruction: the creator key has the value Cerulean Canvas, while the version key has 1.0. The next one is a RUN instruction, which will update the package list of the base image in a non-interactive manner. Then we have ENTRYPOINT. As the name suggests, ENTRYPOINT allows the user to configure the container's starting point. In other words, ENTRYPOINT will bring the container back to the starting point whenever the container is set to restart. For this Docker image, the entrypoint is defined in exec form, which is also the preferred one. It will execute ping five times when the container starts running. Last but not the least,
it's the CMD instruction. We have seen so far that CMD provides the default command to the executing container, but if ENTRYPOINT is mentioned in the Dockerfile, then CMD will always be executed after the entrypoint. When CMD is defined in exec form and does not contain the executable, it will be treated as a parameter of the ENTRYPOINT instruction. A Dockerfile may contain multiple CMD instructions, but only the last CMD instruction will be in effect for the Docker image. Here, the CMD instruction is in exec form without an executable, which means that it will provide localhost as the parameter for the executable of the entrypoint, which is ping. If we sum up ENTRYPOINT and CMD here, we have set the container to ping localhost five times as soon as the container is up and running. Let's exit from the Dockerfile and build our image. Sequentially, we will build the Docker image based on the Dockerfile in the current directory and tag it as img_entry-cmd. The build context is sent to the Docker daemon, and it will download the ubuntu:trusty image from Docker Hub to our local Docker storage. Now the base image has been downloaded, and it is running in an intermediate container to build the ubuntu:trusty environment for our application. Step two will create labels for our Docker image. Step three will execute the RUN instruction, which will update the ubuntu:trusty base image and commit the result in a new intermediate container. Step four will set the starting point of the container at /bin/ping. And the last step is the CMD instruction, which will provide localhost as the parameter for the entrypoint to execute at the start of the container. At the end, all the layers will be stacked sequentially by the Docker daemon, and a final img_entry-cmd image will be created with its image ID and latest tag. Let's check out the list of images available in our local Docker storage with the docker images command. As we can see, img_entry-cmd:latest has been built and stored in our local Docker storage. It's time to run a container based on that image. Type docker run --name cont_entry-cmd, followed by img_entry-cmd, and hit enter. And here we go: the container is pinging our localhost as per the ENTRYPOINT and CMD instructions, and it has successfully pinged localhost five times. Five packets have been transmitted and received successfully without any packet loss, which means our application is running perfectly. Now let's check the status of cont_entry-cmd with the docker ps -a command. As we can see, the container has exited with exit code zero after finishing its default task, which means the container's execution was successful.
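Put together, the Dockerfile from this demo would read roughly as follows, reconstructed from the narration:

    # Dockerfile
    FROM ubuntu:trusty

    # Metadata as key-value pairs in one multi-line LABEL instruction.
    LABEL creator="Cerulean Canvas" \
          version="1.0"

    RUN apt-get update -y

    # Exec form (preferred): the executable the container always starts with.
    ENTRYPOINT ["ping", "-c", "5"]

    # Exec form without an executable: treated as ENTRYPOINT's parameter.
    CMD ["localhost"]

Built with docker build -t img_entry-cmd . and run with docker run --name cont_entry-cmd img_entry-cmd, it pings localhost five times and exits with code zero.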
16. Demo: Expose Instructions of Dockerfile: Let's navigate to the D6 directory and list out all of its contents. We have a Dockerfile for this demo available here. Open the Dockerfile in the text editor. As we can see, it contains four Dockerfile instructions. The FROM instruction will set ubuntu:16.04 as the base image for this Docker image. The next instruction is RUN, which will update the base image and install Nginx on the ubuntu:16.04 base image. We chain the subcommands of the RUN instruction with the logical AND operator, which means that in order to run the second subcommand, the first command should be a success. Here, if we consider the sequence, apt-get update of the base image should be a success in order to install Nginx. After the Nginx installation, apt-get remove and rm -rf /var/lib/apt/lists/* will clear up the local repositories of retrieved packages. The next instruction, EXPOSE, is a type of documentation which will inform Docker about the port on which the container is listening. Keep in mind, it does not publish the port, but it fills the gap between the Docker image builder and the person who runs the container. We have documented with the EXPOSE instruction that this Nginx container will listen on port 80. The CMD instruction will make the Nginx application run in the foreground by turning off Nginx as a daemon process. Exit from the Dockerfile. Build the Docker image with the docker build command from the Dockerfile available in the present directory, and tag it as img_expose. The build context is sent to the Docker daemon. As we already have the ubuntu:16.04 image in local Docker storage, the Docker daemon does not download it again; it is cached. In step two, the chained RUN instruction is executed one subcommand at a time. First, it will update the package index of the base image ubuntu:16.04. After successfully updating the image, Nginx will be installed on the base image, and at the end, the local repositories of retrieved packages will be cleared up. Step three is to expose port 80 of the container in order to inform Docker that the Nginx app will listen on port 80. The last step is setting up the default command, CMD, which will set the Nginx app as the foreground process in this container. Our image has been successfully built and tagged as img_expose. Let's list out all the images in our local Docker storage. There we go: img_expose has been successfully created and stored in Docker. Let's run a container based on the img_expose image. Type docker run -itd; the --rm flag will automatically remove the container once it has stopped. Follow it with the container name cont_expose, followed by -p 8080:80, which means: map the container's port 80 to the host's port 8080 in order to access the Nginx service. And finally, we give the image name, which is img_expose. Press enter, and we get a container ID. Let's list out all the running and stopped containers with the docker ps -a command. Our cont_expose is up and running for seven seconds. The container's port 80 has been mapped to port 8080 of the host, so that we can access the Nginx web server in our favorite web browser. Now go to your favorite web browser, mine is Chrome, and type http://localhost:8080 in the address bar. Press enter, and we can see the default home page of the Nginx web server.
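As a sketch, the four instructions narrated above would look like this; the cleanup path follows common practice and may differ slightly from the course files:

    # Dockerfile
    FROM ubuntu:16.04

    # && chaining: each subcommand runs only if the previous one succeeded.
    RUN apt-get update && \
        apt-get install -y nginx && \
        rm -rf /var/lib/apt/lists/*

    # Documentation only: records the listening port, does not publish it.
    EXPOSE 80

    # Keep Nginx in the foreground so the container stays alive.
    CMD ["nginx", "-g", "daemon off;"]

Running docker build -t img_expose . and then docker run -itd --rm --name cont_expose -p 8080:80 img_expose reproduces the demo.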
17. Demo: Miscellaneous Instructions of Dockerfile (Part 1): Let's have a reality — or rather pwd — check. All right, we are in the Demo 8 directory, and as always, let's list out its contents. We can see two Dockerfiles. Now, before you raise your eyebrows with a ton of questions — why do we have two Dockerfiles in one directory? Isn't that bad practice? Wouldn't Docker get confused? — allow me to clear a few things up. There can definitely be more than one Dockerfile in a repository or folder, but they cannot both be named "Dockerfile". Firstly, your OS won't allow that, so there isn't much to argue about. And secondly, naming a file "Dockerfile" has just one purpose: making the image build command shorter by relying on Docker's automatic Dockerfile lookup. If we simply have files with different names which are essentially Dockerfiles, Docker won't bother about it; it will simply build whichever file we mention. With that out of the way, let's have a look at these files: we have a child and a parent Dockerfile. So let's give proper respect to the parent and read it first. All right, so this is a Dockerfile, and it's pretty simple. We have just three instructions, among which two are fairly familiar to you. The middle one is a new entry on our learning curve: the ONBUILD instruction. Its purpose is pretty simple: it allows us to specify a command which will be passed on to the next image that uses this image as its base image. Sounds confusing? Well, take this example. We have ubuntu:16.04 as our base image, and we will create some image from this Dockerfile. Now, if that image is used as the base image of another Dockerfile, it will behave just like ubuntu:16.04, since CMD can be overwritten by the next Dockerfile's CMD or ENTRYPOINT instruction. So if we want some changes to persist when this image is used as a base image — like having a file called greetings.txt created in the /tmp folder — we need to use the ONBUILD instruction. We are echoing the sentence "Greetings from your parent image" to /tmp/greetings.txt and expecting it to exist whenever we use the image created from this Dockerfile as a base image. With that clear in our heads, let's exit this file. Now let's open the child Dockerfile. We have just two instructions. The first one mentions the base image, called papa-ubuntu:latest, with the FROM instruction. You may wonder about that name: it is the name of the image which we will soon build. And we're running bash with the CMD instruction. Ideally, we want papa-ubuntu's greetings.txt to be visible in this image. Now let's build the parent image using the docker build -f command, followed by the name of the Dockerfile, the target image name, and a dot to indicate the present directory as the build context. Similarly, let's build the baby-ubuntu image from the child Dockerfile. Check this out: during the first step of setting up the base image, it executes a build trigger which has been inherited from the ONBUILD instruction of the base image's Dockerfile. Let's see if both of our images are listed — yes, they are. Run a container from the baby-ubuntu image and name it baby-container. When we execute this container, we head straight to the root of its base image's Ubuntu OS. Let's navigate to the /tmp directory using cd and see if greetings.txt is present. Yes, it is here. We can also cat it and verify its content, which is the same as what we echoed into it. We can exit this container, since our ONBUILD demonstration is successful.
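Reconstructed from the narration, the two files and the build commands would look roughly like this — a sketch, with the file names assumed, since the demo only calls them the parent and child Dockerfiles:

    # Dockerfile.parent — a sketch
    FROM ubuntu:16.04
    ONBUILD RUN echo "Greetings from your parent image" > /tmp/greetings.txt
    CMD ["bash"]

    # Dockerfile.child — a sketch
    FROM papa-ubuntu:latest
    CMD ["bash"]

    # docker build -f Dockerfile.parent -t papa-ubuntu .
    # docker build -f Dockerfile.child  -t baby-ubuntu .   # the ONBUILD trigger fires here
    # docker run -it --name baby-container baby-ubuntu
    # cat /tmp/greetings.txt            # inside the container: the file exists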
18. Demo: Miscellaneous Instructions of Dockerfile (Part 2): Welcome to the concluding lecture of the Dockerfile section. In this lecture, we will achieve three objectives: understand and implement container health checks using Dockerfiles, do the same with the STOPSIGNAL instruction, and, while we are at it, containerize a sample Flask application. As always, we will start by knowing our present working directory, which is Demo 9 under CMC. If we check the list of contents, we have three files this time: app.py, Dockerfile, and requirements.txt, which is a text file. Let's explore them one by one, starting with app.py. We're looking at a sample Flask application. Those who are familiar with Python and have worked with Flask earlier will find this file a piece of cake, and for those who have not touched Flask — don't worry, there is nothing incomprehensible here. Flask is a WSGI (Web Server Gateway Interface) framework; in other words, in the case of Python, it allows a Python application to talk to web servers in order to forward and receive web and API requests and responses. We have started our file with a simple import statement to import the Flask class from the flask library, or framework. If you're wondering why in the world we would have the Flask framework or pip installed — hold your breath, those pieces will join the puzzle soon enough. Next up, we're creating an app instance from the Flask class. Its argument is __name__. This name string can be replaced by any other that you like, but it is recommended to keep it __name__ if we are running a single-module application: when the Flask app is run, __name__ is replaced by __main__, which makes our instance the main instance. The next line is a decorator, which is a wrapper that describes a function using another function as its argument. The purpose of this decorator is to route incoming requests for forward slash (/), which is comprehended as localhost on port 5000. Next, we're defining the function which will run within this web application instance. It is called cmc, and it simply returns a string — "Welcome to the Container Masterclass by Cerulean Canvas" — as its return value. Finally, we're instructing Flask that if our instance is main — which it is — then run this application and make it publicly available. Let's exit this file.
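As a reference, app.py as walked through above would look roughly like this — a minimal sketch; the port comes from the narration's mention of localhost:5000:

    # app.py — the sample Flask application from the demo
    from flask import Flask

    app = Flask(__name__)          # the main instance for a single-module app

    @app.route("/")                # route requests for / to the function below
    def cmc():
        return "Welcome to the Container Masterclass by Cerulean Canvas"

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)   # publicly available on port 5000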
Next up, we have the smallest file in the whole course, called requirements.txt. If you remember, during the introductory theory on containers we mentioned that containers reduce a long list of requirements — witness it: we have just one entry in requirements.txt, which is Flask version 0.12.2. But we are not going to install it externally either; after all, containers are isolated environments, so every installation should ideally happen during the image build itself. Speaking of images, we need a Dockerfile to build this app, so let's exit this file and open the Dockerfile. Starting off, we have the ubuntu base image, and we're running an update and an installation of python-pip and curl. We're copying all of the contents of this host directory to the /app directory of the image and making it the working directory. Next up, we're installing the contents listed in requirements.txt. We could have simply mentioned Flask there, but it is standard practice to list your requirements in a separate file and install them using the file itself; it also makes the Dockerfile more readable for other developers. Now that our prerequisites are set up, we can run app.py as a Python application using the CMD instruction. Before CMD, though, we have the HEALTHCHECK instruction. A health check is a way to perform a user-defined — or developer-defined — periodic check on a container to determine whether it is in the desired situation, also known as healthy, or not. This instruction comprises three aspects, or three types of arguments: interval, timeout, and command. Interval defines the timeframe after which the periodic health check is repeated. We have kept it at 10 seconds, which means the health check will be performed on the running container every 10 seconds. Timeout determines when to back off if the container remains unhealthy; backing off would imply performing a container restart. This brings us to another question: how do we determine whether the container is unhealthy? Docker acknowledges the fact that every container or application has its own definition of being healthy. For example, in this Flask application, just because resources are properly allocated and the container is running does not mean the application is working correctly. What if the web server is not serving anything? What if we come across 401 or 404 errors, where the desired webpage is not available? It would completely kill the purpose of this application in the first place. That's why we have the command — or CMD — argument. It executes the commands that follow CMD, and the results define whether the container is healthy or not. So it is up to us to provide the proper commands which can correctly determine the container's situation. In this case, we're providing a command with a logical OR condition, which means: either this, or that. Our first command curls localhost on port 5000, which would display the result of the Flask application, but we have attached a --fail flag to it, which means that if the command encounters an error like 401 or 404, it will not show any output — not even a default response such as "this page cannot be displayed". In that case, the second command is performed, which returns exit status 1. The reason for writing the second command this way is that the HEALTHCHECK instruction considers exit status 1 as unhealthy. So we are curling the address serving the Flask application every 10 seconds, and as long as it doesn't encounter any serving error, it will not return exit status 1, which will mean the container is healthy. If it does encounter an error like 401 or 404, it will return exit status 1, which will mean the container is unhealthy, and enough of such results will cause a back-off. We write HEALTHCHECK before the CMD instruction so that it does not get in the way of the default command. Next is STOPSIGNAL. When we terminate a Docker container, Docker sends the SIGTERM signal to the Linux process responsible for running the container. SIGTERM gracefully kills the process, which means it clears out the cache and memory before detaching the process from its parent and freeing up resources to be used again. But this might cause a crash or an endless loop if there is a fatal error or a vulnerability being exploited in the application, which means it becomes necessary to use SIGKILL instead of SIGTERM; SIGKILL immediately kills the process. STOPSIGNAL allows you to replace the default SIGTERM with the signal you desire. In other cases, you might even have to use SIGUSR1 or SIGSTOP, depending on the nature of your application. We're replacing SIGTERM with SIGKILL in the STOPSIGNAL instruction. With that said, let's save this file and exit. Let's build the image and name it flask-app using the docker build command. The build is done; now let's run a container out of it and call it flask. There we go. Now let's list our containers. The first one is flask, and if you take a look at its status, it shows up and running along with "healthy", which means the health check is being performed. If you want to verify that the health check is correct, curl localhost on port 5000 — and there we go, it's the output of our Flask application. Finally, let's stop the container. When we list our containers again, we can see that flask stopped just recently, but unlike the other containers, it stopped with exit code 137, which in Linux terms indicates the exit code of a process terminated by SIGKILL — so our STOPSIGNAL instruction also worked correctly.
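To recap, the Dockerfile described in this lecture would look roughly like this — a sketch under stated assumptions: the timeout value and the host-side port mapping for the curl verification aren't spelled out in the narration:

    # Dockerfile sketch for the HEALTHCHECK / STOPSIGNAL demo
    FROM ubuntu:16.04
    RUN apt-get update && apt-get install -y python-pip curl
    COPY . /app
    WORKDIR /app
    RUN pip install -r requirements.txt   # requirements.txt: Flask==0.12.2
    HEALTHCHECK --interval=10s --timeout=5s \
        CMD curl --fail http://localhost:5000/ || exit 1
    STOPSIGNAL SIGKILL                    # replace the default SIGTERM
    CMD ["python", "app.py"]

    # docker build -t flask-app .
    # docker run -itd --name flask -p 5000:5000 flask-app
    # docker ps    # STATUS reads "Up ... (healthy)" once checks pass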
It seems we have achieved all three objectives of this lecture, so see you in the next one.

19. Demo: Docker Hub Walk-through: It is about time to go beyond our little host machine and get to know the wide community of Docker. The best way to do so is to get started with Docker Hub. Get back to our web browser, go to hub.docker.com, and where we land is the home page of Docker Hub. Docker Hub is a cloud-based service hosted by Docker itself which allows you to build, link, and manage your Docker images. It also provides some production-grade features, like automated builds. Just for your information, the automatic Dockerfile lookup that we used in our previous section — where we did not provide any file name while passing the build command, and yet Docker built from the Dockerfile — has its hosted counterpart in the backend service of Docker Hub. To access its provisions, first we need to create an account, which is totally free; all it needs is a generic set of data like a username, email ID, and password. Once we have added that, let's agree to the terms of service and prove that we are not robots. After this step, you should receive an email at the ID you provided, and you should click the activation link — I mean, that's obvious, right? Once you have activated your account, you will land on a page which should look similar to this one. It is called the dashboard. It displays your username and provides links to almost everything that you might want to do on Docker Hub. First of all, we are on the repositories tab, where you can explore the globally available repositories or create one yourself. You can also create an organization, which serves as a unit of people management above the repos themselves; it is useful if you are not acting as an individual but for — or on behalf of — an organization. And since we have not created any repos yet, we don't have any starred repos or contributions in general. On the panel above these tabs, we have a few links. The first of them takes you to the dashboard, where we already are, so clicking it would be pretty much pointless. By clicking the Explore option, we get a whole new world of popular repositories created by individuals and organizations around the world. To be honest, one of the aspects which makes Docker so popular and loved among engineers is the enormous contribution by the community in such a short time, and the fact that Docker acknowledges its importance and provides one place to access it all. These repos are ordered by the number of pulls they have received, and our Nginx — which was used in the first-ever container of this course — is at the top of the list. The Organizations option provides us another way in to everything regarding organizations, and the Create menu provides a list of options where we can create either a repo, an organization, or an automated build. An automated build can be created by providing a build context, which is generally a repository containing a Dockerfile named "Dockerfile". In other words, it is the web version of the short docker build command that we have been using in the previous section; and since it is the web version, we have to use a code and version management service like GitHub or Bitbucket. Finally, we have a list of options for our own profile, where we can do some customization — like adding more information about ourselves, changing passwords, getting some kind of help, and, most importantly, the documentation.
In the next videos, we'll understand Docker images in greater depth and work with them.

20. Understanding Docker Images: We have already studied and worked with the Dockerfile; it's time to focus on Docker images. As we have seen previously, a Docker image is a collection — or stack — of layers which are created from the sequential instructions in a Dockerfile. The layers are read-only, although there is an exception for the topmost layer, which is of the read-write type; we will get into that later. Docker images can be recognized either by their unique image ID, which is provided by Docker, or by a convenient name and tag, which is provided by us, the users. Finally, they can be pushed to or pulled from Docker Hub, which we just visited in the last demo. If we visualize the layers of a Docker image, they stack up like this. We start with the boot file system, which is pretty much similar to Linux's own boot file system. It is an arrangement of cgroups, namespaces, and resource allocation which virtually separates the image from the rest of the files on the host or cloud. On top of that, we have the base image layer, which — along with the layers above it — follows the file mapping laid out by the boot file system layer. Next, we have layers such as the working directory, environment variables, ADD or COPY, EXPOSE, CMD, and so on. Speaking of intermediate images, here are a few points to remember. First of all, as we have mentioned earlier, intermediate images are created out of individual Dockerfile instructions, and they act as layers of the main — or resultant — image. All of these intermediate images are read-only, so once the image is built, these layers will not accept any change whatsoever. They have separate image IDs of their own, which can be viewed using the docker history command. If you're wondering why Docker has intermediate images in the first place: it is for caching. For example, if you're building two different images from the same base image — like Nginx and Apache on top of ubuntu — the base image layer will only be downloaded once and will be reused where it is the same. To make this caching simpler, we have intermediate images where each layer has its own significant identity, separating itself from all other layers in terms of usability. But intermediate images may not be used on their own, since they would not be sufficient to run a container process by themselves; even the smallest image would consist of at least one base image and one CMD or ENTRYPOINT instruction. Finally, they are stacked as loosely coupled read-only layers by AUFS, which is a union file system.

21. Demo: Working with Docker Images | Search, List, Push, Pull and Tag: First of all, we have the docker search command. It is used to search for images on Docker Hub. Just to clarify: you don't need a Docker Hub account to search for repos from your host, or even to pull them; it is only a requirement for using the web interface of Docker Hub or for pushing repositories to it. As for the syntax of this command, the phrase docker search is followed by the name of the image and an optional version number after a colon. Let's execute this command. Here we get a list of Python images sorted by the number of stars. Of course, many of them are frameworks built on top of Python, since "python" would be one of the keywords. There are descriptions of the images to provide brief insight, and a check of whether each image is official or not.
Here, the first image has the most stars, and it is also the official image. Next, we have quite a special case: the docker search registry command gives the official image of Docker Registry from Docker Hub. If we don't want to get such a long list of repositories, we can also put filters on our search. Here we have applied the filter is-official=true, which will only show us official images. There we go — we only got one image. Sweet, right? For those who like their results neat and tidy, Docker also lets you format the results of the search. Here, the format is mentioned in double quotes, and it starts with the keyword "table", which means we want a tabular format. Then we enter the desired fields. The fields are mentioned in double curly braces, and they are separated by \t, which stands for the tab whitespace character. You might have guessed by now that this will create three columns, one for each field. Now that the predictions and wish lists are done, let's run the command. There we go — our crowded little table is here, and it is showing the same repositories as before, just in a visually different format. Also notice that we only have the three fields we mentioned in the command, and the rest of the fields are skipped. Moving on from docker search, we have the docker images command. It is a shorter version of the docker image ls command, and both of them do exactly the same thing, which is to list the images on your host. As you can see, these are the images we built during the previous section. On the other hand, if we want to list the versions or instances of a particular type of image, we can mention the image name after the docker images command. Let's try to list all our ubuntu images here. We can also see the size of each image, which denotes the space it currently occupies on the storage of the host machine. Of course, specifying the version number, preceded by a colon, narrows the list down to just one entry. Furthermore, if we want to see the full form of truncated data like the image ID, we can use the --no-trunc flag as well. But be cautious while using it, since it can make the results messy — really messy. Then we have docker pull. It pulls the specified image from Docker Hub to the Docker host. Here we have provided nginx with the :latest tag, so whichever image has the latest tag in Docker Hub's nginx repository will be pulled. As you can see, it has downloaded the newest version of Nginx, tagged latest. If we use nginx:alpine, Docker Hub will provide the image with the alpine tag. Now, if we grab a list of the available nginx images on our host, we get two of them. The first is the alpine one, which we just pulled, and the second is the latest version. As you can see, the two vary majorly in terms of size: alpine is like a minimal Nginx image, which is smaller in size since Alpine, as the base OS itself, is smaller. Finally, if we want all variants of the nginx image — say, for testing purposes — we can hit the command with the --all-tags flag, and we will receive the missing images from the repository. Once we list the nginx images again, it is clearly visible that these are different versions with different sizes. We're back on our Docker Hub dashboard. Let's click the Create Repository option so we can make a repo and push images to it. On the left pane, Docker is generous enough to list the steps to create a repo.
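For reference, the commands used in this part of the demo would look roughly like this — the field names in the format string are illustrative, since the narration doesn't spell them out:

    docker search python
    docker search registry
    docker search --filter "is-official=true" python
    docker search --format "table {{.Name}}\t{{.Description}}\t{{.StarCount}}" python
    docker images                 # same as: docker image ls
    docker images ubuntu          # only ubuntu images
    docker images --no-trunc      # full, untruncated image IDs
    docker pull nginx:latest
    docker pull nginx:alpine
    docker pull --all-tags nginx  # every tag variant of the repo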
First of all, we're supposed to provide a namespace for our repositories, so that we don't have to make the name unique across the globe; generally, the namespace is the same as the username. Now let's name our repository. We're naming it repo-nginx; you can name it anything you like. The next step is the description of the repo. Here, as you can see, we have given a short and sweet description of the repo; if you want to describe your repo in much more detail, you can jump to the full description section. And in the final step, we can set the visibility permission for our repository. Docker offers one free private repo and unlimited public repos with a free Docker Hub account, so make your choices wisely. We don't need private repos for now, so we will select public visibility for this repo. Now let's create the repo by pressing the Create button at the end of the page. We have successfully created our repo-nginx, and as we can see, there are some tabs above the short description of the repo. The first one is Repo Info. It displays the basic information about repo-nginx, such as its visibility — which is public — and the short description. The second one is Tags: you can add multiple images under a single repo, separated by different tags; if you do not specify any tag for an image, it will by default take the latest tag. The third one is Collaborators: it consists of a user, or a list of users, to whom the owner of a private repo wants to grant read, write, or admin access. The fourth one is Webhooks: a webhook is an HTTP callback POST request, and it can be used to notify users, services, or other applications about a newly pushed image in the repo. The last one is Settings, where the user can change the visibility permission of the repo and can also delete the repo from the user's Docker Hub account permanently. Now, as you can see, you can pull the images available under the repo-nginx repository by using the specific docker pull command — docker pull ceruleancanvas/repo-nginx — and store them on your machines. Since this is your first-ever repository created on Docker Hub, let's indulge ourselves by giving it a star. Starring a repo is a way to show that you like the repository, and you can remember it for future reference. Now let's switch back to the terminal. Before pushing an image to the Docker registry, we need to log in to Docker Hub using the docker login command, interactively. Here we are asked to enter our Docker Hub login credentials; we will enter the username, which is ceruleancanvas, and its password. We have successfully logged in to our account — with a warning which says that our Docker Hub password is stored unencrypted in the config.json file on our machine for future reference. That's okay for now, so we will ignore the warning and proceed to the next step. Now we will tag the local image nginx:latest as a new image. We specify where we want to push this image: we write the namespace under which the registry is hosted, which is ceruleancanvas for us, and then we mention the repository name to which we want to push the image, which is repo-nginx. You can give your own custom tag to the image, such as cc-nginx for this example; if you don't mention any tag, it will take latest by default. This two-stage format is mandated to push an image to a public repository.
Now let's check out our newly tagged image by listing all images on our machine. There we go: we have the original nginx:latest image and the newly tagged ceruleancanvas/repo-nginx:cc-nginx image. But did you notice something? These two images have the same image ID. That is because the docker tag command has created an alias for your image under its new name, so that the original image stays untouched and any changes can be performed on the new alias image. Now let's push ceruleancanvas/repo-nginx:cc-nginx to our repo-nginx using the docker push command; we have already specified the path to the destination in the image name. As we can see, Docker is pushing each layer of the original latest image. At its end, the Docker daemon will stack all of these layers sequentially and create a new image with the tag cc-nginx in repo-nginx. At the end of the process, we get a new image digest identifying the pushed image. Now let's switch back to our Docker Hub account to verify that our image has been successfully pushed. We'll navigate to the repo-nginx repository, go to Tags, and — we have successfully pushed the image. The image tag, size, and last-updated date are mentioned here. In the next lecture, we will dig deeper into the image by inspecting it and looking at its history.
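Condensed, the tag-and-push workflow from this demo is (assuming ceruleancanvas as the Docker Hub namespace):

    docker login
    docker tag nginx:latest ceruleancanvas/repo-nginx:cc-nginx
    docker images        # both entries share one image ID: the tag is an alias
    docker push ceruleancanvas/repo-nginx:cc-nginx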
22. Demo: Know your Docker Image | Inspect and History: As we know, the docker images command lists all of the Docker images stored on our machine, with some basic information — such as image ID, repository name, and image tag — to identify different images. But what if we want to know more about a particular image? Well, for that we have the docker inspect command. Docker inspect returns information about every single Docker object that has contributed to the creation of a particular Docker image, which can be very useful at debugging time. Let's list all of the ubuntu images available on our local machine by typing the command docker images ubuntu. There we are: we have four ubuntu images with different tags under the ubuntu repository. Let's inspect the ubuntu:latest Docker image: type the docker image inspect command followed by the name of the image you want to inspect — we will type ubuntu:latest here — and press enter. As you can see, it has displayed detailed information about the latest ubuntu image as a JSON array. Here we can see the extended image ID of ubuntu:latest, followed by the repo name and the repo digest, which is a 64-digit hex number. Next, we have a container ID. Don't confuse it with a container running ubuntu: it is the intermediate container which Docker created while building the ubuntu image from its Dockerfile. ContainerConfig holds the configuration details of that same intermediate container, stored as the image's metadata for reference. Next is the information related to the scratch image and the architecture which is used as the base here. It also mentions the actual and virtual size of the final image. And at last we have the RootFS field, which shows the digests of all the intermediate layers of this image. If you want to access one specific detail about an image, you can format the output of docker inspect: type docker inspect followed by the --format flag, provide RepoTags and RepoDigests — separated by a colon — as the arguments to the format flag between quotes, and at last type the image name and press enter. As a result, we get the repo tag and repo digest of ubuntu:latest. We can also save the inspect results of an image to a file in JSON format for future reference. Here, we want to store the configuration details of this image in a text file. To do so, type docker image inspect with the format argument {{json .Config}} — in quotes and curly braces — followed by ubuntu, and store the result in a file called inspect_report_ubuntu.txt. That is just a name we have given to the file; you can give it any name you want. List all the available files: inspect_report_ubuntu.txt has been successfully created. Let's check out the contents of this file: the config details of the latest ubuntu image are available in the text file. If you remember, the RootFS field in the inspection of the ubuntu:latest image showed only the digests of all the intermediate layers in the image. Based only on digests, it is difficult to determine how the image was built. For that, we have the docker history command: docker history shows us all the intermediate layers of an image. Let's find the intermediate layers of this image by typing docker image history ubuntu in the terminal. We get all the intermediate layers of our latest ubuntu image. These layers are stacked sequentially, starting from the base image at the bottom up to the CMD layer at the top of the results. All the layers have their associated image IDs, sizes, and creation times. To dig deeper into this, let's find the history of one of the images we built on our local Docker host: we'll find the history of img_apache. Type docker image history followed by the image name, which is img_apache, and press enter. You might be wondering why some of the rows of the IMAGE column in both results contain "missing" while some of them have their image IDs. As you may remember, intermediate image IDs are given to the layers created by Dockerfile instructions, and they can be used for caching by our own Docker host. But if an image is pulled from Docker Hub, such caching would not happen — and since it could cause environment clashes, we are not provided any image IDs for the intermediate layers of pulled images. All we can know is that they exist. So we have two types of intermediate images, which are easy to distinguish: the ones built by some other Docker host, which we have just used as a base image, and the ones committed by our own instructions. You can also identify them by the time they were committed: the base image's intermediate layers are 17 months old, whereas the other ones were committed just a few hours ago.
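The inspect and history commands from this demo, gathered in one place — the exact format strings are reconstructions from the narration:

    docker image inspect ubuntu:latest
    docker inspect --format "{{.RepoTags}} : {{.RepoDigests}}" ubuntu:latest
    docker image inspect --format "{{json .Config}}" ubuntu > inspect_report_ubuntu.txt
    docker image history ubuntu
    docker image history img_apache   # locally built layers keep their IDs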
23. Demo: Clean up Docker Images: Having unnecessary images lying around our host can be quite a bother. Firstly, it consumes a lot of disk space, and having multiple versions of similar images can cause confusion. Nonetheless, let's list our available images. Just take a look: the list is already exhaustive. Time to narrow it down a bit to keep things neat and tidy. First, let's use the rm — remove — command. We will remove the image with the 1-alpine-perl tag. As you may remember, these images were pulled as a stack of layered intermediate images, so those will also be removed: all of the intermediate images, along with the resulting image, are removed from our host. Just to verify how our command did, let's get another list of images — and we shouldn't find any image with the 1-alpine-perl tag. Another way to write image rm is to simply write rmi and follow it with an image ID. When we use an image ID instead of an image tag, all images containing that ID will be removed; here, the 1-alpine and alpine variants of the nginx image will be affected by this command. On the other hand, such an operation involving the ID of an image which is used more than once cannot be performed normally. That's why we get this error, along with the suggestion to remove them forcefully. Let's do so: we will use the same command with the force flag. As you may notice, all of the images with this ID are freed from their tags, and they are removed along with their intermediate images.

24. A Container is born!: We are done with both Dockerfiles and Docker images, so now it is time to pay our much-needed attention to the central point of this course: containers. We have already seen the formal definition of containers, but considering our updated knowledge, the simplest way to describe a container would be: a running instance of a Docker image. You can compare it to the analogy of a process and a program in Linux: just like a process is a running instance of a program, a container is a running instance of an image. With the help of namespaces on the Linux host, containers provide isolation similar to VMs: each container has its own file system, network driver, storage driver, and administrative privileges as well. Despite all of this, a container would easily be a hundred times lighter than a VM hosting the same set of software. We have seen previously that Docker images are made of read-only layers, and that the topmost layer is writable; this top layer is provided while creating a container out of the image. With the correct network configurations, containers can also talk to each other via IPs or DNS. A container also follows a copy-on-write policy to maintain the integrity of the Docker image, which we will explore soon. You may wonder what exactly we mean by "running" the image. Well, much to no one's surprise, "run" can be defined pretty simply in our context: it means allotting resources like compute, memory, and storage.

25. Container Life-cycle: A container's lifecycle is pretty much similar to a process's lifecycle in Linux — because, after all, a container is just a running process instance of a Docker image. We start with the created state, which can be a part of the docker run command or can be explicitly caused by the docker create command. If it is part of the run command, it will automatically lead to the next stage, which is the running state. It means that the created container — the scheduled process — is running, and resources are being actively used by it. Alternatively, if a container is explicitly in the created state, it can be sent to the running state with the start command. Next is the paused stage, which won't occur on its own for the most part. You can strategically cause it with the docker container pause command and resume it similarly with the unpause command: the container process goes into a pending state, and once resumed, it is back to being up and running. Next is the stopped stage, which means the process of the container is terminated, but the container ID still exists — so it can be re-scheduled without creating another container and registering a new ID. This can happen for multiple reasons: it can be caused by an error, by a restart policy, or simply by the container having finished its run-to-completion task.
We can manually stop and restart containers with the docker container stop and restart commands, respectively. Finally, we have the deleted stage, where the terminated container is removed and its ID is freed up; it will stop appearing in the list of containers. To expand further on multiple containers from a single image, consider this diagram: the read-only layer is common, and the read-write layers fetch data from it. This does not cause any data corruption, since the data of the read-only layer is not going to be modified in the first place, and the system just has to perform multiple read operations on the same data. This optimizes the storage of the Docker host, whereas the number of running containers from the same or different images on a single host will always depend on the host's architectural limitations, like memory and processing speed. Another important aspect of containers is their copy-on-write mechanism. What's that? Well, it's a pretty simple deal. We have seen that the writable layer of a container is mounted over the read-write layer of the Docker image. Well, that was true, but it has a little secret to it: the read-only layers' files themselves are untouched. A copy of them is created, and the read-write layer is mounted on that copy, which makes it easier to recover the layers in case of any unauthorized host file system access or content damage.

26. Demo: Container Run Vs Create: Let's test out both of these commands with a busybox container. First, we will use the docker container create command. It is followed by the -it flags, which mean it will be interactive and teletype-enabled. We haven't given it the detached flag, since we don't need to. We're naming our container cc-busybox-A, and we're using the busybox image with the latest tag. When we run the command, since the content of the image is not available locally, it will be pulled from Docker Hub. Once it is pulled, what you see at the end is the unique container ID created by Docker. The ID is unique at least across the host — and the cluster, if you're running any. Now our container should be created. To list our containers, we have to run the command docker ps -a, and once we do so, we get a list of all the containers which are running, are about to run, or have finished running on this host. The output layout is fairly simple, and the topmost entry is our recently created container. It is not in the running state yet, which can also be verified from the STATUS column. It is followed by quite a few other containers which have finished running and exited some time ago. Here, the resources are ready to be allotted to the container but haven't been allotted yet. Don't worry, we'll let this container enjoy its dream run as well. But before that, let's see what happens when we run a container instead. You might find this command similar to what we have used in some of our initial demos; that's because this is the most mainstream way to run one. This time we also put in the -d flag, so we don't have to dive into the container, and we have named it cc-busybox-B. Since we had already pulled the busybox image last time, Docker has cached the entirety of it and has simply returned a container ID. If you're wondering why we have an --rm flag tagging along: it instructs Docker to delete this container after it has finished running. Let's check out docker ps -a again, and what we see is our top entry replaced by the cc-busybox-B container.
Unlike its counterpart cc-busybox-A, this one has been running for six seconds. In fact, there is also a three-second difference between its creation time and its running time; you can assume that Docker took that time to allocate the resources and register it as a process with its host. Since we have our containers running, we'll play with them a bit more in the next lecture.

27. Demo: Working with Containers | Start, Stop, Restart and Rename: Let's start our demo where we ended the previous one. The list of containers is still the same; just the time durations have updated. In the previous demo we created the container called cc-busybox-A, but we did not run it. Now, to send it into the running state, let's use the docker container start command, followed by the name of the container. We don't have to provide flags like -it, since they have already been passed during the create command. Let's run it. We won't even get a container ID here, since that too had been generated previously; all we get is the name of the container as a nod to the success of the command, in typical Docker CLI style. Time to get repetitive and list the containers again using docker ps -a — and we have an update: our created container cc-busybox-A is now finally in the running state. Just like start, we also have a command to stop containers. Since A has just started running, let's stop cc-busybox-B instead. The confirmation we get back is the name of the container, and if you want to verify it, let's list our containers again and — wait, where is our cc-busybox-B? Does that mean there's an error? Well, no. If you remember, we had applied a flag called --rm in our last demo with the docker run command on the cc-busybox-B container, which meant that the container would be deleted once it had stopped running. The rule is simple: if you want to reuse the container, keep it; if you don't want to use it, remove it and free up some resources. Next, we have the restart command. Let's restart our cc-busybox-A container; we'll also give it a buffer of five seconds. And when we verify it, what we get is a freshly started container, up and running. Finally, I think all of us would agree that cc-busybox-A was not that great of a naming convention to follow — it's just lengthy, complicated, and bland. If you encounter such thoughts with your containers, we have a command to rename them. Let's be a bit more casual and rename cc-busybox-A to my-busybox, and when we list the containers, we can see the change reflected. By the way, notice that the container has just been renamed, not restarted — which means we can rename containers almost whenever we want, unless it affects some other containers. In the next lecture, we will do something more application-related with our containers.
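The lifecycle commands exercised across these two demos boil down to roughly the following — the -t 5 value for the restart buffer is an assumption based on the narration's "buffer of five seconds":

    docker container create -it --name cc-busybox-A busybox:latest
    docker run -itd --rm --name cc-busybox-B busybox:latest
    docker container start cc-busybox-A
    docker container stop cc-busybox-B        # --rm removes it once stopped
    docker container restart -t 5 cc-busybox-A
    docker container rename cc-busybox-A my-busybox
    docker ps -a                              # verify each state change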
28. Demo: Working with Containers | Attach and Exec: Just like in the previous demos, we have a list of containers here. Now let's use the docker container attach command. It means that we are attaching the standard I/O and standard error of our container to the terminal of our Docker client. We are attaching the my-busybox container here, so let's hit enter. As you can see, we are now accessing the standard I/O — a terminal — of busybox from our Ubuntu terminal. If we hit ls, we see the list of available directories in busybox's root environment. We can play around a bit more and navigate to other directories as well. If we exit, we return back to our Ubuntu host terminal. And there is an interesting aspect to the attach command: when we list the containers again, we can see that the my-busybox container is not running — it exited a few seconds ago. In other words, attaching to the container conditions it to stop when we exit the attachment. An alternative to this is the docker exec command. It allows us to use any command we want, and it executes it in the container. But first, let's start our container again. Now we have used docker exec — which stands for execute — with the -it flag, and directed it to run and print the result of the pwd command. Once it succeeds, we get a forward slash, which indicates the root of our busybox. Unlike with attach, if we list the containers again, we'll find our container still up and running.

29. Demo: Inspect and Commit Container: It is time to know our containers in greater depth. First, we have the list of containers; just to avoid any confusion, we have run an ubuntu container after the context of the last demo. Let's get more information about it with the docker inspect command, followed by the container name. What we get as output is a JSON description of the container. We don't need to be intimidated by the sheer amount of information — we'll interpret it one piece at a time. Starting from the top, we have the container ID provided by Docker, the timestamp of the container's creation, and the path of the process the container is running, with no arguments, since we haven't provided any. In the State block of the container, we have indications of the fact that our container is in the running state — not paused, restarting, or dead — and that it has not been killed by going out of memory. Its process ID on Ubuntu is 694. Then we have information about the image in terms of the image digest, and we have various paths, such as the hosts path, log path, and configuration path. Then we have another bunch of information where most of it is irrelevant to this particular container, so the values are either null or empty; what does matter here are the name of the container and the fact that it has not restarted yet. Following this, we also have network, volume, and other information which might be useful to you once we proceed further in this course. For now, we can focus on finding specific information from the inspect command, since even if you are completely familiar with all the attributes, reading them every time can be really daunting. Let's use the --format flag with our inspect command and narrow the results down to just the IP address. We can do this by narrowing the range to the networks under NetworkSettings and choosing the IPAddress field. There we go: we have the IP address of our container. Next is the commit command. To use that command effectively, we need to make at least one change to the container's state after it is created from the image. Just to remind you, this ubuntu container is created from the same image that we had pushed to our Docker Hub repo. Let's exec into it with bash — you should already be used to this command by now. Let's verify by listing the directories: yes, we are in the container. Now let's run an update; the purpose here is just to change the state of the container from when it was created. Once the update is complete, let's exit. Now let's use the docker commit command, followed by the container name, which is my-ubuntu, and a new image name in the Docker Hub repo-image format; we have kept it as updated-ubuntu:1.0. Once we enter it, the update will be committed as a new image for our Docker Hub repo. As you may have guessed, it is essential for us to be logged in to our Docker Hub account to use this demo.
The updated container is committed as an image: its read-write layer turns read-only and is stacked on top of the previous layers of the former image. So instead of the containers, if we list out the images, we can find the updated one, which can then be run directly as a container — and we won't have to rerun the update command. This helps in maintaining the versions of Docker images. In the next lecture, we will learn about port mapping.

30. Demo: Container Exposure | Container Port-mapping: In this demo, we'll map a port of our host machine to a container's port. The command is fairly simple, as we just have to extend the run command with a flag. We'll map our host's port 8080 to the container's port 80 on TCP by mentioning it after -p. Notice that the image we use here is the one we created while working with the EXPOSE instruction. Now, when we run the container, we'll see the ports mentioned in the output. The output looks a bit messy, but the annotations should help. Next, we'll create another container from the same image, called con-nginx-A. Instead of providing ports and protocols like earlier, this time we'll just provide a capital -P and allow Docker to map the ports by itself. Here it will use the information provided by the EXPOSE instruction in the Dockerfile and tally it against the available ports from the host machine's network drivers. We can see that the new container has port 80 mapped from the container to port 32768 of the host. We can also view this information by hitting the docker container port command, followed by the container name. Finally, when we load localhost on port 8080 in our web browser, we can see the Nginx home page, which indicates that our port mapping was successful. When we do the same with the other container's mapped port, it shows the same thing as well. In the next lecture, we'll clean up our workspace.
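In command form, the two mapping styles from this demo look roughly like this — the first container's name isn't given in the narration, so it's assumed here:

    docker run -itd --name con-nginx -p 8080:80/tcp img_expose   # explicit mapping
    docker run -itd --name con-nginx-A -P img_expose             # auto-map EXPOSEd ports
    docker container port con-nginx-A    # e.g. 80/tcp -> 0.0.0.0:32768
    curl http://localhost:8080           # Nginx home page via the explicit mapping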
31. Demo: Container clean-up | Prune and Remove: So in this demo, we will learn different ways of removing containers. Let's list all of our containers — and yes, there are quite a lot of them. In fact, many of them are not even that significant at the moment and should be removed. First, we have the basic rm command, followed by a container's name; here we have picked one of the stopped containers. Once it is removed, it will disappear from the list. Then we have the same rm command, but instead of providing a name, we have provided the container IDs of some stopped containers, and the result is the same: they disappear from the list after being removed. The case will be a bit different with running containers. Just to make sure that we are not making any mistakes while deleting a running container, Docker asks us to provide the force flag; I would say it is a kind gesture, since it avoids potential unforced errors. As we add the force flag, nothing can stop us from removing it. If we want to be kind to our containers and terminate them properly, we can send a termination signal using the docker container kill command. But as you can see, we still have quite a few containers remaining, and we don't need the stopped ones for the most part. To remove the stopped containers, we have a command called docker container prune. It is short and sweet and doesn't require any names or IDs; it will simply remove all of the dangling containers and free up whatever resources it can. We had three such containers, which got removed, and we got back 1.8 megabytes of free space. Finally, our list of containers contains only the live ones. In the next module, we'll go deeper into networking.

32. Multi-container Applications and Introduction to Networking in Docker: Till now, we have played with single containers in our demos. Even when we did use more than one container, they were completely independent of each other. For example, one container might be enough to host a static landing page, but a smartphone app would definitely require more than one container, with each of them serving a specific purpose. In such a case, information exchange between containers becomes a crucial factor in the overall performance of the application. In other words, they need to talk. The communication can be one-to-one, one-to-many, or many-to-many. In the case of Docker containers, these communications are managed by objects called network drivers. To define them simply: a Docker network driver is a piece of software which handles container networking. They can be created simply using the docker network command — no images or files are required. Speaking of networks, they can span from single-host instances to multi-host clusters. For now, we'll focus on a single host, and we will visit cluster networking when we deal with Docker Swarm. Docker network drivers are quite reliable, since Docker itself uses them to communicate with other containers and the outside world. This also means that Docker itself provides some native network drivers, if we don't want the bother of creating ones ourselves; as a trade-off, that means less control over IP ranges and ports. Apart from the networks we create and the default ones, Docker also supports remote network drivers, which are developed by third parties and can be installed as plugins — although they are still in quite a growing state, and mostly they're useful for specific use cases, like enabling networking on a certain cloud provider. Apart from network drivers, Docker also provides the IPAM — IP Address Management — driver, which handles IP address ranges and distribution if they are not specified by the admin. I know you have loads of questions, like: How do these networks work? Are there any types? Is there any structure which they follow? Well, we will explore all of these details in the next lectures, when we study the container networking model and the types of Docker networks.

33. Container Networking Model (CNM) of Docker: Let's dig deep into the container networking model. First of all, we have the host network infrastructure. This includes both software and hardware infrastructure details — like using Ethernet or WiFi — and the host OS's kernel network stack, in our case the Linux network stack. On top of that, we have the Docker networking drivers, which include the network and IPAM drivers; we briefly stated their functionality in the last lecture. On top of these drivers, we have the Docker Engine, which creates the individual network objects. As you might have guessed, user-defined and default container network objects fall on top of the Docker Engine, since they are provisioned by it; these blocks are a part of Docker itself. On top of a container network, we have the running containers, which are accompanied by at least one endpoint each. I say "at least one" because it is normal for a container to be connected to two or more networks and hence have more than one endpoint. Speaking of endpoints: they are the container-side representation of a virtual Ethernet connection, which is the common means of networking across Docker.
They contain networking information such as the IP address, the virtual MAC address, and ports, as mentioned earlier. If a container is connected to more than one network, it will have more than one corresponding endpoint, each containing a different IP. The scope of these IPs would typically be limited to the host in the case of a single-host implementation. Within the same scope, if two containers are connected to the same network, they can also communicate via DNS, where container names can be used instead of IPs. Container networks supply this information to the network and IPAM drivers; the network and IPAM drivers then translate these requests into packets supported by the host network and transmit them, making sure containers can communicate with the outside world. Because if that didn't happen — forget Nginx, you wouldn't even be able to execute an apt-get update command properly. So this is how the container networking model works. In the next lecture, we will look at the network driver types in detail.

34. Docker's Native Network Drivers: Out of native and remote network drivers, we're going to work with the native drivers. Native Docker network drivers are used in the creation of default and user-defined networks. Do you remember this diagram from the previous lecture? Let's shrink it a bit for convenience. Now let's consider the first type of network: the host network. The idea is pretty vanilla here: the network credentials of the host are directly reflected on the container endpoint, which means containers connected to this network will have the same IP as the host itself. This doesn't mean the containers abandon their true nature, though. Getting a bit more practical: let's say we have two containers connected to the default or a user-defined host network. In this case, both containers will communicate via virtual Ethernet, reflecting the capabilities and limitations of the host machine. Moving on from host, we have the bridge network. It is also the default network for Docker containers: if we don't explicitly connect our containers to any network, they will be connected to the default bridge network. The name of this network helps a lot in defining its properties: it creates a virtual Ethernet bridge, and all of the containers connected to this network are connected to this bridge via their container endpoints, while the bridge communicates with the host network. It means that the containers will be isolated from the host's network specifics: containers will have different IPs than the host. We can define the IP range and subnet mask for the bridge and its subsequent networks, but if we choose to opt out of that decision, the IPAM driver manages this task for us. We can ping or address these containers using the IPs provided by the virtual bridge.
Such an arrangement is called Swarm, Moored in Docker. Swarm heavily relies on oil in a truck provisioning off darker. We're yet took over swarm in our course. But do not worry. This explanation will not flood you with unknown form terminologies in case off bridge network. All we had to worry about was containers I P. Since we had only one host. But with all the network will have multiple host having multiple containers where any combination off communication might be necessary. So while establishing or performing container to container communication, our network driver can't get away by just keeping track off containers. I p. It also needs to shout its communication to the proper host. To solve this overlay network will help two layers off information underlay network information which will contain data regarding source and destination off horse. I'd be and overly information Lear, which will contain data about source and destination containers. I p. As a result, the communication packet header will consist off. I p addresses off both source and destination hosts and containers. If you look into it practically when we introduce warm 35. Demo: Create Docker Networks: in this demo, we will create our first Doctor network and understand it. We will do it by using doctor network, create command and furnish it with driver flag. Our driver for this demo is a bridge network. So we will pass the argument bridge and finally we'll give it a suitable name. My bridge. What we get as a result is an i d for the network object which has been created. Now before we dig deep into my bridge, let's create another network called my Bridge. One will provide a few more perimeters with this one for better compassion. Apart from the previously provided flag driver on its Value bridge, we have also provided the sub net and I'd be range again. We received another I D. Let's list these networks out. As you can see, my bridge and my bridge one are not the only available networks on the list. That is because Dr Roy's us A. Set off default, created networks using different network drivers hit the our bridge host and none you can tell by the names that bridge and host are using corresponding network drivers. None is a special case toe. It is used to indicate your isolation and lack of connectivity. We can also filter the search by providing the filter tag. Let's put the filter that we only desire bridge network so the driver field will be said to bridge and here we have all the networks created with bridge network driver. 36. Demo: Working with Docker Networks | Connect, Disconnect, Inspect & Clean: In this demo, we will connect one off or containers with one off the networks that we have created. First of all, let's see if we have any running containers. The container should be interning state, since network object connectivity in Docker follows the rules off inter process communication in Lenox, which means if there is no process, nothing can talk to it in terms off networks. As we can see, we have to off our spare containers from previous model, but both off them on an exit state. Let's start my Cuban to contain a Now keep a list off networks in front off us to make better decisions. We will use Docker Network Connect Command, followed by network name and container name and hit Enter. We don't get any sort off response like network I D or container I d. From Docker. So a fair way to verify the connection would be to use talker. 
After running the inspect command on my-ubuntu, if you navigate to the networking fields of the output, you can see a description of the bridge network my_bridge1 attached to the my-ubuntu container. It also has the alias, which is the same as the one we received after the creation of that bridge network. You can also notice the endpoint, which is described with an endpoint ID and an IP. Next command: instead of using a separate command to connect to a Docker network, you can mention it along with the run command using the --network flag. Here we are providing the host network to a container named cont_nginx, which will be created from the nginx image with the latest tag. Notably, if you run the docker container port command on cont_nginx, you won't receive any port-mapping information, since no port mapping takes place with the host network driver; the container communicates with the Internet using the ports of the host itself. We can get more information about this host network by using the inspect command on the container, and as you can see, we get the network ID and endpoint details of the host network instance, just like with the previous container. Here, too, you can notice a field named bridge under the network settings. This field is empty. The reason is that if we do not provide any network manually, Docker provides the default bridge network to every container, and here we attached the container to the host network instead.

Now let's inspect the default bridge network. It seems that it, too, has its endpoint, subnet and IP address range. If we look at the containers field, we will find my-ubuntu, or, to be precise, only my-ubuntu. The reason cont_nginx is not listed here is that it is connected to the host network driver. Docker connects a container to one of the default networks, and mostly the priority is bridge, unless we mention otherwise explicitly. Note the IP address of my-ubuntu under the default bridge network, which is 172.17.0.2. Now let's inspect the user-defined bridge network, in our case the my_bridge1 network. It has similar parameters to the default bridge; apart from a different endpoint, IP range and IDs, it also has the my-ubuntu container connected to it. But the IP is different from the default bridge. In other words, the my-ubuntu container can be accessed from both networks using the corresponding IPs. We can also format the output of the inspect command, like we have done previously. Let's grab the value of the scope field of the default bridge network, or we can grab a pair of ID and name for the same. As is visible in the output, the first entry is the network ID, and the second one, following a colon, is the network name. Now let's list our containers again to see what to do next. Well, we can see what happens when we disconnect a network from a container. Let's use the docker network disconnect command, followed by the network name and container name, which are my_bridge1 and my-ubuntu in this case. Finally, if we inspect our network, we can see that the container my-ubuntu, which was previously mentioned there, is successfully out of sight. Similarly, if we inspect the container, we won't find the user-defined network either.
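Condensed, the verification and clean-up steps look like this; the --format templates are one reasonable way to pull out the fields described above:

    docker container run -d --name cont_nginx --network host nginx:latest
    docker container port cont_nginx   # prints nothing: no mapping on the host network
    docker network inspect bridge      # full JSON, including connected containers
    docker network inspect --format '{{.Scope}}' bridge
    docker network inspect --format '{{.ID}}: {{.Name}}' bridge
    docker network disconnect my_bridge1 my-ubuntu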
37. Demo: Ping one Container from another: In this demo, we will finally see the results of our Docker networking hustle. Starting off, let's follow our standard practice of getting a list of Docker networks. We are quite clean: all we have are the default host, bridge and none networks. Now let's create a bridge network called net_bridge and provide the subnet and IP ranges, as mentioned in the command. Once that is done, run a container called cont_database from the redis image and connect it to the net_bridge network. Let's fetch its IP, since we will be using it later in this demo; the IP of this container is 172.20.240.1. Let's run another container, from the busybox image, and call it server_A. This one is also connected to the net_bridge network, just like the previous one. Now let's inspect our net_bridge network to find which containers are connected to it. There we go: both cont_database and server_A are connected, just as we had expected. Furthermore, server_A's IP is 172.20.240.2, following the range which we had provided. Run a third container, also from the busybox image, and call it server_B. Notice that we have not mentioned any network whatsoever, which means it will be connected to the default bridge network. We can verify that by inspecting its network information, and while we are at it, let's note its IP as well, which is 172.17.0.3.

Now let's switch the view a little. We have three terminals, which we will be using for three different containers. If you don't want to go through all that trouble, you can use multiple terminal tabs and keep switching between them, or run them on multiple displays; however you feel comfortable. Let's exec into the cont_database container with a bash command. Once we have navigated to the root of the container, let's try to ping Google. Oops! It seems ping is not installed in the base image of redis, so let's go ahead and fix that: run a generic update and install the ping IP utility with this command. Once the installation is complete, resume where we paused the flow of this tutorial: ping Google. I love saying this. Ping Google. Three or four pings should be enough; let's stop it with Ctrl+C, and what we see is a successful ping with no packet loss. Now, if you remember, we noted the IPs of all of the containers. server_A's IP was 172.20.240.2; let's ping that. It was a success. It means two of our containers can talk to each other without any packet loss, since they are connected to the same bridge network. This communication was more or less IPC, or inter-process communication, within the Linux host; but considering the isolation they operate under, it can be treated like the two ends of an application communicating. Going further, let's go to another terminal and exec into server_A, then ping Google and the cont_database container from it. Both will be successful, since the bridge network allows containers to communicate with the external world using virtual Ethernet, and containers connected to the same network can talk to each other using their endpoints. Lastly, let's run the server_B container, which is connected to the default bridge network, not the user-defined net_bridge. If we try to ping Google, it is a success; but if we try to ping the other containers, we fail, since they are not connected to the default bridge at the moment. On the other hand, if we use the DNS names of the containers instead of their IPs, containers connected to the same user-defined network will have no trouble pinging each other at all, while the default bridge offers no such name resolution. This explains and demonstrates the capabilities and limitations of bridge networks.
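The whole experiment fits in a handful of commands; a sketch, assuming the same names and an illustrative subnet:

    docker network create --driver bridge \
      --subnet=172.20.0.0/16 --ip-range=172.20.240.0/20 net_bridge
    docker container run -d --name cont_database --network net_bridge redis
    docker container run -dit --name server_A --network net_bridge busybox
    docker container run -dit --name server_B busybox   # default bridge
    docker container exec -it cont_database bash
    # inside the (Debian-based) redis container:
    apt-get update && apt-get install -y iputils-ping
    ping -c 4 google.com     # external world: works
    ping -c 4 172.20.240.2   # server_A on the same network: works
    ping -c 4 server_A       # DNS name: works on user-defined networks only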
38. Never lose a "bit" of your data!: From a third person's point of view, this may seem like a funny story, but it can potentially cost you your job. That is the prime reason why we need efficient storage solutions with containers. The logic is pretty simple: a container's data needs to be backed up somewhere as permanent storage. A quick question that will come up in your mind would be: what, or which, data should be backed up? To answer that, we need to look back at the layered structure of Docker image and container data. If you remember, we have two types of layers: read-only layers, which hold permanent data and are never modified, thanks to the copy-on-write policy, and the read-write layer, which holds temporary or volatile data. If a container stops or dies, the volatile data vanishes. So now we have our answer: we need to back up the important data from the volatile read-write layer of the container.

The next question is where to store the data. Well, just about anywhere. Do you want to store it on the machine which hosts Docker? Go ahead. Do you want to store it on another server? Go ahead. Do you want to store it on a cloud? Go ahead as well. And the last genuine question which comes to mind: are there particular types of storage objects? Yes, there are. The most commonly used storage object type is called a Docker volume. With a volume, the container storage is completely isolated from the host file system; although the data of a volume is stored in a specific directory on the host, volumes are controlled and managed by the Docker command line. Compared to the other storage options, which we will visit soon enough, volumes are more secure to ship and more reliable to operate. Let's understand volumes. Volumes are storage objects of Docker which are mounted to containers. In terms of implementation, volumes are dedicated directories on the host's file system. If a containerized app is shipped along with a volume, people other than the developer himself using the app will end up creating such a directory on their own Docker hosts. The container provides data to the Docker engine, and the user provides commands to store the data in the volume or to manage the data in the same. All the container knows is the name of the volume, not the path on the host; the translation takes place on the Docker machine, and so external applications with access to containers have no means to access volumes directly. This isolation maintains the integrity and security of both hosts and containers.

The second option is bind mounts. The exchange of information is pretty similar, apart from the fact that instead of creating a directory named after the volume, bind mounts allow us to use any directory on the Docker host to store the data. While this might be convenient in some cases, it also exposes the storage location of the container, which can make dents in the overall security of the application and the host itself. Apart from that, users other than the developer may not have such a path on their host, and creating it may not be within their privileges or comfort. Volumes and bind mounts let you share files between the host machine and the container, so that you can persist the data even after the container is stopped. Finally, if you are running Docker on Linux, you have a third option: tmpfs, or temporary file system, mounts. When you create a container with a tmpfs mount, the container can create files outside the container's writable layer. As opposed to volumes and bind mounts, a tmpfs mount is temporary and only persists in the host's memory, not in storage. When the container stops, the tmpfs mount is removed, and files written there won't be persisted. The only sensible use case which comes to my mind for tmpfs is to store sensitive files which you don't want to persist once the application gets deleted, something like the browsing history which gets deleted when we use an incognito tab. tmpfs mounts have their limitations: they can't be shared between containers, and they won't work on non-Linux environments like Docker on Windows.
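Here is a minimal sketch of the three options side by side, using the --mount syntax; the names and paths are made up for illustration:

    # Volume: managed by Docker under /var/lib/docker/volumes on the host
    docker container run -d --mount type=volume,source=my_vol,target=/data nginx
    # Bind mount: any directory on the host, exposed by its real path
    docker container run -d --mount type=bind,source=/home/user/app_data,target=/data nginx
    # tmpfs: kept only in the host's memory, Linux hosts only
    docker container run -d --mount type=tmpfs,target=/data nginx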
39. Demo: Working with Volumes | Create, List and Remove: In this demo, we are going to create a volume using the Docker command line. Let's type the command docker volume create, followed by the name of the volume; here we are naming the volume vol-busybox. Once the command succeeds, we get the name of the volume as the note of it being created. Before we do anything with this volume, let's create another one, but this time in a slightly different way: here we are going to run a container using the ubuntu image, and we are going to mount the volume vol-ubuntu on the container's /tmp directory. Again, we will not do anything with this volume, since this demo primarily focuses on the creation of volumes. Now let's list the volumes to see what we have created. Let's type docker volume ls, and as you can see, we have four volumes here: two of them were created by us, whereas two of them were created by Docker using the local volume driver, just like every other object we have created previously, like images, networks or containers. We can also filter the output of the ls command. Let's type docker volume ls and add the filter dangling=true, which lists the volumes that are not mounted to any container. Here vol-busybox has not been mounted to any container; similarly, one of the Docker-provisioned volumes is not mounted or used currently. Also, we can inspect our volume, just like every other object, by using docker volume inspect followed by the volume name. As you can see, we get the creation timestamp, driver type, labels (none here), mount point, name of the volume, and scope, which is local.

Now let's try to remove one of the volumes which we have created. Type the command docker volume rm, followed by the volume name; here we are removing the volume vol-ubuntu. As you can see, we get an error response from the Docker daemon. It says that this volume cannot be removed because it is in use, which means it has been mounted to a container; if we removed the volume, the container and its performance would be affected. Let's get a list of containers to see which container is blocking our action of removing the volume. As you can see, the randomly named container which was built from the ubuntu image just two minutes ago has been mounted with the volume vol-ubuntu. Although it is not mentioned here, you can guess it, since all the other containers have been up for more than an hour. Let's type the command docker container rm followed by its name, and the container is removed. Now let's rerun the command docker volume rm vol-ubuntu. This time we don't see any error, and the volume should have been removed. Let's verify it by listing the volumes again. And yes, vol-ubuntu is no longer visible.
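The full lifecycle from this demo, condensed; the container name in the rm step is whatever random name Docker assigned on your machine:

    docker volume create vol-busybox
    docker container run -d -v vol-ubuntu:/tmp ubuntu
    docker volume ls
    docker volume ls --filter dangling=true
    docker volume inspect vol-busybox
    docker volume rm vol-ubuntu        # fails while a container still uses it
    docker container rm <random_name>  # remove the blocking container first
    docker volume rm vol-ubuntu        # now succeeds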
40. Demo: When Containers meet Volumes: In this demo, we are going to demonstrate the use of volumes which we discussed in the theory. Let's start by recreating the volume which we deleted in the last demo, vol-ubuntu. We will do it by running a container from the ubuntu image called cont-ubuntu. Let's see if both the volume and the container are available again. To remind you, we can always check the container using the docker container inspect command and find the information about the volume by formatting its output. As you can see, the container called cont-ubuntu has the volume vol-ubuntu attached to it. Now let's exec into it and run a bash command on it. You can notice that we are not executing it as a daemon container, which means that once this command succeeds, we jump right into the terminal of our container. Right now, this container is in its default state, which means that even if we delete it and spin it up again, nothing will change. So let's make a few changes to it, which will be reflected in its read-write, topmost layer; if we then deleted the container, the changes we have made would normally be lost. The action can be pretty simple here; we don't need to do anything heavy. Even the simple act of updating the OS creates enough changes to be recognized. So let's update this ubuntu by typing the apt-get update command. Once it is updated, let's change our working directory to /var/log. As you may have guessed, this is the directory where ubuntu keeps its logs. Let's list out the available files, and we have a lot of log files here. The purpose of doing so is to make sure that once we stop the container, we should be able to see the same files as a backup on our host machine. And the reason for that is, when we created this container, we mounted this directory to our host using the volume vol-ubuntu. Let's exit the process and stop the container.

Now, let's get root privileges on our host machine. As you can see, we are in the same working directory, just with root privileges. As we saw in the theory section on volumes, Docker stores the backup of volume data under the /var/lib/docker/volumes directory, so let's navigate there and list out the contents of this directory. As you can see, we have directories for all of the volumes created by the local volume driver. Now let's navigate into vol-ubuntu to see if the changes to the log files are reflected. Once we are in the vol-ubuntu directory, let's see its contents, and what we have is a _data directory. Once we navigate into that and list its contents, what we see is a long list of log files, which means that the mounting of the volume to the container was successful. So this is how we mount a volume to a container and create a backup of its data on the host using the local volume driver.
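A condensed sketch of this demo; the /var/log mount target follows the narration above:

    docker container run -it --name cont-ubuntu -v vol-ubuntu:/var/log ubuntu bash
    # inside the container:
    apt-get update    # generates fresh log entries under /var/log
    ls /var/log
    exit
    # back on the host, as root:
    sudo ls /var/lib/docker/volumes/vol-ubuntu/_data   # same log files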
41. Demo: Working with Bind Mounts: In this demo, we will test bind mounts. Let's create a directory called bind_data in our Docker host's home directory. Now run a container called bind-ubuntu from the ubuntu:latest image and bind its /tmp directory to the newly created bind_data directory using a bind mount. As usual, let's see if the container is running. Yes, it is. Now it is time to inspect the bind mount information, and we have the mount type, which is bind, along with the source and destination paths, which are just as we provided them. Furthermore, we have the read-write permission set to true, which means changes to the files will reflect on both sides. It is probably the least secure way to mount container data to persistent storage, but for now it works. Lastly, we have bind propagation. It is an interesting aspect. Bind propagation is a policy which determines the bilateral access to the directories created within the mount point's source and destination. In other words, it decides whether subdirectories of the mount will be associated with the mount or not. rprivate is the default value, which means that a sub-mount created within the source or destination will not be reflected on the other side. Let's exec into the bind-ubuntu container with a bash command and create a file called foo.txt. We are creating it within the container's /tmp directory, which is the mount destination. Once we are done, let's exit the container. Now let's access the source of the mount point, which is within the home directory of the Docker host. We can see the bind_data directory reflected here. Let's open it, and there we go: foo.txt is present. Now let's try making changes the other way around. We have seen the destination's update reflecting on the source; now let's update the source to see if the destination reflects the changes as well. Mind well that our container is shut at the moment, and we are creating a new file called hello.txt. Let's go back to the terminal and exec into the container again so that we can navigate to its /tmp directory. Hit ls to see the list of files, and there we go: we had stopped the container with one file, but now it has two of them. Our bind mount is working successfully.

42. Demo: Hosting Containerized 2048 game!: We are going to host the containerized official open-source 2048 on our Docker host, live. To do so, the first step is to get the files. We will clone this git repo into our home directory; if you don't have git installed, please go through the previous article. Once the repo is cloned, let's navigate into it and get the list of files. We have a bunch of files, including index.html, which we will be using soon enough. Now run a container called 2048 from the nginx:latest image and use a bind mount to mount our cloned 2048 directory to the html directory of the nginx image. In other words, we are replacing the index.html file and providing the necessary support for the new index.html. As always, we are exposing the container's port 80 on host port 8080. The container is up and running. Now let's open our browser and navigate to localhost, port 8080. There we go: we have our favorite 2048 in our web browser, and containerized at that. Let's see if it works properly. It does, and it was an awesome experience. Go ahead, try it yourself.
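Both demos reduce to a couple of run commands; the 2048 repository URL isn't spelled out in the lecture, so a placeholder stands in for it here:

    # Demo 41: bind mount
    mkdir ~/bind_data
    docker container run -dit --name bind-ubuntu -v ~/bind_data:/tmp ubuntu:latest
    docker container inspect bind-ubuntu   # "Mounts": Type bind, RW true, rprivate
    # Demo 42: containerized 2048
    git clone <2048-repo-url> ~/2048       # substitute the repo shown on screen
    docker container run -d --name 2048 -p 8080:80 \
      -v ~/2048:/usr/share/nginx/html nginx:latest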
43. Introduction to Docker Compose: Till now we have been studying the objects of Docker Engine, but as we mentioned earlier, the Docker ecosystem has more than one major component. Another such one is Docker Compose. Compose is a tool for defining and running complex applications with Docker. When working simply with Docker Engine, we need multiple Dockerfiles for the multiple parts or containers of a full-fledged application; for example, we may have to create separate files for the front end, back end and other containerized blocks, which can be daunting to manage. With Compose, you can define a multi-container application in a single file, then spin up your application with a single command which does everything that needs to be done to get the app running. You can define and integrate multiple Docker objects such as containers, networks, services, etc., in a single file as blocks, and Compose will translate them to Docker Engine for you. In the next lectures, we will get hands-on experience with Docker Compose.

44. Demo: Installing Docker Compose on Linux: As the title of this demo suggests, we are going to install Docker Compose. We will do so by fetching the binary of Docker Compose from its official GitHub release, and we will store this binary as docker-compose under /usr/local/bin on our host machine. We will do it with the curl utility. Once the download is complete, we will make the binary executable, and the installation process will be complete. Let's see if the installation was successful by running the docker-compose version command. Well, the installation is successful, and Docker Compose version 1.22.0 is currently installed on our host. This is the latest version at the time this course is being created.
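The install boils down to two commands plus a check; the URL follows the documented release pattern for version 1.22.0:

    sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" \
      -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    docker-compose version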
45. Demo: Structure of Docker Compose file: To work with Compose files, just like in the second module, we are again shifting between commands and files together. Now, just to make sure, let's see what our present working directory is; as you can see, it is cc_docker. To remind you again, cc_docker has eight directories in total, each one standing for a separate module. Currently we are working on the sixth module, so let's navigate there. As you can see, there is a file called docker-compose.yml; let's open this file. As we studied in the theory portion, the Compose file is a YAML file which defines multiple objects like services, networks and volumes. It is important to know that the default path for the Compose file is always the present directory. Now, before we dig deeper into the Compose file itself, it is important to know a few bits and pieces about YAML files in general. YAML stands for "YAML Ain't Markup Language", and it has three basic data types: first, scalars, like strings and numbers; second, sequences, which are arrays or lists; and third, mappings, which are hashes or dictionaries represented using key-value pairs. The nesting of objects in a YAML file is determined by indentation. You can find more information about YAML files in the link below.

Now that we have that covered, let's dig deeper into this Compose file. First of all, let's mention the version of Docker Compose that we are using, which is 3.3 in this case. Next, we have services. services is the parent object for the containers that we are going to create; if we are going to create a multi-container application, we are supposed to use services. Let's create our first service, called db; it stands for database. Now, just like we have been creating containers using commands, here, too, we need to mention a few parameters in terms of key-value pairs. First of all, let's mention the image. We are using MySQL version 5.7, so we write image as the key and mysql:5.7 as the value. Then we have container_name, which is again a key, and mysql_database is the value here. volumes acts as the parent key, and the volume name and mount path act as the children. Notice the indentation between all the fields: the parent field is services, then we have further indentation for the service that we create, db, or database, in this case. Let's go ahead and mention the restart policy; we will make the restart policy always, so that we don't have to worry about the container being shut down. environment stands for environment variables; just like in a Dockerfile, here also you can provide environment variables as key-value pairs by indenting them a bit further. We are providing MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER and MYSQL_PASSWORD for the WordPress instance that will be created in the next service. Here MYSQL_DATABASE, which is going to be called wordpress, is used as the name of the database on the MySQL instance, and MYSQL_ROOT_PASSWORD sets its root password. The latter two keys, MYSQL_USER and MYSQL_PASSWORD, are used to grant WordPress access to the MySQL instance.

Next, let's create another service in the same file, called wordpress. Now look at the first field: it says depends_on. It creates an interdependency relationship between containers, which means that the db container needs to be created first, and wordpress follows it later on. It is useful for creating stateful applications like this one; here the wordpress service depends on the db service. Once that is clear, let us mention all the necessary fields for the wordpress container. We are going to use the wordpress image and name the container wd_frontend. We are going to use the volume called wordpress_files, mounting the /var/www/html directory to this volume. We are also mapping port 8000 to 80, and we mention the restart policy as always, just like in the previous service. Here, too, we are using environment variables: the database host is db:3306, the WordPress DB user is wordpress, and the password is abc@123. You can use any username or password you like, but for learning purposes this will do. Finally, we mention objects which are outside the boundaries of services, or which are not children of the services field. Such objects are volumes and networks. We haven't created any user-defined network here, nor used one, so we don't need to declare them. But we definitely have used user-defined volumes, so we need to declare them here using the volumes key, and the values will be wordpress_files and db_data. A quick revision of what we have done with this Compose file: we used two key fields, services and volumes, and declared the volumes which are used in the services. In the services field, we created two services, database and wordpress, and we mentioned the container fields for both services, which include the container name, container image, environment variables and volume mount information. In the next demo, we will execute this Compose file and see how the application works.
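Putting the walkthrough together, the file should look roughly like this. This is a reconstruction from the narration: the exact version number and root password are garbled in the recording, so '3.3' and 'word@press' are assumptions.

    cat > docker-compose.yml <<'EOF'
    version: '3.3'
    services:
      db:
        image: mysql:5.7
        container_name: mysql_database
        volumes:
          - db_data:/var/lib/mysql
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: word@press
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: abc@123
      wordpress:
        depends_on:
          - db
        image: wordpress
        container_name: wd_frontend
        volumes:
          - wordpress_files:/var/www/html
        ports:
          - "8000:80"
        restart: always
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: abc@123
    volumes:
      wordpress_files:
      db_data:
    EOF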
46. Demo: Wordpress on Compose: In this demo, we will execute the Compose file which we created in the previous demo. Now, if you are in the present working directory, and if your directory contains only one docker-compose.yml file, all you need to write is docker-compose up, followed by the -d flag. Of course, the -d flag is optional; the only command we are providing is docker-compose up. As you can see, it is creating objects one by one, and if you notice, even though we didn't provide any network information in our previous demo, first of all it creates a default network with the default network driver; it will be a bridge network. Then it creates the volumes wordpress_files and db_data with the default driver, so their scopes will be local. Then it creates the services. If you notice, the db service is created before the wordpress service, because wordpress depends on db. Now let's get a list of running containers to see if our services created both containers. As you can see, mysql_database and wd_frontend are both up and running for more than thirty seconds. If you look further, in the case of the wd_frontend container, even the port-mapping information is available, where port 8000 is mapped to port 80. You may wonder how this happened, since we did not provide any information regarding any network. If you remember, when we issued the docker-compose up command, Docker Compose first of all created a default network; this network was created to make sure all of the network requirements of the services would be fulfilled by it, as a bridge network. This means that both containers are connected to the same default bridge network, so they can talk to the outside world and to each other.

Now let's go to our web browser and see what is being hosted on our localhost. As we can see, localhost is hosting the default page of a WordPress installation, which means that the WordPress installation and hosting were successful. Now let's play a bit more with this WordPress and see what we can do with it. Well, now we have added a lot of content to a dummy post, and it says that the post has been published. If we click on the View Post button, we should be able to see how our post looks, so let's do that. The post looks neat, tidy and well structured. It means that the WordPress installation was not only successful, it is working smoothly.

Now let's work with MySQL. This may not look as exciting and riveting as a WordPress webpage, but we are back to our good old terminal. Let's again get a list of running containers. We have already worked with wd_frontend, so now it's time to work with the mysql_database container. Let's run the docker exec command with the -it flags and run a bash command on it. We are in the container with root privileges, so let's list the directories and navigate to the /var/lib/mysql directory to see its contents. As you can see, the information about the WordPress user has already been added to this container, which means that the linking of these containers was successful and the information was exchanged successfully as well. Let's run another instance of a MySQL container, but this time as a client. As you can see, we are linking this container with our previous mysql_database container, and we are also providing information about the communication port and the root user credentials. The client side of MySQL is now active, and we can see what is being hosted on the MySQL database server when we run the query SHOW DATABASES. Apart from the system-provided or default databases, like information_schema, mysql, performance_schema and sys itself, we also have a fifth database, wordpress, which has been derived from the WordPress front-end service. Let's go further into this with the query USE wordpress, so that we can dig deeper into that database. Now our database has changed. Let's take a look at the tables inside the wordpress database: type SHOW TABLES, a semicolon, and hit Enter. And here we are: all the required tables for a successful WordPress instance. Although we didn't need to doubt whether this was working properly, because WordPress was already established and working smoothly, this gives us an even stronger belief in and understanding of how linked services work with Docker Compose.
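The execution and inspection steps, condensed; the throwaway MySQL client at the end follows the mysql image's documented --link pattern, since the exact command isn't shown on screen:

    docker-compose up -d
    docker ps                            # mysql_database and wd_frontend
    docker exec -it mysql_database bash  # then: ls /var/lib/mysql
    # throwaway client linked to the database container:
    docker run -it --rm --link mysql_database:mysql mysql:5.7 \
      mysql -hmysql -P3306 -uroot -p
    # at the mysql> prompt: SHOW DATABASES; USE wordpress; SHOW TABLES;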
47. Demo: Introduction to Docker Compose CLI: Now that we are done with the Compose YAML file and its execution, let's switch to the Docker Compose command line. Our first command in the series of docker-compose commands is docker-compose config. This command is used to view the Compose YAML file on the terminal screen. As you can see, it provides all the information about both the services and volumes which we mentioned in the YAML file. We can also extract specific information from the YAML file, like the services. The next command is docker-compose images. This command lists out all the images used to create containers for the services in the Compose file; as you can see, both images used in the services are available here. Our next command is docker-compose logs. As you might have guessed, this command fetches the log output from the services. Since we have a lot of logs, let's narrow them down a bit using docker-compose logs --tail=10. The --tail flag allows the last ten log lines of both services to be printed on stdout, the terminal. As you can see, we have the last ten logs of both services, or containers: mysql and wordpress. Just like docker ps, we have docker-compose ps, where we can see both containers running, along with other information such as the state, which is Up, port-mapping information, and entrypoint commands. Our next command is docker-compose top, which displays all the running processes inside all of the containers. In both containers, mysql_database and wd_frontend, these are the processes which are running; each process has an individual process ID and a parent process ID. The structure of the processes and the parent-child relationships depend on the base images used in the creation of these images. And finally, we have docker-compose down. You can consider it a clean-up command, or the contrary command to docker-compose up: when we hit Enter, it stops both services, removes the containers, and removes additional resources like networks. In the next module, we will have a look at probably the most exhaustive feature of Docker, which is Docker Swarm.
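Before moving on to Swarm, here is the whole Compose CLI tour in one place:

    docker-compose config              # render the validated YAML
    docker-compose config --services   # just the service names
    docker-compose images
    docker-compose logs --tail=10
    docker-compose ps
    docker-compose top
    docker-compose down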
48. Introduction to Container Orchestration and Docker Swarm: Till now we have been revolving around containers on a single host. A single host would generally mean one machine or one VM. Hosts definitely have limited resources, and that is totally fine as long as your purpose is to serve something not so resource-heavy, like a static landing page or a blog; one person would be more than sufficient to manage it as well. But that's not the only application for which we use containers. There are giants like Google and PayPal who have millions of users a day. In their case, the number of containers would be staggeringly high, and they all may have to communicate in any topology at a given point in time. In fact, even if we don't focus on such large applications, a dynamic website keeping track of visitors and collecting data from their actions would also need far more containers than a usual blog. Even if we did somehow manage to deploy all of these containers on the same host, we might run out of resources at any time, or the performance might be affected severely because of it. Plus, if the host goes down, our site is doomed for sure. What should we do, then? Well, a simple solution would be to deploy them on more than one host and get them managed by more than one DevOps engineer. Sounds fancy, but they would all be eternally scattered, and to make sure they remain in sync, we might have to run another set of microservices in the back end. Plus, hiring more people for the same task would be less economical, and none of the individuals would get the opportunities and growth they deserve. So what to do, then? Well, it seems we need someone who can make all of these hosts collaborate and allow us to manage them simultaneously from a single instance, kind of like a cluster. In fact, exactly like a cluster of Docker hosts. This way our containers will be in sync, the performance won't be reduced due to resource scarcity, they can be managed from a single endpoint, we can even think of replicas and backups of our containers for the cases where one or more of our hosts may go down, and life will be happy. But who is that someone?

A container orchestrator is a tool used to provision, schedule and manage containers at large scale over one or more clusters of multiple hosts. As we have mentioned before, while the Docker ecosystem has many offerings, some of them less significant than others, it has three major tools which should be learned by every container enthusiast. We have already seen Docker Engine and Docker Compose. The next stop on our journey of learning containers is the orchestrator developed and provided by Docker, called Docker Swarm. The idea and implementation are pretty simple: we take a set of Docker hosts and connect them using swarm mode. One of these hosts manually initializes the cluster and becomes the manager of the cluster. The manager provides a key which can be used by other nodes to join the cluster; once they join the manager, they become worker nodes. The analogy is pretty self-explanatory here: we as users communicate with the manager, and the manager communicates with the workers, quite like the management hierarchy of an industry, actually. Just like with Docker Compose, we state our demands in the form of a service, which the manager translates into smaller tasks and hands to the workers to be handled. To do all of this, the manager is equipped with a set of useful tools: the HTTP API endpoint, which makes it capable of serving our service requests and creating objects out of those services; the orchestrator, which passes tasks translated from services to the workers; the allocator, which allocates internal cluster IPs to the workers and the manager itself; the dispatcher, which decides which node will serve which task and gives this information to the orchestrator; and finally the scheduler. The tasks provided by the orchestrator are idle; they don't run as soon as they get allocated. The scheduler signals workers to run the tasks they have received, and so it also decides which task runs first and which doesn't. As for the workers, they are pretty simple compared to the manager. They have two key components in total: the worker, which connects to the dispatcher of the manager to check if there is any task to receive from the orchestrator, and the executor, which literally does what its name suggests.
It executes the tasks, which means it creates containers, volumes and networks, and runs them. You may have noticed that Docker hasn't been the most creative firm as far as naming the tools is concerned, since Swarm is an orchestrator which has a component called orchestrator running on its manager, and a worker has a component called worker. We can't change these names, but we can make sure we don't get confused by them. So in this course, whenever we refer to the orchestrator and workers, we will mean the orchestrating tool in general and the worker nodes; if we want to address the internal components instead, we will call them out specifically. Just like every other topic, we also have a bunch of hands-on demos for Swarm. But to understand how deploying containers on a cluster is different from deploying them on a single host, take this example: let's say we have a service which needs three replicas of nginx containers hosting the same content. Once we provide the service to the manager, it divides this into three smaller tasks and allocates one task to each worker, so all of the workers end up hosting one instance of an nginx web server container. With that said, by now you might be wondering what would happen if Swarm faces failure; in other words, what if one or more nodes go down? You know the answer: let's get to the next lecture.

49. Can Swarm handle failure?: Can Swarm handle failure? The one-word answer is yes, it can, but the more interesting part is how. Let's take the previous example of the service running three replicas of nginx, each hosted on one worker or the manager. All our workers are healthy and running. What if one of the workers goes down? Let's say, in this case, worker 3 went down. If that happens, task 3 will be rescheduled on one of the other workers. Once worker 3 is back to its running state, the task might get moved back to it; or, if it's not causing any overload on worker 2, it may just stay there, and worker 3 might be ready to host other tasks when they arrive in the future. In a nutshell, if one of the nodes goes down, the other nodes can handle its load. If the manager goes down, though, the workers perform a mutual election where one of the workers gets promoted, and the cluster starts working again. The next question would be: how many nodes can go down without affecting Swarm? Well, to make sure the swarm cluster functions properly, more than half of the nodes should be working. The minimum number of required working nodes for a happy swarm cluster is equal to the total number of nodes divided by two, plus one, which again means more than half. For example, in a five-node cluster, at least 5/2 + 1 = 3 nodes (rounding down) must stay healthy, so the cluster can tolerate losing two of them.
50. Demo: VirtualBox installation: Let's start setting up a Docker swarm cluster by installing a hypervisor on our host machine. If you are wondering what a hypervisor is, it is a piece of software which allows us to create virtual machines. First of all, here is the sources.list file, and as you can see, there are a lot of links already available, most of them for updates regarding Ubuntu or other software. We have added the line for VirtualBox; let's save the file. Now let's get the GPG key for our VirtualBox. Next, run the sudo apt-get update command, and as you can see, just beneath Sublime Text, VirtualBox has also been updated. Now that the application is added to the list of the apt package manager, let's install it: type sudo apt-get install, followed by the version of VirtualBox. Here we are going to install VirtualBox 5.2. Once the process is complete, let's see if we can find VirtualBox in our list of software, and here we are: Oracle VirtualBox has been installed successfully. It is up and running.

51. Demo: Docker Machine Installation: Now let's install a tool called Docker Machine. It will set up multiple hosts for us, which will act as individual nodes of a swarm cluster. We will install Docker Machine from its official GitHub repo: first we will curl it, and then we will install it under the /usr/local/bin directory. Once the installation is complete, let's verify it by typing docker-machine version. Docker Machine has been installed successfully, with version 0.14.

52. Demo: Setting up the Swarm Cluster: Let's create our first node using the docker-machine create command. We are using virtualbox as the driver, and we are naming our node manager. While the node is being created, you can see that Docker Machine is using a custom OS called boot2docker, and it is using its ISO image to install it on a virtual machine. For your information, boot2docker is a minimal Linux OS customized for containers to run smoothly while being lightweight at the same time. Let's see if the node has been created: use the docker-machine ls command, and manager has been created. It is running Docker version 18.06, and it also has its dedicated IP, which is 192.168.99.100. Similarly, we can create a couple more nodes, named worker1 and worker2. Once we are done with their creation, we can run docker-machine ls again to see if both are running perfectly, and here they are. Let's stop the manager node using the docker-machine stop manager command; when we list our nodes, manager exists, but it is stopped. We can start it again using the docker-machine start manager command. If we want to find specific information about a node, we can use docker-machine ip manager, which provides the IP of the manager node; similarly, we can get the IPs of the worker1 and worker2 nodes. Just like every other object in the Docker ecosystem, we can use the inspect command with Docker Machine nodes as well. Use the docker-machine inspect command followed by the name of the node, which is manager here. As you can see, the inspect command provides a lot of information about the manager node, including its machine name, IP address, SSH user and port, SSH key path, and some other useful details. Finally, let's SSH into the manager node using the docker-machine ssh command followed by the name of the node, which again is manager. We have navigated to the shell of the manager node.
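Condensed, the docker-machine workflow from these demos looks like this:

    docker-machine create --driver virtualbox manager
    docker-machine create --driver virtualbox worker1
    docker-machine create --driver virtualbox worker2
    docker-machine ls
    docker-machine stop manager && docker-machine start manager
    docker-machine ip manager
    docker-machine inspect manager
    docker-machine ssh manager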
53. Demo: Initialising Swarm Cluster: In this demo, we have three terminals, one for each node. First of all, let's get a list of nodes with the docker-machine ls command. As you can see, we have manager, worker1 and worker2. Now let's SSH into manager, just like we did in the last demo. Since we want to make this manager node the actual manager, which is quite the role its name suggests, let's initialize our swarm using the docker swarm init command and advertise its IP address to the other nodes. Once we hit Enter, swarm mode gets initialized, and the current node, the manager node, becomes the manager. Now, if we want to add workers to this manager, we can run the docker swarm join command from the respective worker nodes, along with the token generated by this manager node. This token is a unique ID which can be used by other nodes to join this manager as part of its cluster. In case we have lost this command or token, we can get it back by typing docker swarm join-token worker; this command will only work if the manager has been initialized with swarm mode. We will use this docker swarm join command, along with its token, on both worker1 and worker2 to make sure that both of them join this cluster as workers, while the current node remains the manager. As you can see, the command has worked successfully from the worker1 node, and it has joined the swarm cluster as a worker. Similarly, the command was successful on worker2 as well, and we got a similar confirmation.

54. Demo: Working with Swarm nodes | List and Inspect: Now that both nodes have joined the cluster as workers, let's verify it using the docker node ls command; take note that this is part of the Docker swarm command line. Once we hit Enter, we get all three nodes along with their hostnames. All of them have their status as Ready and availability as Active, and if you notice, manager also has the manager status of Leader. This is applicable when we have a cluster with more than one manager, in which case one of the managers acts as the leader. There is no confusion here, since we only have one manager and two worker nodes, so our manager is the leader by default. Now we can inspect our manager and worker nodes from the manager's shell itself. Let's type docker node inspect followed by self, along with the --pretty flag; we are mentioning self because the manager is inspecting itself. As you can see, what we get is the node ID, its hostname, joining timestamp, status, and some other information like the platform, resources, engine version, which is the Docker Engine version, here 18.06.1 Community Edition, and some security certificates. We can run the command for worker1 and worker2 as well, and we get the respective information about both of them. As you can see, all three nodes have different IPs, but the rest of the details are pretty much the same. Of course, their roles are different, which will be explored in further demos.
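Condensed, with the manager IP from above; the join token is the long string printed by init, abbreviated here:

    # on the manager:
    docker swarm init --advertise-addr 192.168.99.100
    docker swarm join-token worker            # reprint the join command if lost
    # on each worker, pasting the printed token:
    docker swarm join --token <SWMTKN-...> 192.168.99.100:2377
    # back on the manager:
    docker node ls
    docker node inspect self --pretty
    docker node inspect worker1 --pretty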
55. Demo: Creating a Service on Swarm: Let's use Docker Swarm for the reason it was designed: to have multiple replicas of a container, or to run services made of multiple containers. We will create a service called web-server from the latest nginx image and give it three replicas. We have also mentioned the port-mapping information with the -p flag. Once we hit Enter, you can see that our service has been divided into three tasks, and each task has been carried out individually. Once the tasks are complete, the service is verified. And once the service creation is complete, we can list it using the docker service ls command. First we have the service ID; then we have the name of the service, which is the same as we provided with the command, web-server. Then we have the mode of the service: it is replicated, which means the same image has been replicated more than once, and multiple instances, or multiple containers, are created from the same image; we have three replicas in particular. The image which has been used is the latest nginx, and we also have the port-mapping information for TCP. If we want to have a look at the containers running inside the service, the command is pretty simple: just write docker service ps, followed by the name of the service. Here we have three containers, and the naming convention is straightforward: their names are web-server.1, web-server.2 and web-server.3. They have been up and running for about the same time, and all of them share the common nginx:latest image. Just like we did with Docker Compose, let's inspect our service. As we go further, along with the generic information, we also get some additional information, like the mode of the service, which is replicated, and details regarding all of the hosts or machines where each container of the service is provisioned. Unlike docker service ps, if we run the regular docker ps -a command on any of the nodes, we will see that each node is running only one container. That is because the service has been deployed across the cluster, which means the load was divided evenly: since we had three replicas, each of these containers was scheduled on an individual node. web-server.1 was scheduled on the manager node, web-server.2 on the worker1 node, and web-server.3 on the worker2 node. Just like a regular container, we can inspect web-server.1 as well. Now, this means that all three nodes are running at least one instance of the nginx web server, so all of them should be serving the nginx default webpage on their respective IP addresses on port 8080. Let's go to the browser and check this fact with our manager node: see that we are navigating to the IP of the manager, which is 192.168.99.100, and mentioning port 8080. It seems like a success. Now let's do the same with worker1 and worker2. This means that the service is running successfully, and Docker Swarm is hosting the nginx web server on all three nodes.
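The service commands from this demo, condensed; the 8080:80 mapping matches the browser checks above:

    docker service create --name web-server -p 8080:80 --replicas 3 nginx:latest
    docker service ls
    docker service ps web-server
    docker service inspect --pretty web-server
    docker ps -a     # run on each node: one task per node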
56. Demo: Making a node leave your Swarm: Now that we have deployed our nginx service across the swarm cluster successfully, let's think of some more realistic use cases. For example, what if I want to take down one of my nodes for maintenance, or what if one of my nodes actually goes down? Here we will test it out. The safe way to make a node leave the cluster is to drain it. We can do it with the docker node update --availability command, followed by the action and the name of the node; here the command will be docker node update --availability drain worker2. What we get is the name of the node as confirmation that it has been drained. Still, we can verify it by typing docker node ls, and we can see that the status of the worker2 node is still Ready, but its availability is Drain, which means that the node is up, but no containers can be scheduled on it. When we drain a node, the containers of the tasks scheduled on it get transferred, or rescheduled, to one of the other nodes. Let's verify it using docker service ps web-server, and as you can see, the web-server.3 container has been shifted from worker2 to manager, and it has been running for 42 seconds, which is about the time when worker2 was drained. On the other hand, if we use docker ps on worker2, which has been drained, we will see that the container has exited on that node and is now in a dead state. Now let's try to remove this node from the cluster altogether. When we try to do that, we get an error from the Docker daemon. The reason is that the node might be in the drain condition, but it is still up, and Docker is still serving its API. So we need to make sure that it leaves the swarm cluster first; then it can be removed from the manager's list. Let's use the docker swarm leave command from the worker2 node. Once we do so, we get a pretty clear note: the node has left the swarm. Now if we run the node removal command again on the manager node, we will see that the worker2 node is removed successfully. We can verify it by listing the nodes again, and what we find is our cluster made of just two nodes: manager and worker1.

57. Demo: Scaling and updating with Swarm: In this demo, we will perform a few more orchestration-related tasks. If you remember clearly, our service web-server had three replicas of the latest nginx image. Let's scale our service and increase its number of replicas to six. We can do it with the docker service scale command, followed by the name of the service and the number of replicas. Once Swarm has verified the scaling, we can verify it as well, using docker service ps followed by the service name, and as you can see, instead of three, we now have six containers running the nginx:latest image. Three of them are scheduled on manager and three of them on worker1; all six are in the running state, and three of them are quite new, as you might have expected. If we run docker ps -a on both manager and worker1, we will see three containers running on each of them; worker1 has two new containers, and manager has one new container. Furthermore, we can even roll out updates to all six of these containers. As you know, all of these containers are running the nginx:latest image; we can change it to nginx:alpine. If you are wondering what the difference is: the latest version of nginx is built on top of a Debian base image, whereas the alpine version is built on top of the minimal Alpine Linux image. Let's use the docker service update command, followed by the kind of field we want to update; we want to update the image of the service. Once we hit Enter, all of the tasks get updated, one at a time. Once the update process is complete, we can verify it with the docker service inspect command, and let's make sure that the result of the inspect command is pretty-printed. As you can see, the service mode is still replicated, and the number of replicas is six. If we go to the container specifications, instead of showing nginx:latest, it shows nginx:alpine, which means all of the containers have been switched from the nginx:latest to the alpine image. Finally, we can remove our service using the docker service rm command followed by the name of the service, and as a notification we get the name of the service. Let's type docker ps -a, and as you can see, every container is being taken down one by one. If we do the same on the worker1 node, we will see that all the containers are taken down and also removed. Let's wait for a while and use the same command on manager as well. Well, now even the manager is empty. Finally, let's clean up our cluster by making sure that the remaining worker node also leaves the cluster, just like we did with worker2; we will make worker1 leave the cluster voluntarily using the docker swarm leave command. We are back to having one Docker host, which is manager.
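All of the maintenance moves from these two demos in one place:

    docker node update --availability drain worker2
    docker service ps web-server     # web-server.3 rescheduled elsewhere
    docker swarm leave               # run on worker2 itself
    docker node rm worker2           # back on the manager
    docker service scale web-server=6
    docker service update --image nginx:alpine web-server
    docker service inspect --pretty web-server
    docker service rm web-server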
58. What about the more popular one?: Docker Swarm is pretty useful, but whenever we talk about container orchestration, one name dominates the conversation: Kubernetes. You might wonder, aren't we already done with orchestration? Well, not yet. Docker Swarm and Kubernetes coexist in the market, and even Docker itself has acknowledged it, since it brought support for Kubernetes to its Enterprise Edition. Plus, as far as my knowledge goes, there is no such thing as a managed Docker Swarm on any of the popular public cloud platforms, whereas managed Kubernetes is one of the salient features of Google Cloud, and Azure and AWS are catching up pretty fast, too. These are more than enough reasons to learn Kubernetes alongside Swarm, but we should know the advantages and challenges of both. Let's start with their nature. Swarm is a part of Docker's ecosystem, so all of its features act as an extension of Docker's own capabilities, whereas Kubernetes is an entirely different product, managed by the CNCF, which stands for Cloud Native Computing Foundation. Since Swarm belongs to the Docker ecosystem, you wouldn't face much trouble adapting to its terminologies or concepts, because most of them are in line with what you can already do with Docker; so Swarm is easier to set up and adopt. Nothing is ever too difficult once you get the hang of it, but setting up and adapting to Kubernetes introduces many more new concepts compared to Swarm, so you can definitely call it relatively difficult. Plus, Kubernetes introduces a whole new command line, whereas Swarm's command line is pretty similar to the Docker CLI itself. As far as the utilities go, Docker Swarm dives less deep into the field of orchestration, whereas Kubernetes provides far more exhaustive orchestration functionality. Monitoring Swarm can be tricky, since it either involves third-party tools or paid services from Docker Enterprise, whereas Kubernetes provides native support for logging and monitoring. Moreover, Docker Swarm doesn't only have fewer functionalities compared to Kubernetes; it also becomes difficult to manage after growing beyond fifteen nodes or so, because you may not have sufficient control over scheduling certain containers on certain nodes, which can be mind-boggling to manage. In the case of Kubernetes, we have a lot more freedom and fault tolerance: the finer control allows us to group nodes as we want and keep our containers from being scheduled on certain nodes. In fact, Kubernetes has shown promising performance even in the case of more than a thousand nodes. Due to all of this, even though Docker Swarm has good community support and feature updates, Kubernetes has huge support and has turned into a complete DevOps buzzword, by all positive means. All in all, it means the bigger your application, the more likely you are to use Kubernetes rather than Swarm. Of course, not everyone targets millions of users and ever-scaling clusters; for them, Swarm might be enough. But for you as a learner, the journey must not end before learning the exciting aspects of Kubernetes.

59. Kubernetes: An origin Story: Before we learn Kubernetes, let's take a look at its popular origin story. Long ago, there was a young search engine called Google. It was initially developed by Mr. Page and Mr. Brin during their PhD studies, in a not-so-fancy work area. The infrastructure was minimal and the users were limited, but the idea was game-changing, so soon it turned into a fancy tech company with greater technical infrastructure and an increasing number of users. But that, too, was just the beginning. Google turned out to be one of the biggest tech giants, with billions, say it again, billions of users across the globe.
Google became a noun, Googling became a hobby, and Google stock became one of the prime investments. All of this involved endless effort, not only by passionate, ingenious engineers but also by what turned out to be a forest of servers. Back then, there was no Docker, so Google engineers couldn't just go to Udemy and take up a course. They had to delve deep into the roots of computing history. Then they came to the realization that Linux already had a solution called containers, which could be set up using namespaces and cgroups; containers are an abstraction at the application layer, which packages code and dependencies together. So they started using them. But they also needed someone who could orchestrate their containers on a large scale for them, and that someone was Kubernetes. This is how Kubernetes came into existence, and the rest is history.

60. Kubernetes: Architecture: From a bird's-eye view, the architecture of a Kubernetes cluster looks pretty simple. We have two types of instances, master and nodes; both belong to Kubernetes but serve different purposes, just like the manager and worker of Swarm. Let's have a deeper look inside the master. The master acts as a controlling node, and while working with Kubernetes, we communicate with the master for the most part. It runs a set of applications, which include kube-apiserver, which serves all of the REST requests provided by the user and obtains responses from other nodes. You can consider it a central serving unit, a front end of the cluster. Then we have kube-controller-manager, which serves as a parent or managing process for a number of controller processes. These controller processes manage the controller objects, like the ReplicaSet controller or the Deployment controller, which we will study soon enough. Next, we have kube-scheduler, which schedules our containers under a supervisory sandbox environment called a pod. kube-scheduler also decides which of the nodes will be serving which set of containers. The API requests from kube-controller-manager and kube-scheduler are served by kube-apiserver. Finally, we have etcd, which is a distributed key-value data store. etcd stores the data obtained from all other components as key-value pairs. This may include our cluster configuration input, desired state, actual cluster state, event logs, object details: anything and everything. etcd only communicates with kube-apiserver, for security reasons. So, in a nutshell: kube-controller-manager controls objects, kube-scheduler schedules containers, and the APIs are served by kube-apiserver, which stores all of their data as key-value pairs in etcd and fetches the data from the same place as well. This simple yet robust master architecture is one of the reasons for the thriving success of Kubernetes. Now let's talk about nodes. They're pretty simple compared to the master; they run only two components, to be precise. One is Mr. Talk-Talk and one is Mr. Do-Do. kubelet is Mr. Do-Do, as it performs the actions suggested by master components like kube-scheduler, kube-apiserver, or the controller manager. The master and nodes are virtually or physically different machines, which means kubelet acts as a supervisory process on the node to allocate resources and create containers or pod processes. kube-proxy is Mr. Talk-Talk: it manages the node's communication with other nodes, the master, and the world outside the cluster. In big clusters, it is the master's kube-apiserver which talks to the kube-proxy of each node. So kube-apiserver gets data from etcd.
It gets requests from the controller manager and the scheduler and passes them to the node, where kube-proxy forwards them to kubelet, which in return provides the responses to these requests; those are again passed to the master via the proxy and stored in etcd. If the cluster is hosted on a Kubernetes-supported cloud platform, kube-controller-manager talks to kube-proxy via the cloud's VPC or other such relevant infrastructure, since it has a component called cloud-controller-manager. Now let's focus on how we as users interact with Kubernetes. Users talk to the master via commands. Let's say we command the master to create an object. The master passes this instruction on as an API request. Once kubelet performs this request, it returns the state of the node as a response, which the master stores in its etcd and passes to us. Objects can be of multiple types, such as workloads, config objects, connectivity objects, or storage objects, whereas states are of two types: desired state and current state. Kubernetes always keeps checking whether the desired state and current state match. If they don't match, Kubernetes tries its best to make sure that they do, and if they do match, it keeps on checking again and again to make sure this harmony stays intact. This endless loop is called the reconciliation loop, and it makes sure that our cluster is in the most desired state as much as possible. All in all, this is how the Kubernetes infrastructure functions. Next, we will go through the objects of Kubernetes and learn how to use them, and while we do so, you will get a broader sense and a deeper idea of how this infrastructure is used while creating and using objects.

61. Demo: Bootstrapping Kubernetes Cluster on Google Cloud Platform: Open your favorite web browser and go to this link: console.cloud.google.com. This is the link for the Google Cloud Platform dashboard, or GCP dashboard. But before we can go there, we need to sign in to our Google account: enter your ID and password and hit Next. We get a pop-up which asks us to confirm the terms and services of GCP and also provide Google our residential details. I'm putting India; you can put your own country. Then we have a choice of whether we want any feature updates or survey emails from Google or not. Since I don't want to receive them, I will click No, then click on Agree and Continue, and the prompt will be gone. What you see in front of your screen is the getting-started view of the GCP dashboard. We have a bunch of the most used products, like Compute Engine, which we will be using to create virtual machines; Cloud Storage, which is Google's affordable block storage; and Cloud SQL, which is managed MySQL or PostgreSQL from Google. But before we can use any of these, we need to set up something called billing, which means we need to initiate the free trial of our Google Cloud account, by doing which we will receive $300 of credit, which can be spent within a year. Click on Try for Free. It seems like it is a two-step sign-up process. Google is explicitly stating that we will get $300 of credit for free for starting the trial account, and even once the credits are finished, we won't be charged unless we agree to be billed. Step one is pretty much similar to what we have done previously: in the prompt which appears, we need to agree to the terms and services of Google, tell them whether we want emails set up or not, and click on Agree and Continue.
Step two involves personal information, like the account type, which can be either business or individual; tax information, which can be registered or unregistered individual; billing name; billing address; etc. Once you have filled in all of these details and scroll down, we get to the payment methods. Currently, the available option is monthly automatic payments, and to enable them we need to provide credit or debit card details. If you live in a country like India, where electronic transactions are protected by one-time passwords or 3-D PINs, your debit card will not be accepted and you will have to use a credit card. The bottom line is that whichever card you use should support the auto-payment feature. Once you enter your details, hit the Start My Free Trial button, and the next screen says that Google is creating a project for us, and this may take a few moments. It seems like our free trial is set up now. We have $300 of credit on our Google Cloud Platform billing account, and we can get started using GCP services. So what do we want to try first? Well, I want to try computing and applications, so let's click on that. These services are the provisions from Google that fall under the category of computing services. Now, if you take a look at the left-hand side pane, we have multiple options here. Currently we are on the Getting Started tab, but the other tabs are Billing; Marketplace; APIs and Services; Support, which provides consumer- and business-level support; and IAM and Admin, which is useful for setting up permissions, roles, security, etc. Let's click on Billing. This is the overview page of our billing account, and it says that we have $300, or about 22,183 INR, remaining in our credit. Also, the tenure remaining for the credit is 365 days, or a year, because we have just started using GCP. If you look below it, we have a project linked to this billing account, which is My First Project. In the case of Google Cloud Platform, resources, service provisions, etc. are managed under projects, which means that one GCP account can have multiple projects for multiple purposes. We have a default project, which is called My First Project. GCP has provided it to us, and if you remember, we had previously seen a screen which said "creating your first project". Well, it was this project, and we will be using this project throughout this course. Go to the upper pane of the dashboard and click on the project drop-down menu, which appears right after "Google Cloud Platform". Now let's click on the Home button and select My First Project as our project. Once we have selected the project, the view of our dashboard changes, and instead of having a getting-started view, we have our project-specific view, where the information is divided into multiple cards. The first card is Project Info, which gives information about the project name, project ID, and project number, which are unique across the globe. Then we have the Resources card. Right now, we don't have any resources provisioned, so it says this project has no resources. And we have the APIs card: the more we use GCP APIs, the more fluctuation we will see in the graph on this card. Currently we haven't used many of the APIs, so the graph is pretty much flat, apart from one spike, which might have been generated when we activated our free trial. Then we have the Google Cloud Platform services status card, and it says all services are normal. Next up is the Error Reporting card.
We have no signs of any errors, which makes sense, because we haven't used any resources in the first place. Then we have some miscellaneous cards, like News, Documentation, Getting Started, etc. Let's click on the navigation menu icon of the GCP dashboard, which is also called the hamburger icon, the three horizontal lines in the top-left corner of our dashboard view. Go to the Compute Engine section and click on VM Instances. Since we don't have any VM instances created whatsoever, we get this response. We have three options: first, to take a quick-start tour; second, to import some VMs; or third, to create a VM, or virtual machine, by ourselves. Well, let's create a virtual machine. Now we're guided to the machine-creation page, where Google has filled in default data for a standard virtual machine, but we'll modify it a bit. Let's change our instance name to master. Then we have two location-related choices, which include region and zone: a region indicates the overall place, whereas a zone indicates a particular data center within that region. Let's change our region to asia-south1, which corresponds to Mumbai, and accordingly we're choosing asia-south1-c. You can choose your closest region and zone accordingly. In this course, the choice of region and zone will not matter that much, but if you are making performance-intensive applications where you might require certain types of resources, like GPUs, you may have to choose regions and zones which provide those resources. Having said that, next up we have the machine type. The default value for this is one vCPU, which means one virtual CPU, and 3.75 GB of memory. It means that our virtual machine will have one virtual CPU core assigned to it, along with 3.75 GB of RAM. Let's increase both of these provisions to two vCPUs and 7.5 GB of memory. Next up, we have an optional choice to make: whether we want to deploy a container image to this VM instance or not. Well, we don't want to deploy a container image, because we will be doing all of those things by ourselves. Next is the boot disk, which decides which operating system will be used on this VM. The default is Debian Linux 9, but we will change it to Ubuntu 16.04. We can also choose between an SSD persistent disk and a standard persistent disk, and both of their limits are 65,536 GB. We will stick to the standard persistent disk but increase the size to 20 GB. Let's hit Select. We will keep our service account as the Compute Engine default service account, and we will allow full access to all Cloud APIs. Although we won't be using most of the APIs, having access just avoids potential errors. Finally, we have the firewall settings, where we're going to allow all HTTP and HTTPS traffic. Let's hit the Create button. We're redirected to the VM Instances page, and our master instance has been created. If we click on it, we can see the information that we provided earlier. On top of that, we get another bunch of information, such as the CPU platform, which is Intel Skylake; the creation timestamp; network interface details; firewall details; boot disk preferences; etc. Let's go back to the VM Instances page. If we click on the checkbox right beside the master instance, we see a few buttons light up. They respectively allow us to stop, reset, or delete the VM instance, but we won't be doing any of that, because we want to keep this instance and work on it.
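We did all of this through the console, but if you prefer the command line, an equivalent instance could be created with the gcloud CLI. This is a sketch under assumptions: n1-standard-2 is the machine type matching two vCPUs and 7.5 GB, and the tags assume the default http-server and https-server firewall rules:

    gcloud compute instances create master \
        --zone asia-south1-c \
        --machine-type n1-standard-2 \
        --image-family ubuntu-1604-lts \
        --image-project ubuntu-os-cloud \
        --boot-disk-size 20GB \
        --scopes cloud-platform \
        --tags http-server,https-server

The console form and this command produce the same kind of VM; the console just fills in the defaults for you.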
In fact, we'll create two more such VM instances, and we will name them node-1 and node-2. It is recommended that you create all of these instances in the same region. There we are: our two other instances are created. You might be wondering: the instances are created, which means the VMs are ready, but how do we use them? Well, the simplest option would be to SSH into them. And the moment I said SSH, I know your sight got stuck on the SSH button, but before we click on that, take a look at the internal and external IPs of all of our VMs. Let's connect. We have multiple options, but we will choose the first one, which is "Open in a new browser window". There we are: we are connecting to the master VM instance on GCP Compute Engine. The connection seems successful. Let's clear the screen. Now, we want to bootstrap a Kubernetes cluster on these instances, so let's start by getting root privileges: run the command sudo su. Then run a standard update using apt-get update. Once the update is finished, let's install Docker using apt-get install docker.io, and provide the -y flag for a default yes. Let's check whether Docker was installed properly: run docker version, and it says that we are running Docker 17.03 Community Edition, which is perfectly fine, because that's what we wanted to run. If you're wondering why we are running Docker: well, Kubernetes is just an orchestrator; it still needs a containerization platform, so we are installing Docker. Oops, looks like I closed the window. Well, let's open it again. Now let's install some basic dependencies of Kubernetes, apt-transport-https and curl. The installation seems successful. Next, let's get the GPG, or GNU Privacy Guard, key for Kubernetes and add it to the system. We get the response OK, which means that the key was added successfully. Then we append a line, "deb", which stands for Debian, followed by the repository link, http://apt.kubernetes.io/ kubernetes-xenial main, at the end of our sources.list file. We're doing this so that our apt package manager can access the Kubernetes libraries whenever it is performing updates. Let's verify that the step was successful: run apt-get update again, and as you can see, our last "Get" entry includes an update received from the Kubernetes URL. Now let's install all of the components of Kubernetes, which include kubelet, kubeadm, and kubectl: run apt-get install kubelet kubeadm kubectl, accompanied by the -y flag. Looks like the installation is complete. Let's exit our machine and log in again. Run the sysctl command and set net.bridge.bridge-nf-call-iptables equal to 1. This is a prerequisite for installing the pod network, which we will be using while setting up the Kubernetes cluster.
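Collecting those steps in one place, the node preparation looks roughly like this; the GPG key URL is the standard packages.cloud.google.com one, which the narration doesn't spell out, so treat it as an assumption:

    sudo su
    apt-get update
    apt-get install -y docker.io                  # container runtime for Kubernetes
    docker version                                # sanity check
    apt-get install -y apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -   # assumed key URL
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list
    apt-get update                                # should now pull from apt.kubernetes.io
    apt-get install -y kubelet kubeadm kubectl
    sysctl net.bridge.bridge-nf-call-iptables=1   # prerequisite for the pod network

Every instance in the cluster, master and nodes alike, needs this same preparation.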
Now let's initialize our Kubernetes cluster using the kubeadm init command. It seems like the cluster initialization is in progress, and once the preflight checks are complete, we get a lot of certificates generated. Once the initialization is complete, we are given a few suggestions. First of all, we have a confirmation that our Kubernetes master has been initialized successfully. Next up, we have a bunch of commands which should be used if we want to use this cluster as a regular user and not just the root user. I recommend you copy all three of these commands to a safe place, because we will be using them later on. Next is a suggested command to deploy a pod network on the cluster, but we don't need to copy that. And finally, we have a kubeadm join command, followed by the token generated by our master and a 64-digit certificate hash, which we must copy and save somewhere, because this command is extremely crucial and will be used by all other nodes to join our master. Once you have copied all of this, let's clear the terminal. Before we proceed any further, make sure you don't leave any unnecessary whitespace when you copy the command. Now let's run the kubectl apply command, followed by the URL of our pod network configuration. We're using Weave Net, so the URL starts with cloud.weave.works, but you can use any pod network you like, such as Flannel, Calico, etc., and the details for other pod networks can be found in the Kubernetes documentation. It seems like our pod network is set up. Let's check whether our cluster is working: run kubectl get pods, followed by the --all-namespaces flag. You don't need to dig too deep into this command, because we will be going through the whole Kubernetes command line step by step. All you have to notice are the familiar names, such as etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, etc. All of these are components of the Kubernetes architecture which we have studied in theory, and now they're deployed on your Google Cloud VM instance. Of course, these are the components of a Kubernetes master; on the node instances, we will have different components. Now let's grant regular-user access to our master: run the three commands, one by one, which we copied earlier, and to see whether Kubernetes is working for the regular user or not, let's run the same kubectl get pods command again. It seems like all of the pods are up and running, and the Kubernetes master is accessible from the regular user as well. Now let's get back to our GCP VMs page and SSH into node-1. Let's get root access again, and now run the kubeadm join command. If you remember, we had run kubeadm init on the master and received a token from it; now we're using kubeadm join from the node instances to join the master as members of the cluster. The token we are providing is the same one we received when the master was initialized. Hit Enter, and there we go. Once the joining process is complete, we get a suggestion that we should run kubectl get nodes on the master to see whether the node has joined the cluster. Well, we'll do it, but after making node-2 join the cluster. Back on the GCP VMs page, let's SSH into node-2. Nothing too complicated: exactly the same steps which we performed on node-1. Get root access and run the kubeadm join command with the same token. Once that is done, let's follow the suggestion and head back to the master. We have already set up non-root kubectl access on the master, so we don't need to run sudo su again. Simply run kubectl get nodes, and there we go: we have all three nodes listed. But if you notice, node-1 is not Ready yet. Nothing to worry about too much; let's give it some time and run the command again. Bingo! All of the nodes have joined the cluster successfully and are ready to work on. Now that our Kubernetes cluster is properly set up, we are ready to explore different aspects of Kubernetes, like workloads, the kubectl command line, etc. See you in the next lecture.
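As a recap, the bootstrap sequence looks something like the sketch below. The Weave Net URL is the one its documentation suggested at the time, and the join token and hash are placeholders that kubeadm init prints for you, so don't copy them literally:

    # on the master
    kubeadm init
    mkdir -p $HOME/.kube                                    # the three "regular user" commands
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
    kubectl get pods --all-namespaces                       # control-plane pods should be Running

    # on node-1 and node-2, as root, using the values kubeadm init printed
    kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

    # back on the master
    kubectl get nodes                                       # wait until every node reports Ready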
But trust me, Kubernetes is all about parts. So what are parts if we keep this doctor set of architecture in mind where containers are on top off, Doctor, This is where community stand right between Dhaka and Containers. But Kubernetes doesn't host containers as they are. It encapsulates them in an environment or object called barred. Ah, part can have one or more containers inside it, but most of you will find one part per container. Bards fall under the category off workload objects. Here are a few things about parts which you should remember. They're the smallest unit off orchestration in kubernetes and everything revolves around them. The only way to interact with containers in communities is through parts, so they're quite absolute. As we mentioned earlier, each part runs at least one container. It can have more than one, but one is a must. And it is also a standard practice. No, this is what makes part special. Kubernetes is designed with the fact in mind that containers die. The failure is natural. And so the restart policy of containers hosted by parts is set toe always by default, just like swamp. Perform orchestration on containers. High level objects off kubernetes perform orchestration on parts. Now, since we know a little bit about Pardes, let's get to work with them. 63. How to operate Kubernetes? Imperative vs Declarative: working with kubernetes is fun because it has two distinctive ways off accepting requests. In other words, there are two ways to manage objects in communities or to work with communities. The ways are imperative and declarative impaired away demands us to bride All sorts of specific information to kubernetes explicitly, for example, create something update something skilled, something. All of the's are specific commands where the action off creation or update is mentioned. Clearly, this means that we have more control over what do we want Cuban? It is to do. But it also means that we have to spend more time and efforts while doing this. On the other hand, declared away, let's kubernetes figure out things on its own by providing a simple file and asking it toe . Apply it. If the objects mentioned in the file don't exist, Kubernetes creates them, and if they do exist, it scales or updated system. Such an approach might sound absurd, but it becomes quite useful for batch processing where we can control multiple objects over single instruction. There are two ways to communicate Imperative Lee through files and took a month. Either Rican pride files with you, Hamel specs or commands with a bunch of flags. The more preferred way is using files, since it eases a troubleshooting later on, as mentioned earlier, there's only one way to communicate decoratively. It is true files here that input can be a file or a whole directory containing a set of file, which makes batch processing faster. In next demo, we'll see how toe work imperative, Lee and Declarative Lee. 64. Demo: Working with Pods: Create, analyse and delete (Imperative and Declarative): Now that we know what a part is and how it works, let's create one by ourselves. We have seen previously that there are two ways to create any object in communities embedded to and declarative. To make sure that we cover both of these ways, we help to terminals open side by side in one terminal will create apart Imperative Lee. Where is on the other terminal? We will create the part declarative Lee. We have these terminals side by side so that we can compare them once both of them are created. 
Let's start with the imperative one. For creating a pod imperatively, we need to provide all of the specifications to either a command or a YAML file. We will choose YAML this time. Let's write a file called imperative-pod.yaml. We're using nano as our text editor, but you can use any text editor you want. The basics of the YAML file remain the same as Docker Compose; the only difference will be the fields, which are indicated as key-value pairs. With that said, let's get started. Our first field, or first key-value pair, is apiVersion. This field is used to let Kubernetes know which version of the API is being used to create this object. For more information about the API versions and which version to use for which object, you can follow the official Kubernetes documentation at the following link. Next up, we have kind. Kind specifies which kind, or which type, of object is to be created using this file. We want to create a pod, so our kind field, or kind key, will have the value Pod. Next up, we have metadata. It does what its name suggests: it is data about the object which is going to be created. Typically, metadata will contain fields like names, labels, and so on. The primary use of metadata in Kubernetes is for us, and for Kubernetes itself, to identify, group, and sort the pods. We want to name the pod imp-pod, and we want to give it a label which says app equals my-app. You may have noticed that labels are key-value pairs. For now, let's not dwell too deep on labels; let's go further. Next up, we have the spec field, which stands for specifications. You can consider spec the most important field of this file. And why is that? Well, the reason is quite obvious: specs are used to provide the object's configuration information, which means that here the spec field will provide information and configuration about the pod itself. Our first spec is containers. Unlike in Docker, containers here are just a specification, a field of the parent object, which is the pod. Specifications vary with objects, which means that different objects may have different specifications and different fields to provide them. Our next entry, under the containers spec, is the name of the container. It is different from the name of the pod. In theory, you can keep both of them the same, but keeping them different makes things simpler. Next up, we have the image field. The image field describes the image which is going to be used to run this container. By default, Kubernetes uses images from Docker Hub, but if we want to use other registries, we need to provide a specific URL; we'll get into that later. Next up, we have command. This one is quite simple to comprehend: we are asking our container to run a shell command that echoes a string, "Welcome to Container Masterclass by Cerulean Canvas", and sleeps for 60 seconds. We have mentioned all of the required specifications to create this pod. Let's save our file and exit the text editor. In parallel, we're also writing another file, called declarative-pod.yaml, and as you can see, we're providing similar fields as in the previous file, such as apiVersion, kind, and metadata. To distinguish this pod from the previous pod, we're giving it a different name, but both of the pods will carry the same label. Next up, we have the specifications again. The name of the container changes, but the image remains the same, and this time we're asking it to print the same string but sleep for 60 more seconds.
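Putting that narration into YAML, the two manifests likely resemble the sketch below. The container name, the ubuntu image, and the second pod's name are assumptions; the transcript only fixes the pod name imp-pod and the label app: my-app:

    # imperative-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: imp-pod
      labels:
        app: my-app
    spec:
      containers:
      - name: imp-container        # assumed container name
        image: ubuntu              # assumed image; the narration doesn't name one
        command: ["/bin/bash", "-c", "echo 'Welcome to Container Masterclass by Cerulean Canvas' && sleep 60"]

    # declarative-pod.yaml differs only in the pod name (say, dec-pod),
    # the container name, and a longer sleep (sleep 120).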
Let's save this and exit as well. Let's go back to our left-hand terminal and write the command kubectl create -f imperative-pod.yaml. We are asking kubectl to create an object from this particular file, and on the success of this command, we receive the notification that imp-pod has been created. Let's go back to the right-hand terminal. Unlike the imperative way, we'll write the command kubectl apply and mention the file using the -f flag, and this pod is created as well. In this case, even if we had wanted to delete or scale the pod, the command would have been the same; Kubernetes, or kubectl, would have figured out by itself what we wanted to convey through the file, whereas in the case of the imperative command, we specifically had to tell Kubernetes to create an object. In any case, both our imperative and declarative pods are created, so let's see whether they're running or not. Write kubectl get pods. We will be using this command a lot in future demos. It gives a well-arranged list of pods, along with a few more attributes, like how many of the listed pods are ready, what the status of each is, whether there were any restarts during the runtime of the pod, and how long the pod has been running. We can see that both the imperative and declarative pods have been created. Now let's dig deeper into both of the pods by writing the command kubectl describe pods, followed by the name of the pod, which in this case is imp-pod. We'll also run the same command in the right-hand, or declarative, terminal as well. Now we have descriptions of both of the pods, so we can make a fair comparison. Let's start from the top. First of all, we have the names of both of the pods, which are unique. Then we can see that both of the pods are allotted to the same namespace, which is the default namespace. Our imperative pod is scheduled on node-2, whereas the declarative pod is scheduled on node-1. We also have their starting timestamps and their labels, which are common. As for the differences: the imperative pod doesn't have any annotations, whereas the declarative pod has quite a few of them. The reason behind that is that kubectl used the configuration which we provided to create the imperative pod, whereas in the case of the declarative pod, it used a specified pod template and just filled in, or replaced, the information which we provided. Moving further, we have IPs for both of the pods, but we'll get into that later. Next up, we have the container information. As you can see, both of the containers have different names and different container IDs, but the container image and the image IDs are the same. We also have the command which is going to be executed by both of the containers, and it has the slight difference we mentioned. Moving further, we have the state of the container, which is Running in both cases, and we also have the starting timestamp of the container, which means that this is the point where the container went from the created state to the started state. Our first container, the imperative one, or imp-container, has already exited, or terminated, because it completed, whereas the same is not the case with the other one, because its sleeping period was a bit longer. Next up, we have mount and volume information, but we don't need to dwell so deep on that right now; we will look into it when we study volumes for Kubernetes.
My personal favorite part of the container descriptions is the events. This is different from how we used to inspect our containers using Docker. Kubernetes gives us a short and sweet summary of the events which were really important. We can see that both of the containers went through a bunch of events, including their scheduling, the pulling of an image, and the containers having been created and finally started. So this is how we can create and distinguish imperative and declarative pods.

65. Life-cycle of a Pod: Just like containers, pods have their life cycles as well. First of all, a pod is in the Pending state. It means its configuration has been approved by kube-controller-manager and kube-apiserver, but it is yet to be scheduled on a node. Once it gets a green signal from kubelet and is scheduled, it is in the Running state. It means at least one of the pod's containers is definitely running. Sometimes the containers are programmed to exit after performing a certain task. In such a case, the pod goes to the Succeeded state, where all of its containers have exited successfully, or you could say gracefully. If one or more containers fail in between, or a container dies due to being out of memory, the pod goes from the Running state to the Failed state. It can be rescheduled after troubleshooting, in which case it goes back to Pending and then the Running state. Lastly, we have the Unknown state, where the pod is not running but the reason for it has not been determined yet. And this is the life cycle of a pod.

66. Demo: Managing Pod's lifespan with Life-cycle Handlers: Kubernetes provides container lifecycle hooks to trigger commands on container lifecycle events. If we recall, the container lifecycle had five stages: created, running, paused, stopped, and deleted. Out of these five, Kubernetes provides lifecycle hooks for two of the states, which are created and stopped. Let's explore both of these using the lifecycle-pod.yaml file. This is a standard NGINX pod, our lifecycle pod, and under the container spec we have two lifecycle hooks, called postStart and preStop. These hooks' functionality is pretty much as their names suggest. Both of them have handlers attached to them, which are executable commands. The postStart hook's handler will echo a welcome message to a file called post-start-msg, and it will trigger after the container enters the created state. This is the state where resources for the read-write layer are allotted but the container is not running yet; in other words, the latest CMD or ENTRYPOINT instruction is yet to be executed. The hook works concurrently with the pod's container-creation process, which means that if for some reason the handler of the hook hangs or fails to execute, the pod will remain in the container-created state and won't go into the running state. To brief things up: first of all, the container will be created, then the postStart hook will be handled and the message will be printed, and then the container will start running by executing the CMD or ENTRYPOINT command. A general use of this hook is for better debugging, just like a try-and-catch clause in programming, but it also brings the burden of stalling the container if the hook doesn't get handled properly. So if pod events and logs are sufficient for your debugging, you might want to skip using this hook. Lastly, we have the preStop hook, which triggers before termination of the container. We're simply quitting the NGINX process before terminating the container.
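A sketch of lifecycle-pod.yaml consistent with that description follows; the pod and container names and the message file path are assumptions, and nginx -s quit is one standard way to stop NGINX gracefully:

    apiVersion: v1
    kind: Pod
    metadata:
      name: lc-pod                       # assumed name
    spec:
      containers:
      - name: lc-container               # assumed name
        image: nginx
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Welcome > /post-start-msg"]
          preStop:
            exec:
              command: ["/usr/sbin/nginx", "-s", "quit"]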
But if you want to strongly verify this hook, you can send a kill signal to one of the container's crucial processes, and you will find the container exiting with the respective exit code. Let's exit the file and create the pod. Thirty seconds later, the pod is ready. I know we have sold the benefits of containers a lot, but it is always amusing to see such a level of managed isolation being created with so little effort and within such a short time. Now let's execute into the pod with the kubectl exec command and run bash in it. Cat the file post-start-msg, and bingo: the hook was executed successfully. The message is loud and clear. Well, not that loud, but it is quite clear. In the next lecture, we will see how to replace a container's CMD command.

67. Demo: Adding Container's Command and Arguments to Pods: Let's start this demo by printing a list of available pods using kubectl get pods. We only have one pod, the lifecycle pod, which is from the previous demo, because we have deleted the imperative and declarative pods. Don't worry, we will go through how to delete pods as well. But for now, let's go to the file command-pod.yaml. The YAML file looks pretty similar to the previous two demos', so let's focus on the changes here. First of all, the names of the pod and container have changed: the pod is named cmd-pod and the container is named cmd-container. Makes sense. Then, in the spec field, after the name and image of the container, we have the command field. The command field overrides the ENTRYPOINT command of the Docker image. If we do not provide any command or value to the command field, Kubernetes uses the default ENTRYPOINT of the Docker image, but we can change it by providing a command and its arguments. Instead of keeping the container up by running a loop of bash commands, we're just asking it to print a couple of environment variables, so the command is printenv and its arguments are HOSTNAME and KUBERNETES_PORT. You may notice that the command and arguments are written within double quotes and encapsulated by square brackets, and the arguments are separated by a comma. Let's exit the file and make the pod: run kubectl create -f command-pod.yaml. The pod should have been created; let's test it with kubectl get pods. Here we go. But check it out: this pod is not in the Running state; it is in the Completed state. The reason is that we did not provide any endless-loop command like bash. We just asked it to print a couple of environment variables, which it did successfully, within a few milliseconds maybe. So by the time we ran the command kubectl get pods, the container had already finished its task, and the pod was in the Completed state. Let's get a description of this pod using kubectl describe pod cmd-pod. Here is our long, well-structured description. I'm pretty sure you can comprehend most of the parts easily, so let's jump directly to the command and arguments section. The command is the same one we provided, which is printenv, and its arguments are HOSTNAME and KUBERNETES_PORT. Now, if we jump to the events, we can also see that the container started 35 seconds ago, whereas it finished 34 seconds ago; so within one second, all of the commands were performed. We can also verify this by looking at the logs of the pod. Simply write kubectl logs and then the pod name, which is cmd-pod. And there you go: we have our HOSTNAME and KUBERNETES_PORT, both printed.
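The relevant part of command-pod.yaml probably looks like this; the image is an assumption, since the narration doesn't name it, and splitting command from args mirrors the upstream Kubernetes example this demo resembles:

    apiVersion: v1
    kind: Pod
    metadata:
      name: cmd-pod
    spec:
      containers:
      - name: cmd-container
        image: ubuntu                          # assumed image
        command: ["printenv"]                  # overrides the image's ENTRYPOINT
        args: ["HOSTNAME", "KUBERNETES_PORT"]  # overrides the image's CMD

Once printenv exits, the container exits too, which is why the pod lands in the Completed state.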
68. Demo: Configuring Container's Environment Variables with Pods: Hello, everyone. As usual, let's start this. The move with a list off available parts we helped to parts CMD and Life Seaway seaports. One of them is completed and the other one is still running. Now let's open the yamma file environment hyphen, part Gargamel with nano. Again, the family is pretty similar to previous demos, so we should focus on the changes The names off part and container R E n v hyphen part and e n'dri hyphen container. Just like our usual naming convention. If you take a look at the image, we have not simply provided a name with label. We held the whole part on you R l off the image. We have done this because this time we don't want to use doctors Image Registry. We want to use Google Container Registry, which is another place to find container images. In this demo, we're using one off the sample Google images called node. Hello. This note hello is more or less like hello world off Docker image industry and this one is built on top off Alpine base image. With that said, let's get to the cream off this demo, which is E n V. R E N V Field, which is used to provide environment variables to the container. If the container does not have the environment variables provided within this field, it adds it along with its default environment variables. And if unenlightened mint valuable with the same name has already been set up by the docker image, the running container replaces it by the values we provide. So take this example. Let's say we have a docker image and we provided environment variables E B and C equals B Q and R respectively. If we're providing the same environment variables with different values using kubernetes Yamil file just like Docker, only the running container will reflect the changed value, which means that the values will reflect on a copy off the image and the original image will remain unchanged. So the original images environment variables would still be ABC equals becue and are but a copy off. It will have it as a B C equals S de and you or anything else which we provide. In this case, we're providing to environment, valuables, part greeting and part favorable, and their values are suitable to the name as well. Bar greeting his welcome and part farewell. ISS. We don't want you to go with his hat. Smiley. Who that said, Let's save and exit off I'll it's create the part. Using cubes Ideal create hyphen f common. Let's see if it is running or not. The part seems to be running. Now. Let's get a description off this part using cube CDL describe pard, followed by its name, E N V hyphen. Part here is the description off this part. Let's straightaway jump to the environment section and we help to entries in this field board greeting and part farewell exactly the ones which we have set up. Let's clear it and execute this part using cube CTL exact hyphen I t followed by part name , followed by the command, which we want to run. You may notice that disk a man is pretty similar to what doctor has provided for executing a container as well. Now we're in root directory for container. Let's bring our environment variables and there we go. We have a long list off environment variables. This answers more than one questions. First of all, what about the environment variables that we had set up well here they are both poured. Greeting and port farewell are present. And second, when we executed the container, why did we get to root at envy? Part not envy container? 
A second question: when we executed into the container, why did we get root@env-pod, and not env-container? Well, the reason is that we're still in the root directory of the container itself, but the hostname is env-pod, which you can see in its environment variables. With that out of the way, let's exit this container and get back to our terminals.

69. Labels, Selectors and Namespaces: This might be the point where you start feeling that Kubernetes digs deeper into orchestration compared to Swarm. Let's say we have four pods, named pink-light, pink-dark, blue-light, and blue-dark. We can label them to provide a logical grouping of pods: here, both the light and dark pink pods are labeled P for pink, and the rest are labeled B for blue. A label is a tag; it is metadata which allows us to group our pods logically for efficient sorting. Labels are also available with Docker, but they're pretty much useless if we can't do much with them. To complete the functionality of labels, we have selectors. We can use selectors to search for pods with one or more particular labels. Here, we want pods with the label P, so all we get are the two pink pods. We can play around with labels and selectors for all sorts of things. You can also have more elaborate labels and selectors to pick a particular pod, like the light pink one. Now, you may wonder: we can have two pods with the same labels, but can we have two pods with the same name? The straight-up answer is no, but there's a catch: we can have two different namespaces. Just like in programming, a namespace in Kubernetes is a way to isolate pods, logically or willingly. It means we can have two pods with the same name in two different namespaces. In the next demo, we will play with labels and selectors.

70. Demo: Working with Namespaces: Namespaces are a logical partitioning mechanism of Kubernetes which allows its cluster to be used by multiple users, teams of users, or a single user with multiple applications, without any worries or concerns about undesired interactions. Each user, team of users, or application may exist within its namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster. With that out of the way, let's see how many namespaces we have on our cluster. It seems that we have three namespaces at this point in time. Mind you, none of these namespaces was created by us; these are the namespaces provided by Kubernetes, and if you look at their age, all of them have been up for 80 minutes, which is about the time when we first bootstrapped our cluster. We have the default, kube-public, and kube-system namespaces. The default namespace, as its name suggests, is the default namespace for every pod that we create within Kubernetes. kube-system is used by Kubernetes itself to isolate its pods from the default and kube-public namespaces. With that said, let's run one of our most standard commands, just kubectl get pods, and we get what we had expected: the three pods which we had made in previous demos. Now let's add a twist to it: provide the flag called --all-namespaces and see whether we get any more pods. And we have a long list of pods, which means that all this time, Kubernetes was not just running two or three pods; it was running all of these pods. First, let's look at the pods within the default namespace. They are the same ones we had created: the cmd, env, and lifecycle pods. This means the original pods we created fell straight into the default namespace, and all the other pods are in the kube-system namespace. These pods are implementations of the different building blocks of the Kubernetes architecture.
If you remember, we have already studied etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy. We had also installed Weave Net, which is the pod network for our Kubernetes cluster, and all of these pods are running under the kube-system namespace, so they're isolated from whatever we are doing in our default namespace. Let's create a new namespace with the kubectl create namespace command, followed by the name of the namespace which we want to create, which in this case is my-namespace. Now the namespace is created. Next, let's create the same imperative pod which we had created in our first demo, but this time put it in my-namespace instead of default, using the -n flag. Let's get our pods. As you can see, the list of pods under the default namespace is still unchanged: we have the same old three pods, and the imperative pod is nowhere visible. Let's get the pods from my-namespace, and there we go: we have our imperative pod, running for almost 20 seconds. And we can always verify it by listing out the pods of all of the namespaces; check out the last entry: it is the imperative pod.
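The namespace part of this demo, condensed into commands:

    kubectl get namespaces                                # default, kube-public, kube-system
    kubectl create namespace my-namespace
    kubectl create -f imperative-pod.yaml -n my-namespace
    kubectl get pods                                      # default namespace: imp-pod is absent
    kubectl get pods -n my-namespace                      # there it is
    kubectl get pods --all-namespaces                     # everything, kube-system included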
71. Demo: Pod Resource management: When you specify a pod, you can optionally specify how much CPU and memory, or RAM, each container needs. When containers have resource requests specified, the scheduler can make better decisions about which nodes to place pods on, and when containers have their limits specified, we can make sure that the nodes don't crash. Let's start out by getting a list of pods. Then let's open the file resource-pod.yaml, and there we go. The file seems larger than the previous pod YAMLs that we have used, but don't worry: instead of one, we have two containers this time. One is a MySQL database container, whereas the other is a front-end WordPress container; the pod's name is frontend. First of all, let's go through the obvious things, like the names of the containers, the images being used, the environment variables set up, and the metadata of the pod. Once all of those are out of the way, we have the resources field in both of the containers. This field is used to provide per-container resource limits and per-container requests; the resources are memory and CPU. As you can see, we have provided a pretty small amount of resources to both of the containers: the resource limit is 128 MB and the request is just 64 MB. Let's see what happens when we try to create such a pod. Let's save and exit this file. As usual, run kubectl create -f followed by the file name, and the pod is created. Let's list the pods out. It seems like the pod is still in the ContainerCreating state; let's give it a bit of time. Well, it seems like the containers are still being created, or, in other words, they have not been created yet. Why is that? Let's take a look at the description a bit. All right, so the pod is not ready because the containers are still being created. As you can see, our pod is following the resource limitations quite strictly. Let's list the pods again. Come on: only one out of two containers is ready, and the pod is in CrashLoopBackOff status. Let's see what the problem is here. When we run the kubectl describe command again, we can clearly see that the state of the database container is Terminated, and the reason for that is OOMKilled, which stands for out-of-memory killed. Troubleshooting this isn't very difficult: it clearly suggests that the resource allocation limits we have provided are just not sufficient for this container to run. On the other hand, the WordPress container is running properly: even when we look at the events, all of the events regarding the WordPress container seem to have gone well. But in the case of the MySQL database container, the image was pulled successfully, yet the container could not start, because the resources were just not enough. And if you notice, both of these containers are scheduled on the same node, because they are in the same pod; when we run more than one container in a pod, they will be scheduled on the same node. But let's not get distracted from our main objective: we need to figure out a way to make sure that both of these containers run smoothly in this pod. For now, let's delete our frontend pod using the kubectl delete pods command, followed by the name of the pod. There can be one or more pods that we want to delete, but in this case we just want to delete frontend, and it seems to be deleted. Let's get back to the YAML file of our frontend pod and increase the resource limits for our containers: instead of 128 MB, we're changing it to one gigabyte, and while we're at it, let's do the same with the WordPress container as well. Let's save the file, exit nano, and try to create the pod again. And when we list the pods: voila! It didn't even take 11 seconds, and our pod, along with both of its containers, is in the Running state. When we describe it using kubectl describe, we can clearly see that the resource limits have changed, and all of the events regarding both of the containers of our pod went smoothly.
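The per-container resources stanza at the heart of this demo looks like the sketch below; the container names are assumptions, and the values come from the narration (64 MB requests and 128 MB limits, later raised to 1 GB):

    spec:
      containers:
      - name: db-container            # assumed name; the transcript just says "MySQL container"
        image: mysql
        resources:
          requests:
            memory: "64Mi"
          limits:
            memory: "128Mi"           # raising this to "1Gi" is what fixed the OOMKilled crash
      - name: wp-container            # assumed name; the WordPress container is shaped the same way
        image: wordpress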
72. Kubernetes Controllers | Concept and Types: Controllers are a type of workload object, just like pods. A controller acts as a parent or supervisory object to a pod, and it manages the pod's behaviour in certain ways. How a controller deals with the pod depends on which controller it is. For example, a ReplicaSet will create multiple replicas of a running pod; a Deployment controller may perform replication, updates, or service exposure on pods; StatefulSets will arrange the order of execution for pods and make sure that none of the pods break the queue; whereas Jobs will create pods that terminate after execution. In the next lectures, we will work with different controllers and understand them.

73. Introduction to Replicasets: Let's understand controller objects one by one. We will start with ReplicaSets. ReplicaSets are a higher unit of orchestration compared to pods, which means they supervise the pods. Their purpose is pretty obvious: as mentioned earlier, they scale pods, or rather, they manage the number of replicas of a pod. We can increase or decrease the number of replicas of a pod using a ReplicaSet. Pods are given labels, and ReplicaSets are given selectors to keep track of which pods to supervise. It is also possible to provide a pod definition along with the ReplicaSet; it means that the creation of those pods will also be managed by the ReplicaSet. If you do so, you need to provide the pod spec as a pod template in the YAML file of the ReplicaSet. While they're quite useful, standard practice doesn't involve using ReplicaSets directly; they're used under the supervision of Deployments, which we will learn about soon enough.

74. Demo: Working with Replicasets: Let's start out as usual by getting a list of pods. These are the pods from our previous section. Since we don't need any of them right now, let's delete all of them, and we're back to square one. Let's open our file replica-pod.yaml using nano. This is the YAML file of a ReplicaSet; let's parse it one by one. First of all, we have the apiVersion, and if you notice, our API version is different from what we used with pods: pods used to have apiVersion v1, whereas ReplicaSets use apiVersion apps/v1. Next, we have kind. Obviously, since we're creating a ReplicaSet, our object kind is ReplicaSet. Next is metadata: we have a name and labels. We're naming our ReplicaSet replicaset-guestbook, and the labels are app: guestbook and tier: frontend. These labels apply to the ReplicaSet itself; it does not mean that the pods created under this ReplicaSet will carry the same labels. Next up, we have the spec field. Just like in a pod's YAML file, even in the case of a ReplicaSet, spec is the most important field. Our first spec is replicas, or the number of replicas, which in this case is three, which means that this ReplicaSet will create three pods. If you provide five, it will create five pods, and if you provide 50, it will create 50 pods, if your nodes have enough resources. Next up, we have selectors. Selectors are the mechanism used by the ReplicaSet to determine which pods will fall under this ReplicaSet. We have two ways to provide the selectors, which are matchLabels and matchExpressions. Under matchLabels, we have provided the key-value pair tier: frontend, which means that every pod having the label tier equals frontend will directly fall under this ReplicaSet, provided that they're under the same namespace. And our matchExpressions selector says that pods having the key tier, with its value being frontend, will fall into this ReplicaSet. Essentially, both of these selectors are doing the same thing in this YAML, but we have written them both out so that you know there are two ways to mention your selectors. Next up, we have the template. This template is a pod template, just like we discussed earlier in the theory: it provides data about the pods which will be created under this ReplicaSet. Our ReplicaSet will use this template to create the number of pods that is mentioned under the replicas spec. Let's start with the metadata of the pods. We have not provided names here, so the ReplicaSet will title the pods by itself. But we do have labels, and they're quite essential, the reason being that these labels make sure the pods match the condition of the ReplicaSet's selector. Next up, we have the pod specifications, where we are mentioning the containers directly. The container name will be php-redis, and the image will be the guestbook front-end, version 3, from Google's container repository. We have also mentioned the ports information, which means that if we expose these containers, the container's port 80 will be mapped to a port on the host.
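Assembled from that walkthrough, replica-pod.yaml should look close to this; it mirrors the well-known guestbook frontend example from the Kubernetes docs, which this demo appears to follow:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: replicaset-guestbook
      labels:
        app: guestbook
        tier: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          tier: frontend
        matchExpressions:
        - {key: tier, operator: In, values: [frontend]}   # redundant here; shown for comparison
      template:
        metadata:
          labels:
            tier: frontend
        spec:
          containers:
          - name: php-redis
            image: gcr.io/google_samples/gb-frontend:v3
            ports:
            - containerPort: 80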
Now let's check out the description off one off the parts to see if the's parts are different from the parts we had created using individual Yamil files, starting with name and name space. We don't have many differences, apart from the fact that we have a new field this time called Controlled by this means that these parts have a parent object which is controlling them. And in this case, this object is replica set guestbook, which we have just created. Apart from that, most of the description is similar to a regular part, just like porn. We can also list out our replica sets using Cube see deal get RS RS is the abbreviation off replica sets, and there we go. We have one replica set, and it says that this replica said has three desired parts, and it is having three currently ready parts, which means that the replica set is working just perfectly. Now let's check out the description off replica set guestbook using cube CTL Paris replica said guestbook. Till now we have only used cubes. It'll describe command with parts, but now it gives a general format off this command. So Gipsy Deal describe is followed by the type off the object that we want to describe and followed by the name off the object, which in this case is a replica, said guestbook. The description is shorter compared to part. We held generate information like metadata, part status board description and three events where each of the event indicates creation off one off the three replica set parts. In fact, there is another aspect to replica set. Let's try to delete one off the three parts that we have here using cube CDL delete parts, followed by the name off this part and the bodies deleted. If we tried to find a list off parts, now, will it help two parts or three parts? Let's check out Well, it has three parts, the one which we had deleted. It's gone forever, but are replicas had has spun up another part with a new name but same configuration, and you can see that the newest part is earning for 10 seconds. It means that even if the parts which are under this replica said Die crash are are deleted replicas, it will just spin up new parts by itself, which seems us a lot off efforts. 75. Introduction to Deployments: deployments stand even higher than replica sets in terms off supervisor In nature, it means deployments are capable off creating their own replica sets, which in turn will create the parts accordingly. Deployments are kind of all round the objects, which can be used for a lot of things, like creating parts, managing replicas, rolling updates on parts, exposing parts, etcetera, just like replica sets. They also use labels and selectors for port identification. By now, you may have started realizing that labels are a lot more than mere moderator for parts. All of these aspects make deployments a perfect choice for hosting stateless application, where order off part creation may not be that much crucial. And as mentioned multiple times, they're most widely used container orchestration objects in Next Demo will be working with deployments. 76. Demo: Working with Deployments: to avoid any confusion. Let's start with the list off the parts by Running Cube. CDL Get parts we help three parts from a previous replica set. Let them be where they are and let's open our deployment. Dottie Amel file. Let's start from the top. Just like replica sets. Deployments also use epi, a version ab slash we've on their kind. Object type is obviously deployment. We have given it the name off reply hyphen, Engine X. 
Let's go to the spec field. We're using the matchLabels strategy as our selector, and we will be looking for pods with the label app=nginx. Deployments are higher-level orchestration objects compared to ReplicaSets, so if we are creating a deployment, the deployment itself is capable of creating the ReplicaSet it needs. By providing the replicas field, we can instruct the resultant ReplicaSet to create a certain number of pods. Now let's go to the pod template and fill out the data. We will provide the label app=nginx to avoid any conflicts, and we will provide the container information, which includes the name, which is deploy-container, and the container image, which is nginx:1.7.9. We're back to using images from Docker's registry because, well, they are just simple. Right after mentioning the port, let's save and exit this file. As usual, let's write kubectl create -f followed by the name of the file, which is deployment.yaml, and now our deployment is created. Let's get a list of pods again, and we have two new pods here. The top two pods are nginx pods created by the deployment deploy-nginx. First of all, apart from the label which we have provided, which is app=nginx, each pod contains another label, which is for the pod template it is using. This label has been provided by Kubernetes itself. Next up, we have Controlled By. As you can see, this pod is not directly controlled by a deployment; it is controlled by a ReplicaSet, which is controlled by the deployment. Next, we have container details, including image, name, image ID, and status, which is Ready. We also have the normal events of the image being pulled, the container being created, etcetera. Let's clear out our terminal and describe our deployment. The deployment description provides a lot of details, starting with the obvious ones like name, namespace, and labels. We have a description of the ReplicaSet, which indicates that this ReplicaSet is supposed to keep two pods up and running. Below that, we have the strategy type. You might be wondering what kind of strategy we're talking about: the update strategy of the deployment. One of the most well-known use cases of deployments is updating an application without taking it down. Here, the strategy type is RollingUpdate. If you're wondering what rolling update strategy means, just go a couple of steps below, and we have the rolling update strategy details. It says 25% max unavailable and 25% max surge. It means that when this deployment is being updated, only 25% of its total pods can be unavailable, and the cluster is only allowed to deploy 25% extra pods while updating the deployment. Take this example: a deployment has four pods, and we're trying to update it. Then 25% max unavailable means the deployment needs to keep at least three pods up and running all the time, and 25% max surge means that the deployment can only create up to five pods at most. Going below, we have details about the pod template, which is quite common. But if you go even below that, we have the name of the ReplicaSet which has been created under this deployment. And if you look at the events, only one event is directly linked to the deployment, which is scaling up the ReplicaSet; the rest of the events regarding pods and containers are handled either by the ReplicaSet, which works under the deployment, or by the pods, which are governed by the ReplicaSet.
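Pulling the pieces of this demo together, a minimal sketch of what deployment.yaml might look like; the replica count of 2 is inferred from the two new pods that appear, so treat the file as an illustration:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deploy-nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: deploy-container
            image: nginx:1.7.9
            ports:
            - containerPort: 80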
77. Introduction to Jobs: Moving on from deployments, we have jobs. You might have guessed that they're also higher-level units than pods; well, that's because almost every controller is higher than pods. To define them simply, jobs mean pods whose containers won't run for eternity: once the purpose is fulfilled, they exit. In more technical terms, the commands provided to the containers are time- and iteration-limited. Once they get executed, the container gracefully stops and gives the resources back to the host. If you list the pods which are maintained by jobs, them not being in the Running state will not be much of an issue. They will remain in the Completed state once the containers exit, and that is totally fine. Jobs are used for batch or parallel processing. Cron jobs, which are periodic, repetitive jobs, are used for checks or scheduled iterations of a certain task; tasks like checking a database for pending updates every five minutes, etcetera. In the next demo, we will be working with jobs. 78. Demo: Working with Jobs: We have five pods from the previous two lectures. Let's open the jobs.yaml file using nano. As we have seen in the theory, jobs are a run-to-completion type of orchestration object, which means that the command that we provide under the container's spec won't just be an endless loop command. Going from the top, we have a different apiVersion compared to ReplicaSets and deployments, which is batch/v1, and our object kind is Job. The name of the job is job-pi. We have named it that way because this job is going to print the value of pi to 2000 decimal places. Going further, we have the pod template, where under the spec field the container details include the name of the container, which is job-container; the image, which is Docker registry's Perl image; and the command. In this command, we're running a Perl script to print the value of pi to 2000 decimal places. And another aspect, or another specification, of the job is its backoff limit. Since a job is a run-to-completion type of object, we can't have it lingering around forever. This job will try to make sure that the command of this container works, but if it doesn't for some reason, if the container fails, then the job will try four repeated attempts at running the container. After four attempts, if the container is still not running, the job will back off, and it will fail. With that said, let's save and exit the file. Let's create the job using kubectl create -f, and our job is created. Let's verify it by getting a list of pods, and there we have our job-pi pod, which is eight seconds old. If we describe the pod, we can see that it is controlled by the job called job-pi, and its status is Succeeded, which is different from the other pods that we have seen recently. Going further, we can also get a list of jobs, and the job can be described using kubectl describe jobs job-pi. Just like regular orchestration objects, our job also has description fields like name, namespace, selectors, and labels. It also has a start time, a completion time, and the duration for which the job was running. Finally, we have the pods' statuses, where one pod has succeeded, which was our desired state, and zero have failed. It has only one event, the successful creation of the pod. Since we had used the command to print the value of pi, let's see if the output is available using the logs of the pod created by the job. Run the command kubectl logs followed by the pod name, and there we go; try to memorize this value.
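A sketch of what jobs.yaml might look like: this follows the well-known pi example from the Kubernetes docs, which matches the Perl image and command described here (the restartPolicy line is required for jobs even though the lecture doesn't call it out):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: job-pi
    spec:
      backoffLimit: 4          # give up after four failed attempts
      template:
        spec:
          containers:
          - name: job-container
            image: perl
            # print pi to 2000 decimal places, then exit
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never # jobs require Never or OnFailure here

Once the pod completes, kubectl logs <pod-name> prints the computed value.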
79. Introduction to Services and Service Types: All right, huge disclaimer. Since all of you have already studied Docker Swarm, "service" will be a term you're already familiar with. But both of these services are different. In the case of Swarm, a service acted like a deployment, where you can declare all of the desired objects and the manager will convert them into tasks. But here in Kubernetes, services are merely networking objects for pods. Since both of the platforms chose different interpretations of the same term, it becomes our job not to get confused by it. With that out of the way, let's dig deep into Kubernetes services. Firstly, services are also objects, just like pods or controllers, but they fall under the category of connectivity. To understand how services work, let's stick to two dummy pods, a blue dot and a pink dot. We want these pods to be able to talk to the external world, or simply to each other. Services, then, are connectivity objects which serve as a stack of network configurations that can allow pods to communicate. Just like deployments or ReplicaSets, services also use labels and selectors to determine which pods will be connected to them. Say our pods have the labels db and php, while our service has a selector looking for db. So the pink pod won't affiliate with the service, whereas the blue pod's connectivity will now be handled by this service, and it can potentially talk to the outside world as well. Remember the words: potentially talk, not necessarily. Let's brief up the services. The first two points are already familiar to you, but they're important to list out. You might be surprised that Kubernetes itself uses these services to perform all sorts of in-cluster and global communications. A service generally provides a cluster IP to each pod, which allows it to talk within the cluster; but if we choose to abstain from such a practice, we can create a headless service. And finally, Kubernetes also provides native support for multiple services and cloud load balancers. We recently mentioned that services can make pods potentially talk to the outside world; but why potentially? Well, services also have types. The first of them is ClusterIP, which only exposes the service within the cluster. It means that the outside world cannot access it, but pods within the cluster connected to the service can talk to each other. The second type is NodePort, which exposes the service on the external IPs of all of the nodes of the cluster, including the master. This will by default also create a cluster IP, to which the node port will eventually be routed. If we have a cloud-provided load balancer, we can use the LoadBalancer type of service, which not only exposes the service on all nodes but also provides a dedicated external IP to the pods connected to the service. And finally, we have ExternalName, which allows us to use a DNS address to communicate with the pods connected to the service. In the next lecture, we'll take a look at ClusterIP and NodePort services, whereas we'll visit load balancers when we run Kubernetes on a managed cloud provider. 80. Demo: Working with ClusterIP services: After performing the cascaded deletion in the last section, our cluster seems to be pretty neat and clean. We have no pods, no ReplicaSets, no deployments, and no jobs lingering around. With that said, let's open the file deploy-nginx.yaml. This is a regular YAML file for a deployment called deploy-nginx, which is going to run a couple of containers with the nginx image.
Let's not go too deep into that, because I'm pretty sure you already understand it by now; exit it. Now let's open svc-nginx.yaml. This is something new: this is the YAML file of a Kubernetes service. As always, starting from the top, we have apiVersion; just like with pods, we're using apiVersion v1. The object kind is Service, its name is svc-nginx, and its label is run=my-nginx. Going forward with the spec of the service, we have the ports information. The port information suggests that container port 80 is supposed to be exposed using this service. And finally, we have the selector. As we have seen in the theory, the service will use the selector to identify which pods to expose, and here the selector is run=my-nginx, which also happens to be the label of the pods being created by our deployment. Let's save and exit this file. Let's create both our deployment and our service. Our deployment is ready, and both of our pods are up and running. Now let's get a list of our services. We have two services lying around here: one is created by Kubernetes itself, and the other is svc-nginx, which was created by us almost 25 seconds ago. If you notice, the type of both of the services is ClusterIP, and if you remember, ClusterIP allows containers to talk within the cluster, which means that the nginx containers of our nginx deployment are exposed within the cluster on port 80, and we're accessing the Kubernetes service within the cluster using port 443. Let's describe our service using kubectl describe svc, which is the abbreviation of services, followed by svc-nginx, which is the name of the service. The description is pretty short. We have basic information like name, namespace, labels, annotations, and the selector, which is run=my-nginx. Then we have the type of the service, which is ClusterIP. Next we have the target port, which is 80 on the TCP protocol, and we also have endpoints for both of our containers. If you remember the Docker sections, endpoints are the mechanism for enabling communication with Docker containers. We have said that our containers are exposed within the cluster, which means that the home page of the nginx web server should be hosted on these IPs, but the scope should be limited to our cluster. Well, let's try it by running the curl command, followed by http://, the IP of our service, followed by a colon and the port, which is 80. And there we go: this is the HTML of the nginx web server's welcome page, which means that our service is up and running.
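A minimal sketch of what svc-nginx.yaml might look like, following the names given here; it mirrors the my-nginx service example from the Kubernetes docs:

    apiVersion: v1
    kind: Service
    metadata:
      name: svc-nginx
      labels:
        run: my-nginx
    spec:
      ports:
      - port: 80
        protocol: TCP
      selector:
        run: my-nginx        # picks up the deployment's pods

With no explicit type, the service defaults to ClusterIP, and a check from inside the cluster would look like: curl http://<cluster-ip>:80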
81. Demo: Working with NodePort Services: We don't need to create a separate deployment for this demo; we'll just use the one we created in the previous demo. Let's list it out: our deployment is deploy-nginx, which has two replicas of the nginx web server. To work with a NodePort service, first let's delete the service that we created previously, which is svc-nginx; it was a ClusterIP type of service. Now let's open svc-nginx.yaml again. As you can see, it is different from what it used to look like in the previous demo. The kind, apiVersion, metadata, etcetera are the same, but the service type this time is NodePort. For HTTP we have provided port 8080, whereas for HTTPS we have provided port 443. It has the same selector as the previous one, run=my-nginx. Let's save the file and exit it. Create the service using kubectl create -f, and our service is created. When we get a list of our services, we have svc-nginx, which was created almost 10 seconds ago, and this time it is a NodePort type of service. The ports field indicates that the service's ports 8080 and 443 are exposed on node ports 30099 and 32105, respectively, for HTTP and HTTPS connections. The NodePort service serves as the external entry point for incoming requests to your app. The assigned node port is publicly exposed in the kube-proxy settings of each worker node in the cluster. It means that our service svc-nginx is live on the external IPs of all of the nodes of our cluster, and it also has a cluster IP created for itself, so that containers can talk to each other within the cluster as well, using that IP. When we describe the service, you can see that apart from Port and TargetPort, we also have new information, which is NodePort. And since we have exposed two ports of our pod, or container, we have two different node ports, or public ports, exposed on our host machines. The reason is that when we expose our app by creating a Kubernetes service of type NodePort, a node port in the range of 30000 to 32767 and an internal cluster IP address are assigned to the service. To test it, let's note down the external IP of one of our nodes; we're taking node-1 here, whose IP is 35.200.215.139. Let's curl it just like last time, but this time, instead of using the cluster IP, we're using the external IP of our node. And here we go: we get the welcome page of the nginx web server. You can also try it in a web browser, and it is serving nginx, which means that not only from this instance or its web browser, but from any web browser in the world, you can use the combination of the external IP of your node and the exposed port, and what you will get is the content which you are hosting on your web server. So we have finally exposed our web server globally.
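A sketch of the NodePort variant of svc-nginx.yaml as described above. The lecture names the service ports (8080 and 443) but not the target ports, so the targetPort values here are assumptions borrowed from the similar example in the Kubernetes docs; the node ports themselves are normally auto-assigned from the 30000-32767 range:

    apiVersion: v1
    kind: Service
    metadata:
      name: svc-nginx
      labels:
        run: my-nginx
    spec:
      type: NodePort
      ports:
      - name: http
        port: 8080
        targetPort: 80      # assumption: nginx serves on 80
        protocol: TCP
      - name: https
        port: 443
        targetPort: 443     # assumption
        protocol: TCP
      selector:
        run: my-nginx

After creating it, curl http://<node-external-ip>:<assigned-node-port> should return the nginx welcome page from any machine that can reach the node.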
82. Introduction to Storage in Kubernetes: First of all, storage objects are just another type of object, and the idea behind storage objects is similar to that of Docker: they're used to create a backup of important information which is generated during the runtime of the containers. Then you might be wondering, why should we study storage again? Well, Docker had four types of storage provisions: volumes, tmpfs, bind mounts, and third-party plugins. Kubernetes is somewhat different. Apart from supporting volume creation on the host itself, it supports AWS Elastic Block Store, it supports Azure Disk and Azure File, it supports Google Cloud Persistent Disk, it supports OpenStack Cinder, it supports StorageOS, it supports Portworx, and the list is even bigger than this. Of course, we don't need to learn each and every one of these options; just the availability of so many options is reason enough to learn the workings of Kubernetes storage objects. And this was just about persistent volumes: Kubernetes also has temporary volumes and projected volumes. Apart from this overwhelming availability of options, there is another key difference as well. If you remember the nature of storage in Docker, it went something like this: containers generate data via applications, volumes store the data as a backup, and when we delete the container, the volume still remains as it was. Over time, we may have too many dangling volumes, which can send us into a shortage of storage, or unwanted bills. In the end, we have to delete them manually, which can be a daunting task. But in the case of Kubernetes, each volume attached to a pod has a bound lifespan, which is exactly as long as the pod itself. This way, even if the container dies, the pod will stay alive and restart the container, so the volume won't disappear; but if we delete the pod itself, the volume will vanish as well. That is pretty helpful. In the next demo, we will learn how to use the storage objects of Kubernetes. 83. Demo: Mounting Volume to a Pod: It's always great to start a section with a clean slate, so we don't have any pods or deployments lying around. Let's create a new pod with our YAML file redis-pod.yaml, but before that, let's go into it. This is possibly the smallest YAML file we have seen in this course. It's pretty simple: we just have the basic information required to create a Redis database pod. Let's exit it. Let's create the pod and check if it has been created. All right, we're good to go. Let's exec into this pod using kubectl exec -it and run the bash command. Here we are in the root of our container, and our pwd, or present working directory, is /data. What should we do here? Well, let's write an intro about Redis itself: echo "This is an open-source, in-memory data structure store used as a database." Let's store it in redis-intro.txt. Now let's update this container using apt-get update. You might be wondering: this is a Redis database container, right? How can we run apt-get update? Well, the base image of this container happens to be Debian, so using apt-get update is totally okay. And now the update is complete. In a nutshell, we have made some modifications to the container running inside our Redis pod. Now let's kill this pod. Use ps aux to find the list of running processes. Let's kill the Redis process itself, which will end up in the termination of the container. And there we go: we are out of our container, because it has been killed. Let's get a list of pods again. Well, our pod is up and running, but the container has had one restart, which is due to us killing the Redis process. So we went inside the container, we made some changes, and we killed the container. What about the changes we had made? The easiest way to check is to get inside the container again and see if the files that we had created are still there. Let's do it. It seems like our /data directory is empty, so the redis-intro.txt file that we had created has vanished, and this is exactly what data loss means. To avoid it, we will use a simple emptyDir volume. Let's exit the container and go back to our pod YAML file. Apart from the container image and name information, we have a few more lines here. First, let's go to the volumes line. We have declared a new volume called redis-volume, and its type is emptyDir; and we have also declared a mount for that volume, which indicates that the mount path is /data. Let's understand the functions: the volumes field declares to Kubernetes that a new volume of the emptyDir type needs to be created and named redis-volume, while the volume mount makes sure that our Redis container's /data directory is mounted on the emptyDir volume. Just for the sake of novelty, we're naming our pod redis-vol and our container redis-vol-container. Let's save and exit the file. As usual, let's create the pod and check if it's created properly.
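A minimal sketch of what the updated redis-pod.yaml might look like with the emptyDir volume added; the names follow the lecture:

    apiVersion: v1
    kind: Pod
    metadata:
      name: redis-vol
    spec:
      containers:
      - name: redis-vol-container
        image: redis
        volumeMounts:
        - name: redis-volume
          mountPath: /data    # the container's working directory from the demo
      volumes:
      - name: redis-volume
        emptyDir: {}          # lives exactly as long as the pod

Because the emptyDir volume belongs to the pod rather than the container, files under /data now survive a container restart, which is exactly what the next steps verify.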
Now let's look at the description of this pod to see if it has anything different from the pods that we created in earlier sections. It does have a difference: the description is now also populated with volume information. Our redis-volume is mentioned here, which is of the emptyDir type, and Kubernetes is kind enough to let us know that an emptyDir is a temporary directory that shares a pod's lifetime, just like we discussed in the theory. And we also have the mount information in the containers field. To check if this volume is working properly, let's follow the same steps we had performed on the pod without a volume, and see if there are any changes once we kill the container. Well, we have killed the container, so next time we get the list of pods, both of our pods should have one restart. Let's exec into it again and see if the /data directory has any content within it. ls, and bingo, we have our redis-intro.txt intact. Let's cat it to see if it's the same file. Well, it is; our volumes are working properly. 84. Demo: Mounting Projected Volume to a Pod | Secrets: We have two pods from our last lecture; let's leave them untouched. Now let's create two temporary files, username.txt and password.txt, and fill them in with the required credentials. Use echo -n, which writes the string without a trailing newline, and write admin into a temporary file called username.txt. Do the same with the password. You can use any string you like for the password; we're keeping this absurdly-difficult-to-pronounce one. Now let's create secrets out of these files. Secrets are a type of projected volume, which are different from persistent volumes because they can mount multiple sources in a single directory. In the current version of Kubernetes, secrets, ConfigMaps, and service account tokens are all projected volumes. Here we are working with secrets. With that said, let's create a secret with kubectl create secret, followed by the secret type, which in this case is generic; the secret name, which is user; and the source of the secret, which is --from-file with our temporary file username.txt. Let's create another secret called db-pass, which stands for database password, from password.txt. Now let's list all the secrets out. We have three secrets lying around here. A couple of them, user and db-pass, are the ones we just created a few seconds ago, whereas the default token was created almost 25 hours ago, and it has three different sources. If you look at the type of these secrets, the ones we created are Opaque, which means that the data inside the secret won't be visible even if we describe it, whereas the default token secret is a service account token, which Kubernetes uses for its pod creation purposes. Let's describe all the secrets, and as you can see, we only get the metadata about each secret, not the data itself. We can see that the file is 13 bytes large and that the secret is derived from the source password.txt, but we can't see what's inside the file. So if we delete password.txt now, and it will be deleted sooner or later since it's a temporary file, our password will be protected.
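Consolidated, the commands from this part of the demo look roughly like this; the secret name db-pass and the admin username are as heard in the lecture, and the password is whatever you choose:

    echo -n 'admin' > ./username.txt          # -n: no trailing newline
    echo -n '<your-password>' > ./password.txt

    kubectl create secret generic user --from-file=./username.txt
    kubectl create secret generic db-pass --from-file=./password.txt

    kubectl get secrets          # both show type Opaque
    kubectl describe secrets     # metadata and byte sizes only, never the contents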
You may wonder: what if we want to see what's inside the secret? Well, let's mount it to a pod. Open the projected-volume YAML file. This is a general busybox pod with the secrets mounted as projected volumes. If you go into the volumes field, you can see that the name of the volume is test-volume and the type is projected. In the projected volume, the sources are the two secrets which we created, and both of these secrets are mounted to a common path called projected-volume on the busybox container. Let's save and exit the file. Now you can guess what we are going to do: we're going to create the pod and check if it has been created, just like that. Let's exec into busybox and run a shell on it. If we remember correctly, we had mounted our secrets to a directory called projected-volume, so let's see what's inside that directory. Here we go: both of the files are available. Let's cat one of the files, say password.txt. And it's the same password which we had entered, which means that sensitive information, such as usernames and passwords, is kept safe in the sandbox of a container and can be shipped along with it instead of being packaged in some other archival format. It is simpler and more secure. 85. Demo: Good old MySQL Wordpress combination with Kubernetes: Let's create a comprehensive application which will demonstrate the uses of both services and volumes. First of all, let's start by creating a secret called mysql-password, and this time, instead of having a file as a source, we'll have a string as a source. To do so, we'll write --from-literal=password=abc@123. Of course, you can choose any password that you want. And our secret is created. Now let's create the backend of our application. Open mysql-db.yaml. As you might have guessed, the backend is a deployment, which uses apiVersion apps/v1. The deployment's name is mysql-db, and it has a label, app=wordpress. We also have selectors with the condition of label matching, and the labels which the selector will be looking for are app=wordpress and tier=mysql. Now let's go to the pod template. The pods will also have both of these labels, so no confusion there. And under the containers section we're creating a container called mysql-container, which uses the image mysql:5.6. We're setting up an environment variable called MYSQL_ROOT_PASSWORD, and it will get its value from the secret which we created earlier. We're mentioning that the container's port 3306 should be exposed. Going down to volumes, we're creating an emptyDir volume called mysql-volume and mounting it to the /var/lib/mysql directory of our container. You might be wondering: we have checked out projected volumes, and we have checked out emptyDir, which is a normal volume. What about persistent volumes? Well, hold your breath; there's a catch about that, and we will visit it soon as well. Let's put a separator and add the information about our service in the same YAML file. Yes, you can do that: you can create more objects from a single YAML file because, after all, YAML is just a markup language which is used as a platform to declare our desired state in the Kubernetes cluster. We have mentioned the details of a service called mysql-db, which will expose port 3306 of the containers having the labels app=wordpress and tier=mysql, which means the containers falling under the deployment that we're creating. Now let's exit the file. Oh, and by the way, the service type is ClusterIP this time. Let's use the declarative method to create our objects, using kubectl apply -f mysql-db.yaml. And we didn't have to tell Kubernetes anything; it understood everything by itself, and our deployment and service mysql-db are created.
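A sketch of how mysql-db.yaml could combine the deployment and the service described above in one file; note the --- separator between objects. The secret name mysql-password and its key password follow the --from-literal command as heard in the lecture, so treat the whole file as an illustration:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql-db
      labels:
        app: wordpress
    spec:
      selector:
        matchLabels:
          app: wordpress
          tier: mysql
      template:
        metadata:
          labels:
            app: wordpress
            tier: mysql
        spec:
          containers:
          - name: mysql-container
            image: mysql:5.6
            env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-password   # created with --from-literal=password=...
                  key: password
            ports:
            - containerPort: 3306
            volumeMounts:
            - name: mysql-volume
              mountPath: /var/lib/mysql
          volumes:
          - name: mysql-volume
            emptyDir: {}
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-db
    spec:
      ports:
      - port: 3306
      selector:
        app: wordpress
        tier: mysql

A single kubectl apply -f mysql-db.yaml then creates both objects.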
We can play around with it a bit further by listing out our pods, deployments, and services, and checking out the descriptions as well. It seems like we're quite done with our backend; now let's get to the front end, which, as you might have guessed by now, is a WordPress container. Go to wordpress-frontend.yaml, and here we are. It's another deployment, called wp-frontend, which has the label app=wordpress, and a similar selector: the selector is asking for the labels app=wordpress and tier=frontend. Now for the pod template: the pods are following the same labels, and they are created using an image called wordpress:4.8-apache. We're also setting a couple of environment variables, called WORDPRESS_DB_HOST and WORDPRESS_DB_PASSWORD. The host is getting its value directly as mysql-db, whereas the password is getting its value from a secret. We're also creating an emptyDir volume called wp-volume and mounting it to the path /var/www/html of our WordPress container. Just like previously, let's create our front-end service as well. The service is pretty intuitive; you can work out almost everything by yourself, like what the name of the service is, which labels its selector is looking for, and which port and pods it will expose. And finally, this time the type is LoadBalancer. Let's unpack why our backend was ClusterIP and the front end is LoadBalancer. Well, the front end is going to be accessed by users across the globe, so it needs to be able to talk outside the cluster as well, whereas the backend will only be talking to the front end, so we don't need to expose it to the entire world. This adds some more security to our MySQL database. Let's apply this file as well, and our deployment and service are created. All right, let's list out our services. We have three services: kubernetes, which is the default one and is being used by Kubernetes itself; mysql-db; and wp-frontend. If you look at the external IP column, kubernetes and mysql-db naturally will not have an external IP, since they're ClusterIP kinds of services. But what about wp-frontend? It's a LoadBalancer service, so it should have an external IP. The cluster IP for the load balancer has already been created, by the way; in fact, even port 80 of the WordPress container has been exposed on 31002, which means that a node port has been exposed as well. Then what about the external IP? Well, you see, we haven't configured any load balancer in any of the VMs which we're using as our Kubernetes cluster nodes, so Kubernetes is trying to figure out what to do about the load balancer. But while it is at it, it has exposed our service as a NodePort, and it will be hosted on the external IPs of all of the nodes, just like our previous NodePort service. But then how will we be able to demonstrate LoadBalancer? Well, we will get to that. Let's note the IP of our node-1 again, and when we browse to the external-IP:node-port combination, what we get is a WordPress installation page, which means that WordPress has been hosted successfully on our Kubernetes cluster across all of the nodes, and it seems to be working as well. Since starting to learn Kubernetes, this is your first full-fledged multi-tier application, which is nothing short of an important milestone. Let's take our journey even further and learn more exciting aspects of Kubernetes. 86. Blackrock Case Study: BlackRock is a financial services company.
It means they're a bunch of investors who take money from clients, invest it in the market, and make a profit. They share their profit with the clients, and everyone stays happy. BlackRock wanted its investors to work on a cutting-edge Python and Spark combination, which would make their analytics faster and more accurate. But in such cases, people may end up using environments different from one another, like different Python versions, and the results may not be as great as expected for everyone. What to do, then? BlackRock had a team of 20 engineers who developed a web app on Spark and Python and hosted it using Docker and Kubernetes. The web app was deployed to all of the investors across the globe, and none of them had a difference of environment. They all were able to utilize this app for themselves and enhance their analytical output, which helped their clients as well. And all of this was achieved in just 100 days. So, with Docker and Kubernetes, BlackRock was able to upgrade its software infrastructure and improve performance within almost one financial quarter. 87. Node eviction from a Kubernetes Cluster: We're starting off with no pods whatsoever. Let's go to nginx.yaml. It is the YAML file of a simple nginx ReplicaSet with two pods. Let's create the ReplicaSet and get the list of pods. This time, we'll print the output wide, because we want a bit more information than what is provided generally. Notice two things here. First of all, we haven't exposed this ReplicaSet with a service yet, but still our pods have their IPs, their cluster IPs. Why is that? Well, do you remember the service kubernetes, which was created by default? These pods are connected to the kubernetes service. And second, we have node information: our first pod is scheduled on node-2, and the second pod is scheduled on node-1. Now let's drain node-2 and see what happens. By the way, draining means completely vacating the node and making it unavailable for any scheduling whatsoever. The process of draining happens in two stages: first of all, the node is cordoned, so that no further pods are scheduled on it; and second, it is drained, which means its existing pods are evicted and repositioned. But we're getting an error here. It says that this node contains DaemonSets, which cannot be drained. This error is pretty generous, though, and the solution is also provided in the brackets. So let's use the --ignore-daemonsets flag, and there we go: our pod is evicted. Now let's get the list of nodes. We have the same number of nodes, and all of the nodes are Ready, but node-2 is unavailable for scheduling; scheduling is disabled on it, which means that the draining process was completely successful. Then what about the pod which was scheduled on node-2? Let's check it out. That pod was smoothly repositioned to node-1, and as you can see, its age is 42 seconds, which means that the pod on node-2 was killed and a new pod was created on node-1. So the number of replicas of the pod is still intact, but the previous pod died. This is how node drainage, or node eviction, works. Before we go any further, let's uncordon the node so that it will be available for scheduling again. There we go: node-2 is back to normal.
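The commands used in this eviction demo, consolidated into a sketch; node names like node-2 follow the lecture's naming and will differ on your cluster:

    kubectl get pods -o wide                    # which node is each pod on?
    kubectl drain node-2 --ignore-daemonsets    # cordon node-2, then evict its pods
    kubectl get nodes                           # node-2 now shows SchedulingDisabled
    kubectl get pods -o wide                    # the evicted pod respawned on node-1
    kubectl uncordon node-2                     # make node-2 schedulable again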
88. Demo: Rolling Updates | Rollout, Pause, Status Check: Let's start this demo with a list of pods. We have two pods from a previous ReplicaSet; let them be. Let's open the update-pod.yaml file. It is an nginx deployment with 10 replicas using nginx:1.7.9. Keep the image number in mind: it is nginx:1.7.9. Let's save and exit this file and create this deployment, and our deployment is created. All of the 10 pods are up to date and available. Now let's describe our deployment just to reassure ourselves: the rolling update strategy is 25% max unavailable and 25% max surge, which, if you remember, means that this deployment needs to keep at least eight of its nginx replicas running at any given point in time (25% of 10, rounded down, is two pods that may be unavailable), and even while providing an update, it can create 13 replicas at most. Now let's use the kubectl set image command on our deployment deploy-nginx, and let's set our nginx container image to 1.9.1, which earlier used to be 1.7.9. So, in a way, we are providing an update to our deployment, and as stated, the image is updated. But that just means that nginx 1.9.1 is the desired state of the cluster. Does it mean that the deployment is updated as well? Let's check it out by running kubectl rollout status, followed by the deployment name. It seems like the update process is not totally complete: five out of 10 replicas have been updated, and if we wait a bit longer, we will reach the number 10 soon enough. Again, the pace of this process may be subject to the size of the image, the rolling update strategy, or the network connection at Google Cloud's data center, which is the least likely one. In fact, let's not be satisfied here: let's provide another update to this deployment by setting the image to nginx:alpine. And there we go. Now we can have a history of the revisions of our deploy-nginx deployment. Run kubectl rollout history deployment/deploy-nginx, and it seems that we have three revisions, with zero being the initial state. We can also dig deeper into a certain revision by running kubectl rollout history, followed by the deployment name, followed by its revision number; we'll write --revision=2. It says that this revision consisted of updating the pod template with the image nginx:1.9.1. Just like how we can set parameters of the deployment and perform a rolling update, we can also undo the update using kubectl rollout undo followed by the deployment name, and the undo was successful. Let's get a list of pods. Well, all 10 of them seem to be up and running. By the way, if you're wondering what happened to the previous pods of the nginx ReplicaSet: I just deleted them. Lastly, we can roll the deployment back to a specific revision as well. Let's roll it back to revision 2, and when we describe the deployment, we can see that our image is set to 1.9.1. And when we get to the events, all we can see is the deployment being scaled up and down multiple times, which is fine, because that's what we intended to do.
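Put together, the rollout commands from this demo look roughly like this; the container name deploy-container is an assumption carried over from the earlier deployment demo, so substitute whatever name your pod template actually uses:

    kubectl set image deployment/deploy-nginx deploy-container=nginx:1.9.1
    kubectl rollout status deployment/deploy-nginx        # watch the update progress
    kubectl set image deployment/deploy-nginx deploy-container=nginx:alpine
    kubectl rollout history deployment/deploy-nginx       # list revisions
    kubectl rollout history deployment/deploy-nginx --revision=2
    kubectl rollout undo deployment/deploy-nginx          # back to the previous revision
    kubectl rollout undo deployment/deploy-nginx --to-revision=2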
89. Introduction to Taints and Tolerations: Going back to the Kubernetes architecture, we have the master and the nodes. Let's say we have a workload of three pods, and it gets distributed evenly between all the nodes. None of the pods is on the master, and even if any of them does try to get scheduled on the master, it will be blocked. Why is that? Well, in a normal Kubernetes configuration, the master has declared a flag, or taint, which effectively says "no pods", meaning it will not allow any pod to be scheduled on itself, and that is the master's desired state. If a pod tries to get settled on the master, it will bring the master into an undesired state. Nobody wants that, so the master will block it, and the pod will have to go somewhere else. The rest of the nodes can also have taints. Let's say node-2 has a taint saying that it doesn't want to allow any pod with the label lp, which could stand for "light pink". In that case, just like services or deployments, the node will have a selector which looks for the lp label, and if our pod ends up having it, it will be kicked out of the node and will be scheduled on some other node. If, by any chance, the other nodes also happen to block it, due to taints or insufficient resources, the pod will have nowhere to go, and it will remain in the Pending state. The pod can use a ticket, or a wildcard kind of provision, to bypass the taint and mandate the node to schedule it. This will bring the node into a less desired state, but it is better than pending pods. This ticket is called a toleration. When tolerations are applied to a pod, they give conditional immunity to all taints, which means the pod will get scheduled, but only after fulfilling some condition. Let's say the condition is to wait for 300 seconds, or five minutes, which is the default for most pods. After waiting for five minutes, the pod will be able to use the toleration and get scheduled. In the next lectures, we will be working with taints and tolerations. 90. Demo: Scheduling the Pods using Taints: We're back on our VM instances page of GCP Compute Engine. Let's create a new VM instance named node-3, which we will add to our cluster. Let's set its region to asia-south1, and again we're choosing two vCPUs and 7.5 GB of memory for it, just like the previous nodes. Let's keep the image at Ubuntu 16.04, but this time our boot disk type will be SSD instead of the standard HDD-backed persistent disk. Let's set its size to 20 GB. SSDs are generally costlier compared to standard persistent, or HDD, disks, but we have a special purpose here, so we have chosen SSD for this node. Allow full access to the cloud APIs along with HTTP and HTTPS traffic, and hit Create. Our VM is created. Now let's SSH into it, and we have navigated to node-3. Just like for the master, node-1, and node-2, you also need to install the prerequisites, Docker and Kubernetes, on node-3. Once you're done with that, SSH back to our master node. And here we are; we have navigated back to our master. Now let's get a token which can be used by node-3 to join the cluster: run kubeadm token list. Here we got the token; it is the same token which we had used to make node-1 and node-2 join the cluster. But if you take a closer look at it, you will see that the token is already invalid, and the reason behind that is that a token generated by kubeadm is only valid for 12 hours, and we are way past that now. So what to do? Well, we need to ask Kubernetes to generate another token, which can be used by further nodes to join the cluster. Let's run kubeadm token create, and here we go: our token is created. Copy this token and paste it somewhere you can access it later. Now let's SSH back to node-3. Here, run the kubeadm join command, just like we had run it on node-1 and node-2, but this time our token is different: we're using the token which we generated just a few seconds ago to join this cluster. And it seems like our process was successful.
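The join flow, sketched as commands; the token, CA-cert hash, and master address below are placeholders to substitute with your own values (kubeadm token create --print-join-command can print the full join line for you):

    # on the master
    kubeadm token list            # earlier tokens may already be expired
    kubeadm token create          # mint a fresh join token

    # on node-3 (substitute your real values)
    sudo kubeadm join <master-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

    # back on the master
    kubectl get nodes             # node-3 appears, Ready after a short while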
kubeadm suggests that we run kubectl get nodes on the master to see if the node-joining process was successful, so let's do it. We're back on the master; let's run kubectl get nodes. node-3 is at least visible. It is not ready yet, but let's give it some time. Let's run the command again, and we are ready: node-3 has joined the cluster. All of the nodes are running the latest version, Kubernetes 1.12.1, and this version is considered the latest, at least at the time we're recording this course. Now let's get a wide description of all of these nodes. Well, there is no conflict of Docker versions either: all of them are running the same Docker release and Kubernetes version 1.12.1, which is quite reassuring. Now let's get the labels on our nodes. Apart from the master, all of the nodes are sharing similar labels, where the only difference is the hostname. Let's add another label to node-3, and this time the label is disktype=ssd. We're using this label because, even though we have used SSD as the disk type of this node, Kubernetes won't realize it by itself. If we want to use this feature of the node to sort, or schedule, the pods, we need to tell Kubernetes explicitly that this node has disktype=ssd, and the best way to do so is to put a label on it which is unique compared to all the other nodes. Let's get a description of our node to see if our label has taken effect, and yes, our label is visible. Now let's get a wide output of the list of pods. We have 10 pods from the previous deployment that we had created, and all of them are scheduled on either node-1 or node-2. None of them is scheduled on the master, because the master is not allowing any pods to be scheduled on it with its NoSchedule taint. And since we added node-3 recently and we have not created any deployment since, node-3 doesn't have any pods running in its default namespace either. Now let's open the file test-pod.yaml. This is a simple YAML file of an nginx pod, and the center of focus here is the spec field called nodeSelector. A node selector is a way to tell the kube-scheduler that the pod should be scheduled on a certain type of node, and just like a regular selector, the node selector also uses labels to identify the node it wants to be scheduled on. Here, our node selector has the label disktype=ssd, which is the same as what we provided to node-3 recently. Ideally, this pod should be scheduled on node-3. Let's see if that happens or not. Save and exit the file and create the pod. Once more, let's get the list of pods as wide output to see if our recently created pod is scheduled on node-3 or not, and yes, it is. We explicitly controlled the node where our pod was supposed to be scheduled using a node selector. The pod is 71 seconds old, and it is running on node-3. Perfect. Now let's make sure that no more pods get scheduled on node-3 by tainting it, using kubectl taint nodes followed by the node name and the taint condition. Here the taint is disk=hdd with the effect NoSchedule, passed as a combination of a key-value pair and an effect separated by a colon, which tells the scheduler to keep pods (such as our hdd-labelled ones) off node-3 unless they tolerate this taint. And our node is tainted. If you want to comprehend this logically: we're separating the pods which need to be scheduled on SSD nodes from those meant for persistent-disk (HDD) nodes, which in most cases is a real task that you would be performing on your cluster as well. Some of your pods may require SSDs for their performance, while the others may not, so it's better not to schedule those on that node.
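As a consolidated sketch, the labelling, node selection, and tainting steps of this demo might look like the following; node names are the lecture's, and the heredoc is just a compact way to create the test pod:

    # tell the scheduler that node-3 is SSD-backed
    kubectl label nodes node-3 disktype=ssd

    # a pod that asks for an SSD node via nodeSelector
    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - name: nginx
        image: nginx
      nodeSelector:
        disktype: ssd
    EOF

    # repel non-tolerating pods from node-3 ...
    kubectl taint nodes node-3 disk=hdd:NoSchedule
    # ... and remove the taint later (note the trailing '-')
    kubectl taint nodes node-3 disk=hdd:NoSchedule-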
And just by means of labels and taints, we are trying to make sure that the pods that need to be scheduled on SSDs explicitly go there, and the other ones don't touch that node. Let's describe our node again to see if the taint has taken effect, and it is visible. Now let's run a deployment called hdd with six replicas and the nginx image, and let's label all of its pods disk=hdd. Our deployment is created. We're getting a warning that kubectl run might be deprecated in the future, but for now it is working just fine. Let's get a wide list of pods to see whether any of the six pods is scheduled on node-3 or not. Well, it doesn't seem so: all of the new pods are scheduled either on node-1 or on node-2, whereas node-3 still has only the one nginx pod which we had scheduled earlier. Now let's remove the taint on node-3 and delete our deployment hdd. The deployment is deleted, which means all of its pods are gone as well. Let's create the same deployment again, with the same number of replicas, the same label, and the same image, and see if we get any changes in the scheduling. Well, we did get some changes: out of the six pods newly created by deployment hdd, two of them are scheduled on node-3, which we had untainted just a few seconds ago. So this is how we use taints, tolerations, and labels to sort, or schedule, our pods effectively. 91. Demo: Autoscaling Kubernetes Cluster using HPA: We all know that despite however much prediction we do, sometimes the number of containers or pods that we have deployed to serve the front end or back end is just not enough, and we need to scale them to remain safe. In such situations, we can use a feature of Kubernetes called HPA, or Horizontal Pod Autoscaler. In this demo, we're going to use HPA to automatically scale one of our deployments. Our screen is divided into two instances of the master's terminal: the terminal on top will be used to create and monitor the deployment, whereas the terminal on the bottom will be used to create a pseudo, or dummy, load. Let's start out by getting the list of pods, and as you can see, we have deleted all of the previous pods to start fresh. There are no pods, no deployments, just the default kubernetes service. Now let's create a deployment called php-apache by using Google's container registry image hpa-example. We're limiting our CPU request to 200 millicpu, and we're exposing the container's port 80, with the standard warning that kubectl run might be deprecated in the future. Our deployment is created. Now let's try to create a pseudo load by deploying a busybox container. To create the pseudo load, let's spin up a busybox container using the kubectl run command. We're furnishing it with the -i and --tty flags so that we land directly in it once it's created, and we have navigated to our busybox container. Now let's use wget -q to hit the Apache web server which we just spun up with our previous deployment, and we got our OK, which was a pseudo load. Now let's deploy an HPA, or Horizontal Pod Autoscaler, using kubectl autoscale deployment, followed by the deployment name, a usage parameter, a minimum number of replicas, and a maximum number of replicas. Our usage parameter is --cpu-percent=50, which means that once 50% of the CPU is consumed, the autoscaler will spin up a new replica to keep all of the pods, or containers, healthy and prevent them from overloading. The HPA, or Horizontal Pod Autoscaler, has been created; it says that the php-apache deployment has been autoscaled.
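The key commands of this demo, sketched out; this follows the upstream Kubernetes HPA walkthrough of the kubectl-1.12 era, which uses the same hpa-example image. The --min=1 and --max=10 bounds and the pod name load-generator are assumptions, since the lecture doesn't state them:

    # CPU-limited Apache deployment, exposed on port 80
    kubectl run php-apache --image=k8s.gcr.io/hpa-example \
        --requests=cpu=200m --expose --port=80

    # scale between 1 and 10 replicas, targeting 50% CPU usage
    kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
    kubectl get hpa

    # generate load from a throwaway busybox pod (second terminal)
    kubectl run -i --tty load-generator --image=busybox /bin/sh
    # then, inside the container:
    while true; do wget -q -O- http://php-apache; done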
Run kubectl get hpa. Our autoscaler is running successfully: it is 10 seconds old, it has one pod running, and zero new replicas have been created. Now let's create an endless loop of the pseudo load which we created earlier, by putting the same command we used previously inside an infinite while loop. And there we go. If you're wondering why there are so many OKs: our busybox container is continuously sending requests to the php-apache service, and at some point in time one of its pods will get 50% of its CPU utilized, and the autoscaler will have to spin up a new replica. Let's wait for a while and keep these OKs flowing. Now let's run kubectl get hpa again, and there we go: the number of replicas has increased. So, just within a minute or so, our deployment php-apache has scaled up. 92. Demo: Deploying Apache Zookeeper using Kubernetes: In this demo, we are going to create a production-grade Apache ZooKeeper cluster. ZooKeeper is a centralized, open-source server management system for distributed cluster environments. ZooKeeper helps distributed systems reduce their management complexity by providing low latency and high availability. To proceed with this demo, we recommend that you clean up your Kubernetes workspace, which means you delete all of the pods, deployments, services, ReplicaSets, or any other objects pending whatsoever. This is suggested because during the process our nodes will be drained out, meaning that if we have any pods irrelevant to the ZooKeeper cluster, they might be compromised, which is a kind of pain that we don't want to take at this moment. With that said, let's begin deploying our ZooKeeper cluster. Let's start by opening the zookeeper-hs.yaml file. It is a service of the name zk-hs, where hs stands for headless. Now, what is headless, you may wonder? Well, we will get to that in a moment. Let's go to the spec field. We have two ports mentioned here: port 2888 for the server, and port 3888 for leader election. The clusterIP field has its attribute set to None. It means that this service won't provide any cluster IP to the pods operating under it. The reason for doing so is that we want the identities of the pods themselves to be used for the communication, but more on that later. Finally, we have the selector, app=zk, because we are deploying a ZooKeeper application. Let's save and exit the file. Now let's open the file of another service, called zookeeper-cs.yaml. This time the service name is zk-cs, and it is a ClusterIP type of service with only one port mentioned, which is 2181, the client port. The service also has the selector app=zk. With that said, let's exit that one as well. If you're wondering why we created two services, the reason is pretty simple: we wanted the previous two ports to be handled by a headless service, whereas the client port can be handled by a generic ClusterIP service. Next up is the file called zookeeper-pdb.yaml, where pdb stands for PodDisruptionBudget. This is a new kind of object that we are encountering here, but don't worry, it's not that complicated. Let's go from the top. We have an apiVersion of policy/v1beta1, and the kind of the object is PodDisruptionBudget. A disruption budget bounds how many pods can be compromised, or unavailable, while we are providing any update to the pods.
The name of this PodDisruptionBudget is zk-pdb, and it is using a selector with the label app=zk, because the pods will be holding this label as well. The maxUnavailable field has the value 1, which means that whatever update we're providing, only one pod can be disrupted, or only one pod can be unavailable. In other words, the update needs to be provided one pod at a time. With that said, let's exit this file as well. If you're wondering why we are just going through the files and exiting them: don't worry, we'll create all of these objects simultaneously.
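Before we move on to the StatefulSet, here is roughly what the three small manifests we just walked through look like; this sketch follows the upstream Kubernetes ZooKeeper tutorial, which uses the same names and ports:

    apiVersion: v1
    kind: Service
    metadata:
      name: zk-hs
      labels:
        app: zk
    spec:
      clusterIP: None            # headless: pods get their own DNS identities
      ports:
      - port: 2888
        name: server
      - port: 3888
        name: leader-election
      selector:
        app: zk
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: zk-cs
      labels:
        app: zk
    spec:
      ports:
      - port: 2181
        name: client
      selector:
        app: zk
    ---
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: zk-pdb
    spec:
      selector:
        matchLabels:
          app: zk
      maxUnavailable: 1          # roll changes one pod at a time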
Finally, let's go to zookeeper-ss.yaml, and if you are wondering what ss is, it stands for StatefulSet. You might have seen a glimpse of StatefulSets in the theory of workloads. A StatefulSet is somewhat different from a deployment. Deployments are ideal for stateless applications, because the order of pod appearance, or pod creation, doesn't matter at all, whereas in StatefulSets the pods will be created in a certain order. The reason can be a dependency on one another, or the nature of the application, which requires one step to be performed before another step. So you can say that deployments are ideal for stateless applications, whereas StatefulSets are ideal for stateful applications. Starting from the top, our StatefulSet is using apiVersion apps/v1, which is the same as deployments. Going further, we have named our StatefulSet zk, which stands for ZooKeeper. Then we're jumping directly to the spec field. Our StatefulSet also has a selector, which means that it will only govern the pods which contain, or which match, the label app=zk. Just like deployments, StatefulSets are also capable of encapsulating ReplicaSets and services within them, so we have mentioned the service name, the ZooKeeper headless service zk-hs, and the number of replicas of the pods which will be created, which is three. Next up, we have the pod template, starting with the metadata. The pods are going to carry the label app=zk, because that's what we have been setting up all this time. Then we have the spec field. Our containers will be named kubernetes-zookeeper, and we have mentioned imagePullPolicy: Always, which means that regardless of the image's local availability, whenever the container is to be created, the image will be pulled, always. Next up, we have the image: we're using Google Container Registry's kubernetes-zookeeper image, version 1.0-3.4.10. We have mentioned all three ports which we had defined in the headless and ClusterIP services, which are the client, server, and leader election ports. Next up, we have the command. This command will initiate our ZooKeeper cluster. The command is quite large, so we have broken it down flag by flag. Let's take a look at it. We're starting up a shell and running start-zookeeper, followed by the --servers flag, which has the value 3, which means that we will have three servers. If you noticed, we had asked our StatefulSet to create three replicas, which means that each container will serve as one of the ZooKeeper servers. In all of these servers, the data directory will be /var/lib/zookeeper/data, which is a path from the ZooKeeper configuration. Similar paths are the data log and configuration directories, which are /var/lib/zookeeper/data/log and /opt/zookeeper/conf, respectively. Then we have provided the ports which we had exposed using our services: the client port, the election (or leader election) port, and the server port, which are 2181, 3888, and 2888, respectively. Then we have the tick time. This is kind of a definition, or declaration: we're declaring our tick time as 2000 milliseconds, so whichever configuration fields or flags use ticks as their unit, each tick will be 2000 milliseconds, or two seconds. Next up, we have the init limit, and its value is 10. You might be wondering what that 10 means. 10 seconds? 10 milliseconds? 10 minutes? Well, it's 10 ticks, and here a tick is equal to 2000 milliseconds, or two seconds, so our init limit is 20 seconds. The init limit binds our servers to elect a leader and join it within the specified amount of time, which means that within 20 seconds the servers will have to elect a leader and join it. Next is the sync limit, which here is 5 ticks, or 10 seconds. The sync limit is defined to bind the servers of the cluster which are not the leader to catch up to the updates of the leader server. It means that if the leader is getting an update, the other servers connected to it should receive that update within 10 seconds. That update can be a bit of a file, an update of the configuration, some newly added file, or anything else. Next, we're providing some other configurations, like a heap memory of 512 megabytes, max client connections of up to 60, and a purge interval of 12; here, 12 is in hours, and the purge interval defines that every 12 hours our cluster needs to be purged, or refreshed. Then we have other configurations like the max session timeout, the min session timeout, and the log level, which respectively have 40000, 4000, and INFO as their values, where the first two are defined in milliseconds. Finally, we have the volume mount and volume info. We're creating a new volume called zoo-volume, of type emptyDir. If you have provisions for persistent disks, you can use those as well, but for the demonstration of this demo, emptyDir works just fine, and we're mounting it to the path /var/lib/zookeeper of our containers. Before saving and exiting the file, let's have a quick recap of what we have done: we have created a StatefulSet which will create three replicas of a ZooKeeper container, and all of these containers will start their ZooKeeper instances with common configurations. These containers are mounted on an emptyDir volume called zoo-volume, and all of these containers have the label app=zk, for ZooKeeper. Let's save and exit the file. Now let's create these objects one by one, starting with the headless service; we have created the PodDisruptionBudget, the StatefulSet, and the other service as well. They all follow the same format of running kubectl create -f followed by the file name, so nothing new there. Now let's get the list of pods. Run kubectl get pods followed by the flags -w and -l app=zk. If you're wondering what that means: we're watching the output of this command, and we're only taking a look at the pods which have the label app=zk. If you cleaned your cluster earlier, then these will be the only pods that you have in your workspace. Let's wait a little longer, and now all three of these pods are running. These pods were created by the StatefulSet that we defined earlier.
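A trimmed sketch of what zookeeper-ss.yaml might look like, following the flags walked through above; it mirrors the manifest from the upstream Kubernetes ZooKeeper tutorial, including the image path that tutorial uses:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: zk
    spec:
      serviceName: zk-hs
      replicas: 3
      selector:
        matchLabels:
          app: zk
      template:
        metadata:
          labels:
            app: zk
        spec:
          containers:
          - name: kubernetes-zookeeper
            imagePullPolicy: Always
            image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10"
            ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
            command:
            - sh
            - -c
            - "start-zookeeper \
               --servers=3 \
               --data_dir=/var/lib/zookeeper/data \
               --data_log_dir=/var/lib/zookeeper/data/log \
               --conf_dir=/opt/zookeeper/conf \
               --client_port=2181 \
               --election_port=3888 \
               --server_port=2888 \
               --tick_time=2000 \
               --init_limit=10 \
               --sync_limit=5 \
               --heap=512M \
               --max_client_cnxns=60 \
               --purge_interval=12 \
               --max_session_timeout=40000 \
               --min_session_timeout=4000 \
               --log_level=INFO"
            volumeMounts:
            - name: zoo-volume
              mountPath: /var/lib/zookeeper
          volumes:
          - name: zoo-volume
            emptyDir: {}    # swap for a persistent volume in real deployments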
And if you take a closer look, zk-0 got created first, followed by zk-1, then zk-2, which means there was a strict order of pod creation and running. Now let's exec into these containers and print their hostnames. We don't want to do this over and over again, so we can be a bit smarter and write a loop to perform this command repetitively. Run for i in 0 1 2, followed by do kubectl exec zk-$i -- hostname, and finish it with a semicolon and done. Here, the $i in zk-$i will be replaced by 0, 1 and 2, so the command will be executed against all three pods and their hostnames should be printed. There we go: our hostnames are zk-0, zk-1 and zk-2. In a ZooKeeper ensemble, which is another term for its cluster, servers use natural numbers as unique identifiers, and each server stores its identifier in a file called myid, kept in the server's data directory. They do it to keep track of each other. Let's examine the content of this myid file. We will exec into our ZooKeeper containers using kubectl exec, followed by the command cat /var/lib/zookeeper/data/myid, which means we're going to cat the content of myid. And let's encapsulate this command in a for loop by writing for i in 0 1 2, followed by do echo "myid zk-$i", so we can comprehend the output better. And we got those unique identifiers: zk-0 is identified as one, zk-1 is identified as two, and zk-2 is identified as three. Apache recommends using FQDNs, which stands for fully qualified domain names, instead of IPs to address the servers of a ZooKeeper ensemble. To use them, we need to obtain them first, so let's run the for loop again, and this time use kubectl exec to print the hostname followed by the -f flag, which stands for fully qualified. Hit enter, and we got the domain names, or fully qualified domain names, for each of our ZooKeeper containers. Our server zk-0's FQDN is zk-0.zk-hs.default.svc.cluster.local, and both of the other servers are also following the same pattern, apart from the fact that the hostnames are replaced respectively. We can take a look at a few more configurations as well. We had mentioned in the command section for the containers that the configurations of ZooKeeper will be stored in /opt/zookeeper/conf, so let's cat the contents of a file called zoo.cfg under the same path. It says that this file was auto-generated and we are advised not to edit it, so we will follow the advice. Whatever configurations we had provided while starting the ZooKeeper instances are intact, and they're reflected just like they should be. It means that the ZooKeeper cluster, or ensemble, is at least configured properly. Now let's see if it's working properly. To test it, we will populate a path on the zk-0 server and check whether the changes made on zk-0 are reflected on the other servers, zk-1 and zk-2, or not. To do so, run the command kubectl exec zk-0, followed by the command zkCli.sh, which will start the ZooKeeper command-line shell script, and then a native ZooKeeper command, which is create, followed by the path we want to populate and then the data. Our path is hi-from-sender and our data is hi-from-receiver.
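For convenience, here are the commands from this part of the demo in one place, including the zkCli.sh calls we are about to run; the path and data strings are as heard in the demo.

    kubectl get pods -w -l app=zk        # watch the pods come up in order
    for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
    for i in 0 1 2; do echo "myid zk-$i"; \
      kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
    for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done   # the FQDNs
    kubectl exec zk-0 -- zkCli.sh create /hi-from-sender "hi-from-receiver"
    kubectl exec zk-1 -- zkCli.sh get /hi-from-sender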
We're providing this path because this is what we'll be checking on the other servers; if we check hi-from-sender on the other servers and they provide hi-from-receiver, then our communication is successful. Let's hit enter. The ZooKeeper watcher notifies that a state synchronization event took place, and some data is populated under the path hi-from-sender. Let's clear our terminal. Now let's execute this on server zk-1 and run the command kubectl exec zk-1, again with zkCli.sh, which will start the ZooKeeper command line, and this time we're going to get hi-from-sender. Earlier we had used the command create; this time we're using get, which means that we are getting the data under this path. If the path itself does not exist, we will get an error. If the data under this path does not exist, we won't find it. And if everything goes smoothly, we should see hi-from-receiver at the end. That's it. Hit enter, and there we go: the changes made on the zk-0 server are reflected on the other servers as well, which means that our ZooKeeper ensemble is working properly. Before we exit this demo, let's clean up whatever we have created; we might find something interesting there as well. Let's delete the StatefulSet zk. It says that the StatefulSet is deleted, but if we take a look at the pods, they will take a little while to be terminated. Let's watch them. And interestingly, StatefulSets don't just create the pods in a particular order; they also delete and terminate them in an orderly manner as well. Once we're done with that, make sure to delete the PodDisruptionBudget and the services too. This may seem like a smoothly run process, but that is just the power of Kubernetes. We actually managed to deploy a production-grade ZooKeeper ensemble, which is the base of big data applications like Hadoop. We started from creating a simple NGINX container, and now you're capable of running full-fledged ZooKeeper ensembles, or clusters, as well. 93. Pokemon Go Case study: Pokemon Go is an augmented-reality-based mobile game which makes its users walk on the streets and catch virtual Pokemon on their smartphones or tablets. This way, they can create a big roster of Pokemon and possibly become a Pokemon master. If we consider the growth stories of popular mobile games: initially, a few users join the game, impressed by the idea or content; over time, the game gains more users due to positive ratings and word of mouth; and finally, the users increase rapidly and the game becomes a trend. This series of growth has a lot of advantages. Developers can determine what worked and what didn't, they can gain some revenue before scaling wide, so the risk becomes smaller, and they have enough time to spread out their servers and cover different countries of the world. But Pokemon Go's case was different. Due to its phenomenal fandom across the world, the hype for the game had already been generated when they launched it in Australia and New Zealand. They were expecting a few hundred thousand users to log in during the initial phase, which was around two weeks, but that amount was crossed within 15 minutes of release. In the worst case, they had prepared for five times more users than expected, but just within a couple of days, the number of users had exceeded fifty times their expectations. It means they had millions of users to begin with. This was a huge challenge to overcome.
They had to scale rapidly and make sure that their users didn't see something like this. The solution was figured out even before the problem occurred. Pokemon Go was developed largely in Java, C++ and C#, but the services were hosted using Docker and Kubernetes as a managed solution provided by Google Cloud. This way, they had the backing of Google's infrastructure, and they could scale as much as they wanted, rapidly. They didn't have to set up servers or perform immediate hiring or contract services. In fact, the solution was so efficient that while releasing the game in Japan, they simultaneously brought up a thousand new Kubernetes nodes while making sure the previously running ones were completely unaffected, this way taking in more users while keeping their already large user base happy. In the next sessions, we will learn Kubernetes on cloud. 94. On-premise Kubernetes or Managed Kubernetes on Cloud? Make a choice!: As we know, students have limited computing infrastructure. If you're using Kubernetes for learning purposes and you end up running three or more VMs simultaneously, your system may not remain in its best shape, which is not a happy day. On the other hand, industries have much more sophisticated infrastructure, like servers which can hold a large number of VMs, and even if they do run out of resources, they can always buy or rent more. So the solution should be simple, right? If you are a student, use cloud; if you are a professional with industry infrastructure, use local installation. Well, it's not that simple, because there's a thing called hosted or managed Kubernetes as well. It is provided by leading public cloud providers like GCP, AWS or Azure, and it gives all the facilities of a Kubernetes cluster without the pain of managing it. So before deciding whether it is the right solution for us or not, let's look at its pros and cons. It provides quick and limitless scaling very efficiently, as seen in the Pokemon Go case study. But if your containers demand specific infrastructure, like a certain GPU or something like that, it may not be available in your region, in which case you may have to resort to private infrastructure. You need to hire fewer people to manage a cluster and can leave all the hassle to cloud providers, but if the existing staff is unaware of this or not skilled enough, they may have to do a minor skill uplift. Pay-as-you-go and dynamic scaling reduce the risk of unwanted infrastructure investments, but carelessness, like leaving unused nodes running, can bring tears to wallets. High availability, load balancing and monitoring are mostly one click away thanks to cloud services, but you may have to go through an unavoidable migration process. You can arguably get more reliable performance, with more security risks, but again, both of these aspects are debatable. So for someone who doesn't want to scale their staff and wants quick scale-up with fewer risks, managed Kubernetes might be the best choice. But for someone who wants to keep their data absolutely safe and doesn't want to migrate, private infrastructure would be a better option. As always, for you as a student, learning managed Kubernetes will definitely be a great asset and skill. So let's get started. 95. Demo: Setting up Google Kubernetes Engine Cluster: And we're back at the GCP dashboard. We're still running on our credits, and our bill is still zero, so we're quite safe on that front.
Let's go to the navigation menu, the hamburger icon, or the three horizontal lines on the top left corner of our dashboard, scroll down to Kubernetes Engine and click on Clusters. We don't have any clusters created yet, so we're seeing this screen. It is giving us a few options: take a quickstart, deploy a container directly, or create a cluster. The most friendly-looking one seems to be create cluster, so let's go for it. Just like with VM instances, GCP is prompting us to define some of the specifications of our cluster. But before that, let's see what we have in the cluster templates. We have a standard cluster, a CPU-intensive applications cluster, a memory-intensive cluster, a GPU-accelerated cluster and a highly available cluster. All of these are useful for different applications, but we will stick to the standard cluster. The default given name is standard-cluster-1, but that's too cliche; let's name it something else. Call it k8s-cluster, where k8s stands for Kubernetes. The next choice is location type. We can have either a zonal or a regional cluster: a zonal cluster lives within a single zone, whereas a regional cluster spreads out across different zones of a region. The choice of location type is permanent. For high reliability, regional might be the better choice, but we are not aiming for such performance-intensive applications at the moment, so we'll just run with zonal. Let's choose our zone. You can choose any zone you prefer; we'll pick europe-north1-a. Next is the master version. This means the Kubernetes version which will be running on the master instance. The default is 1.9.7-gke.6, and we will keep it at that. Now let's define our node pool, which means the number of nodes and their machine types. We're creating a three-node cluster, and all of these nodes will have Container-Optimized OS, or COS, installed on them. They're using machines with one virtual CPU and 3.75 GB of memory, and our boot disk size is 10 GB per node. We have enabled auto-repair to avoid any potential failures, and our service account is the Compute Engine default service account. Let's allow full access to all of the cloud APIs. We can also attach metadata to our cluster, like labels and taints, but that feature is in beta right now, so let's skip it and click on save. Finally, let's click create. And here we are: our cluster is created. We have the name of the cluster, which is k8s-cluster; our zonal location, which is europe-north1-a; the number of nodes, or cluster size, which is three; the total number of cores, which is three vCPUs, because each of the nodes has one vCPU; and the total memory, which is 11.25 GB, because each of the nodes has 3.75 GB of memory. We can look at the description of this cluster by clicking on its name. Starting from the top, we have the master version, which is the default one, the endpoint for this particular cluster, and some other information such as the pod address range and details of Stackdriver logging and monitoring. Below that, we also have the node pool specs, which are pretty much what we had provided. Let's go to the storage section. Well, this is not about the boot disk size of the nodes; it is about the storage objects within the cluster. Since we haven't done anything to the cluster, both of the fields, persistent volumes and storage classes, don't have any special entries.
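As an aside, the console clicks above map roughly onto a single gcloud command; n1-standard-1 is the machine type matching one vCPU and 3.75 GB of memory, though the exact flag set here is an assumption rather than something run in the demo.

    gcloud container clusters create k8s-cluster \
      --zone europe-north1-a \
      --num-nodes 3 \
      --machine-type n1-standard-1 \
      --disk-size 10 \
      --image-type COS \
      --cluster-version 1.9.7-gke.6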
And if you look closely, there is a storage class called standard, which provisions Google Compute Engine's standard persistent disks. We will use this storage class in the future to provision some persistent volumes. Finally, let's go to the nodes section. These are the details of all three nodes of our cluster. These long and complicated names are the names of the VM instances which were used to create the cluster, and all of their statuses are ready. Next up, we have the requested and allocatable CPU sizes. While the requested CPU sizes differ, the allocatable size is common, which is 940 milli-CPU, meaning more or less one vCPU. Next up, we have requested and allocatable memory: 2.7 GB is allocatable, whereas we had provided 3.75 GB. So where is the rest of the memory? Well, you can think of allocatable memory as the user-space memory of these virtual machines, which means the remaining gigabyte or so will be used by the kernel space of these machines. Till now, we haven't requested or allocated any storage. It seems like we know our cluster better than before. In the next lecture, we will navigate through the cluster and play around with it a bit. 96. Demo: Accessing GKE Cluster: In the last lecture, we created this cluster on GKE, or Google Kubernetes Engine. This time, let's connect to it and navigate through it. The most intuitive option seems to be pressing that connect button, so let's do it. When we click on connect, Google prompts us with a command to run in Cloud Shell. Cloud Shell is a CLI shell provided by Google to perform all sorts of commands. You can think of Cloud Shell as SSH access to a VM which already has the gcloud command line set up for us. Without further delay, let's click on run in Cloud Shell, and a Cloud Shell has opened. Let's resize it a bit while it's connecting, to make it look prettier. And there we go: Google welcomes us to our Cloud Shell, and they're friendly enough to print that command on the terminal as well. All we have to do is press enter. But before we do that, let's try to comprehend this command. It says that we are getting the credentials of a cluster named k8s-cluster from our project, in zone europe-north1-a. In a nutshell, it is giving access to our k8s-cluster to the VM which is hosting the Cloud Shell. There we go. Now we should be able to run the kubectl command line. Let's run kubectl get nodes. Depending on your network connectivity, the zone or region you have chosen, or the load on Google Cloud itself, the speed of the operation may vary a bit, but you will definitely get fruitful results. And here we go: here's the list of all three nodes, which we saw in the previous lecture as well. It looks more or less like the output on the cluster which we had bootstrapped by ourselves, but there is a little difference. Check out the roles column: none of the nodes has a master role. Why is that? Well, we have not bootstrapped this cluster; we have just provisioned it. Google has bootstrapped it, and it is allowing us to use it as a hosted or managed Kubernetes cluster. So the master is managed by Google. What is the IP address of the master? What is the VM name of the master? What is the size of the master? What is its architecture? We know nothing about it. All we know is that it is running Kubernetes version 1.9.7, because we had said so while creating the cluster.
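For reference, the two commands at the heart of this lecture look like this; gcloud fills in the project from its active configuration, or you can pass it explicitly with --project.

    gcloud container clusters get-credentials k8s-cluster \
      --zone europe-north1-a       # imports the cluster credentials into kubeconfig
    kubectl get nodes              # lists the three worker nodes; the master is hidden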
This doesn't just add another layer of reliability and security; it also saves us from handling the pains of the master, which blocks pods from being scheduled on it. Let's run kubectl get pods, and, as expected, no resources are found. Moving further, let's get pods from all namespaces, and here we get a long list again. But this time the pods are not the same: all of these pods are on the node instances, and none of the master's pods are available here. Can you find kube-apiserver, or kube-controller-manager, or even kube-scheduler? None of them are here, because the master is completely out of access. Instead, what we do have is kube-proxy for each of our nodes, a pre-configured Kubernetes dashboard, DNS pods, and monitoring components like Heapster for our Kubernetes cluster. It feels like an entirely different cluster from what we had bootstrapped by ourselves, which it is, on the backend at least; but on the frontend, we will be using the kubectl command line just the way we used it on our previous cluster. So you know what? In the next lecture, we'll create an application on this GKE cluster. 97. Demo: Persistent Volume and Load Balancing on GKE: We have created the WordPress application on Docker Compose and on our bootstrapped Kubernetes cluster. Let's try it with GKE now. We are done with writing YAML files, we have written a lot of them, so this time we will use the Kubernetes Engine samples provided by Google and directly clone them from GitHub. Use git clone followed by this link. There we go. Let's see if the repository is here. Well, it is. The kubernetes-engine-samples directory is quite huge, so let's pick the Kubernetes samples out of it. Go to the samples directory and list out the components; there are a lot of examples, but we want to focus on the WordPress persistent disks example. Let's navigate to the wordpress-persistent-disks directory, and when we look into it, we have a bunch of YAML files. A few of them seem quite familiar: mysql.yaml, mysql-service.yaml, wordpress.yaml and wordpress-service.yaml. These are all files we have worked with previously. But we have a few new files as well, which are mysql-volumeclaim.yaml and wordpress-volumeclaim.yaml. What are those? Let's check them out. So it turns out that mysql-volumeclaim.yaml is a file for declaring a persistent volume claim. Persistent volumes are storage objects in Kubernetes, and since they are persistent, even if the pod dies, the volume doesn't vanish. So there needs to be a mechanism through which a newborn pod can mount itself to the volume. Just as secrets are consumed through projected volumes, persistent volumes have PVCs, or persistent volume claims. These objects are used by pods to claim a certain volume and use it afterwards. Starting from the top, our kind is PersistentVolumeClaim, which uses the same apiVersion as a pod or replica set, which is v1. In the metadata section, we have given it its name. To understand persistent volume claims better, compare them with pods; they're actually pretty similar. Pods consume node resources, whereas persistent volume claims consume persistent volume resources. Pods ask for CPU and memory, while persistent volume claims ask for storage. In the spec section, we have two fields: accessModes and resources. For different applications, persistent volume claims have different access modes, like ReadWriteOnce or ReadOnlyMany, etcetera.
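Here is a minimal sketch of mysql-volumeclaim.yaml as just described, including the 200 GB request we're about to read; wordpress-volumeclaim.yaml looks the same apart from the name.

    cat <<'EOF' > mysql-volumeclaim.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-volumeclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 200Gi
    EOF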
Finally, it is going to ask for a block of 200 GB of storage. Similarly, you can also look at the WordPress volume claim; apart from the name of the claim, nothing is different. So in total, we're asking for 400 GB of storage. Let's apply both of these YAML files and create the persistent volume claims. You might be wondering: if persistent volume claims are a way to claim storage from GCP, who actually provides the storage? Well, if you remember, a couple of demos ago, when we looked at the storage section of our GKE cluster, we found a storage class named standard. The storage class is responsible for provisioning storage to persistent volume claims. Now let's create a secret for our MySQL password. Next up, let's take a look at our deployments and services, starting with the MySQL deployment. It is quite like what we used previously, apart from the fact that this time it is not using an emptyDir as a volume; it is using a persistent volume. In the volumes field, we have the name of the volume, which is mysql-persistent-storage, which means we're asking Kubernetes to create this volume called mysql-persistent-storage, and the next lines determine that this volume is going to be a persistent volume. Just as secrets are wired in with projected volumes, here we're providing a persistentVolumeClaim for this volume, followed by the claimName, which is mysql-volumeclaim. Our mount path is also the same as when we previously deployed WordPress. Let's exit this file and create this deployment. Next up, we have the MySQL service of ClusterIP type. It is exactly like what we have used previously, so we can take a quick look at it and exit this file. Let's quickly create the service as well, and check out our WordPress deployment. Just like the MySQL deployment, here too the only difference is the type of the volume, where we have provided a persistent volume and also mentioned the claim. Let's create this deployment as well. Let's check out our WordPress service, which is also the same as last time, but this time its type being LoadBalancer makes a significant impact. Let's create the service and get a list of services. Till now, everything was going just like how it went with our bootstrapped Kubernetes, but from this step onwards, we can see the power of hosted or managed Kubernetes on GKE. Our LoadBalancer service is working perfectly, and we have an external IP, 35.228.119.91, dedicated to our WordPress application, which means that we don't have to reveal the external IPs of the nodes. And even if we host a few more applications on different ports, they will have their individual IPs, so we will have no IP conflicts whatsoever. Furthermore, when you describe the service wordpress, take a look at the events: GKE is constantly making sure that the load balancer is working properly. Now let's go to a new tab in our web browser and simply put in the external IP, or the load balancer IP, of our application. No colon, no combination of IP and port number, nothing; just a simple IP. And there we go: our WordPress is up and running. 98. Demo: Kubernetes on Microsoft Azure Cloud: We have seen managed or hosted Kubernetes on GCP, which stands for Google Cloud Platform. Now let's move on to Microsoft Azure. Let's open the web browser again and go to this address: azure.microsoft.com. And there we go, we are on the home page of Microsoft Azure Cloud.
Microsoft Azure is a cloud computing service created by Microsoft for building, testing, deploying and managing applications and services through a global network of Microsoft-managed data centers. We can see a bunch of information: worldwide locations which are covered by Azure data centers, big companies and clients which use Microsoft Azure, and a few MSP statistics. Again, just like GCP or any other cloud provider, trying Microsoft Azure is also free. Let's click on the start free button to set up our account. This page is all about what you get by creating a free account. Let's scroll further to get more information. And there we go: it says that we will have 12 months of some of the popular services for free, and 25+ always-free services. Now why this sort of division? Unlike GCP, Azure has divided its free account provision in terms of hourly usage and consumption cost, which means that free usage of some of the resources will be calculated on an hourly-usage basis, whereas free usage of other resources will be calculated in terms of deducted credit. Once we run out of credit, we won't be able to use them for free. And mind well, even though Azure provides many of the free services for 12 months, the credit balance that we get in terms of currency is only for a month, which means that if you don't use those resources within a month, you'll waste the credits. For example, if we scroll further, we can see that some of the basic compute provisions like virtual machines, storage disks, blob storage, database server instances, etcetera are free, but for limited usage: 750 hours of Linux or Windows virtual machines, two SSDs of 64 GB size, 5 GB of blob storage, or a 250 GB database instance. Further, we have the list of services which are always free. For example, the container service, which we will be using soon enough, is an always-free service; but for using the container service or a Kubernetes cluster, you will be deploying VMs, which again will be chargeable if we run out of credits or free hours. Let's scroll back to the top and click on the start free button. Just like Google Cloud Platform needed a Google account, Microsoft Azure needs a Microsoft account. We may not have a Microsoft account, so let's create one. You can use your existing email address or get a new one; we will use an existing one. Click next. Then let's choose an appropriate password and hit next again. You will be asked to provide a verification code, which you will get on your email address, and you need to choose whether you want to receive any promotional emails. At this point of time, we don't need promotional emails, so let's just get done with the verification code and hit next. You may also be asked to type the text from a captcha, just to let them know that you are not a robot. On the next page, we need to provide some personal information, just like we had provided to Google. The first section includes your full name, email address, phone number, etcetera, and some of the details may vary based on where you live. Following that, we have identity verification through a phone and identity verification by a card. Just like GCP, here too, cards which do not allow auto-payments may not work. Also, Azure has another rule, which states that only one account can be created using one credit card. Once you have provided your phone number and card information, next up is signing the agreement. Click on I agree, and let's continue.
And here we are on the Microsoft Azure dashboard. Let's take a tour. First of all, we have the create resource button. Then we have all services which are provided by Azure. Next up, we have favorites. These are global defaults, but as we keep on using them, the favorites may change. We have a friendly search bar, followed by some of the configuration options. Honestly, it may not seem as easy and intuitive as GCP, but it's just another cloud platform; if you use it enough, you will get used to it. Click on the button right after the search bar, which is used to open the Azure Cloud Shell, just like GCP's Cloud Shell. Once we click on it, we get the welcome prompt. We have an option to choose between Bash and PowerShell, and for convenience we'll go with Bash. To use Cloud Shell effectively, we need to mount persistent storage to this Cloud Shell instance. The persistent storage is a part of the free subscription itself, so we will not be charged for it. Let's click on create storage and move further. The next prompt says that our cloud drive has been created and our Cloud Shell is being initialized. There we go: the Cloud Shell setup is successful, but these fonts seem quite dull, so we'll make them a bit more refreshing. Going further, let's create a resource group. A resource group is Azure's way to monitor a bunch of relevant resources under one location. We're naming our resource group cc-aks, and we're setting its location to East US; just like we had regions and zones in GCP, we have locations in Microsoft Azure. And the resource group provisioning is successful. Now we will create a Kubernetes cluster within this resource group itself. We're using the command az aks create, where AKS stands for Azure Kubernetes Service, and we're instructing our Cloud Shell to create an AKS cluster with one node in the cc-aks resource group; we are also enabling additional monitoring on it. Lastly, we're providing the authentication method by generating SSH keys. A lot of things are going on behind the scenes of this command. Azure is provisioning a whole new VM as a node, and it is installing all the requisites like Docker, Kubernetes, etcetera on that virtual machine. So this command might take a significant amount of time to finish, but keep patience, as the result will be sweet. Once the command has finished successfully, we get the cluster configuration, or the output of the command, in JSON format. Let's take a look at some of the known terms. Well, the node count is one, the maximum number of possible pods is 110, and the node pool name is nodepool1. Furthermore, the disk associated with this node is of 30 GB, and these resources are provisioned under the resource group cc-aks in East US. And finally, we also have the cluster name, which is aks-cluster. To access this cluster, run the command az aks get-credentials so that we can import the credentials of the cluster into our Cloud Shell. The command was successful, and we can verify it using kubectl get nodes. As we had requested, this cluster has just one node, and the output is pretty similar to any other kubectl get nodes command, providing information such as the status of the node, its roles, the time since the node has been up and running, and the Kubernetes version which has been installed on it. Let's go even further and run our standard NGINX deployment with a LoadBalancer service, at last.
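For reference, the Cloud Shell commands of this demo look roughly like this; the resource names are as heard in the demo, and the NGINX commands at the end are a sketch, since the narration only calls it "our standard NGINX deployment".

    az group create --name cc-aks --location eastus
    az aks create --resource-group cc-aks --name aks-cluster \
      --node-count 1 --enable-addons monitoring --generate-ssh-keys
    az aks get-credentials --resource-group cc-aks --name aks-cluster
    kubectl get nodes
    kubectl run my-nginx --image=nginx --port=80   # kubectl run created a deployment in this era
    kubectl expose deployment my-nginx --port=80 --type=LoadBalancer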
Our deployment and service have been created successfully, and the load balancer is also working just fine, because our service has an external IP, 137.135.78.74. Let's navigate to this IP address in a separate browser tab, and there we go: our NGINX is up and running. It means that the cluster which we set up works perfectly fine. Just like on GCP, you can try all sorts of objects and experiments with this AKS cluster as well. Let's minimize the dashboard and explore a few more aspects of this cluster. Navigate to resource groups, and we can see a bunch of resource groups already created. Out of these, the one which we created is called cc-aks, located at East US; let's click on it. It has one resource called aks-cluster, which is a managed Kubernetes instance. By clicking on it, we get even further details about the cluster which we have just created. If you remember, we had enabled monitoring on this cluster as well, so let's go to the insights tab. These insights are mostly about resource utilization within the time frame mentioned in the time range. We can make it more concise by adding other filters, and if we scroll further, we can find all sorts of information like node CPU utilization, node memory utilization, node count, etcetera. We also have tabs like nodes, controllers and containers, which provide the respective information. If we go to the nodes tab, we can see that there is only one node running within the cluster, and if we head to the controllers tab, we have a controller starting with my-nginx which, if we look closely, is a ReplicaSet controller created by the my-nginx deployment. Now let's go back to Cloud Shell and delete the resource group which we have created. Use the az group delete command followed by the name of the resource group, and provide yes to give it permission to delete the resources which are currently being used as well. When we're done, we can close the Cloud Shell and sign out from this account. 99. Demo: Extra - Docker UI with Kitematic: Everyone gets tired of command lines and terminal screens at some point in their life. That's where we need GUI applications. Docker provides a feature-rich GUI with its Enterprise Edition, and it is called UCP. But since this course only covers content which is free to access and set up, we will use a third-party GUI for Docker called Kitematic. Kitematic is a well-made, open-source GUI application for Docker, which supports single-host Docker instances at the moment. Without further talk, let's jump straight to its GitHub page, where we can download its binaries. Go to your favorite web browser and navigate to this address. As you can see, Kitematic is available for all popular platforms like Windows and Linux. We will download the binaries for Ubuntu Linux. The current version of Kitematic is 0.17.3, which might have been updated by the time you're watching the course. Once the downloading process is complete, let's head to the downloads directory on our host machine and extract it here. As we go deeper, we will see a tar inside the tar, so let's extract that too. Finally, we have a tar file called data which needs to be extracted as well. I know, too many extractions, right? Inside data we have a directory called bin, which contains the executable called Kitematic.
Simply double-click on it, and the Kitematic GUI will be up and running. Just to give you a brief understanding, Kitematic talks to both the Docker host and Docker Hub. The GUI is pretty simple and interactive. It is easy to guess at this stage that these are the repos on Docker Hub, along with their pulls. But let's not go too fast; this home page has a lot to offer. First of all, we have the login option, which allows us to link our Docker Hub account to Kitematic. Then we have a big search bar for Docker images, which are fetched from Docker Hub. Then we have recommended, or featured, images. On the right side, we have tabs like my repos and my images. My repos shows the repos of your currently linked Docker Hub account; it will show nothing, since we haven't signed in yet. In other words, this is just a plain GUI with no Docker Hub account linked to it yet. My images will show the images available on our local host machine. If you notice, these images from various modules do look quite familiar. On the lower end of the left side, we have a small gear icon, which represents the app's settings. You can customize them the way you want; we will keep them as they are. Then we have a chat tool, which we don't need at this point of time. And finally, we have a link to the Docker CLI; clicking this will open a terminal window for us to use Docker commands. Let's close it. We have a list of images recommended by Docker on our screen, and we can create a container based on any image available here with a single click. Isn't it amazing? Let's check out how we can do it. We will select the hello-world-nginx image, which is a lightweight, customized NGINX image, to demonstrate the features of Kitematic. At the bottom of hello-world-nginx, we have the number of downloads and likes for the image on the left side, and the create button on the right side. Let's click on the create button to create and run a container based on this image, and there you go: Docker is connecting to Docker Hub to download the image hello-world-nginx, because it is not available in our local registry. It might take some time to download the image. After the process completes, our container will be created successfully. You must have noticed the container log screen here; it will show the log outputs of the currently running container. We have zero logs for the hello-world container for now, because it has just been created. You might as well remember the difference between the created and running states of a container: when the container is created, it won't have any logs, but as soon as it gets running, we'll have some logs. The left side of the container log screen shows a list of running and stopped containers. It even includes containers which were not started by Kitematic. We do not have any previously running or stopped containers at this point of time, so the only available container is hello-world-nginx. We also have the volumes section at the bottom right corner of the screen. It is the volume website_files which is mounted on our running container hello-world-nginx. We can enable the volume in order to edit the files stored in it. Let's click on enable volumes and see what happens. We have been directed to the path ~/Documents/Kitematic/hello-world-nginx on our local machine. Kitematic has exposed the container's volume as a directory on our local machine, which allows users to access it easily. As we access the volume website_files, we can see the default index.html. One important thing to notice
here is: as soon as we enabled volumes for editing, the NGINX container stopped running. It was removed and restarted with a new volume flag, to reflect any changes made in the volumes. We have not made any changes in the volume here. Above volumes, on the top right corner, we have the web preview section. It allows us to see the container's result in our web browser. Let's maximize the web preview to have a good look at it. The hello-world-nginx container is running on localhost port 32769. Let's get back to Kitematic and verify the results using the Docker CLI on its terminal. Run docker ps -a to list out all of the running and stopped containers. The result reassures us that the hello-world-nginx container is running successfully on the same port. Let's return to Kitematic and stop the container. As we can see, there are multiple icons available exactly above the container log section; we have stop, restart, exec and docs icons here. As their names suggest, the stop and restart icons are used to stop and restart containers. The exec icon is used to execute a command on a running container. The docs icon will direct us to the Docker documentation. Our container has been stopped. Navigate to the left side of the screen, where all containers are listed. We want to delete this container, so click on the cross icon to remove it. A dialog box will pop up to ask for your confirmation about the deletion process. Press remove, and our container has been deleted, and we're back to the home screen. We can also search for a particular image here; it is similar to the docker search command. Just type the name of the Docker image that you want to search. We'll search for the image of the Docker registry itself, so we'll type registry and press enter. We got all the Docker images which include registry in their names. The first result is the official Docker registry image, with 419 million downloads. Just like hello-world-nginx, we can play around with this and any other images as well. So this was Kitematic, the Docker GUI. You can play around with it further, and you can even link your own Docker Hub account to use it interactively. 100. Demo: Extra - Minikube Series | Installing Minikube: Before we install Minikube on our Linux machine, let's run a standard apt-get update and install some dependencies. We're installing the apt-transport-https package. And if you're wondering why we have applied the flag Acquire::ForceIPv4=true, it is to make sure that the response doesn't get stuck while searching for an IPv6 address, because this system is using an IPv4 address. If you don't have such a conflict, you can skip this flag. Once we're done with the update, let's download and add a GPG, or GNU Privacy Guard, key for Kubernetes using the curl command followed by this link. We got a confirmation with OK. Now, let's add the Kubernetes apt repository to the sources.list files. To verify the addition, let's run apt-get update again. And it was a success, as you can see from the line starting with Get:12, which has fetched packages from the kubernetes-xenial main repository. Now, just like Docker or VirtualBox, it's time to install kubectl, which is also a prerequisite for running Minikube. Run sudo apt-get install kubectl followed by the -y flag. Let's see if the installation was successful by running kubectl version, and yes, the installation was successful. Now, let's download Minikube from its official repository using curl again. Next, let's make Minikube executable using chmod +x, which means change mode, adding the executable bit.
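Gathering the installation into one sketch before the final step, including the move into /usr/local/bin coming up next; the repository and download URLs are the standard ones from the time of recording.

    sudo apt-get update -o Acquire::ForceIPv4=true
    sudo apt-get install -y apt-transport-https
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
      | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubectl
    curl -Lo minikube \
      https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    chmod +x minikube
    sudo mv minikube /usr/local/bin/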
And finally, let's add this executable to the /usr/local/bin directory; optionally, you can remove it from here as well. All right, our Minikube setup is complete. 101. Demo: Extra - Minikube Series | Getting started with Minikube: Now let's start our single-node Minikube Kubernetes cluster using the minikube start command, and provide its virtualization driver, or --vm-driver, as virtualbox. If you remember, we had installed VirtualBox while setting up Docker Swarm, so our machine already has it set up. But in case you have removed it, you can go back to the Docker Swarm setup lecture and check out the installation instructions for VirtualBox. With that said, hit enter. If you take a closer look at the process, it is pretty similar to how we bootstrapped our regular Kubernetes cluster: it is getting IP addresses of VMs, it is moving files to an isolated virtual machine, it is setting up certificates, it is connecting our shell to the cluster, and it is also setting up kubeconfig, which is used to store Kubernetes configurations. Looks like the processes are done. Let's try to run a deployment without being too adventurous: let's simply run our vanilla NGINX server and expose its port 80 as a NodePort service. We're using the latest NGINX image, so let's hit enter. We get our standard warning that kubectl run might get deprecated in the future, but our deployment is created. When we run kubectl get pods, it looks like a single-pod deployment has been created, and our container is still in the creating status. While it is being created, let's describe the pod using kubectl describe. The description looks pretty similar to all the previous pods that we have created, which means that whether you're running a standard bootstrapped Kubernetes cluster on-premise, or you're running Kubernetes on cloud, or you're running Minikube, the kubectl command line and its behavior remain the same. And while we were taking a look at the description, it seems our container has been created and started, which is good. Let's run kubectl get pods again, and there we go: our NGINX pod is up and running. The same goes for our deployment, and there is a gap of six seconds between the deployment being created and the pod being created, which is just fine. While we're at it, we can also take a look at the deployment's description, starting from labels. Everything is similar to a regular Kubernetes cluster, including the rolling update strategy and events. Now, let's expose our deployment nginx-server with service type NodePort, and our service is exposed. Since we have used a NodePort service, we need to know which of our public ports has been mapped to the container's port 80. Let's run kubectl get svc, and we see our services. It seems the container's port 80 is mapped to the host machine's public port 30229. Fair enough, which means that a combination of the host machine's IP and the exposed public port should give us the NGINX welcome page. But mind well, here the host machine doesn't mean this machine; it means the VM on which Minikube is running. And to get its IP, let's run minikube ip. Our IP is 192.168.99.100; let's use it. Open your favorite web browser and enter the IP and port combination. There we go: the VM running Minikube is hosting NGINX on its 30229 port. Great.
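The whole Minikube demo above, condensed into a few commands; note that the NodePort is assigned randomly, so 30229 is simply what this particular run happened to get.

    minikube start --vm-driver=virtualbox
    kubectl run nginx-server --image=nginx:latest --port=80   # creates a single-pod deployment
    kubectl expose deployment nginx-server --type=NodePort
    kubectl get svc nginx-server    # note the randomly assigned NodePort
    minikube ip                     # 192.168.99.100 in this demo; browse to <ip>:<nodeport>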
Now let's go further and take a look at the Kubernetes dashboard using the minikube dashboard command. Our dashboard is opening in our web browser, on localhost port 37339. Here we are: this is the Kubernetes dashboard, or Kubernetes GUI. It looks simple, intuitive and pleasant to work with. Starting from the top left, we have the Kubernetes logo. It is joined by a search bar, which can be used to filter out objects like deployments, pods, etcetera, and on the top right we have the create button, which is used to create Kubernetes objects; but we will get into that later. Take a look at workloads: we have deployments, pods and replica sets running, and it seems like all of them are up and running perfectly. The 100% means that all of the deployments, all of the pods and all of the replica sets are in their desired states. Below that, we have details for all of those, starting with deployments. The details presented here are pretty similar to the output of the kubectl get command, but here we have a GUI representation of it all, and instead of having a column with the state running, we have a green tick mark which indicates the running state. What differs from the output of kubectl get is that we also get a list of labels which are attached to these Kubernetes objects. Below the workloads section, we have discovery and load balancing, which essentially lists out all of the services. We have two services running, the first of which is nginx-server, the NodePort service which we created just a few minutes ago. We have all sorts of details, like the internal endpoints, the cluster IP of the service, the external endpoints, which are not available right now, the age of the service, and labels; below it, we have kubernetes, which is the default service. And at last, we have configuration and storage objects. Since we didn't provision any volumes and didn't use any volume mounts, we just have one projected volume, which is the default token created while setting up the Minikube cluster. On the left pane, starting from the top, we have various constructs of the cluster, such as namespaces, nodes, persistent volumes, roles, storage classes, etcetera. Then we have options to navigate to particular workloads, particular services or ingresses, or particular storage objects, and at the end we have the about and settings tabs. Let's start with namespaces. Just like a regular Kubernetes cluster, this Minikube cluster also has three namespaces, which are kube-public, kube-system and default. Since we did not create any user-defined namespace, these three have been there since the cluster started. Then we have node information. You might be wondering: if Minikube is just a single-node Kubernetes cluster, why do we have node information in the first place? Well, Minikube and the Kubernetes dashboard are different entities. The minikube dashboard command just enables us to use the Kubernetes dashboard, which shows the current state of the single-node cluster; the same dashboard can be used with a bootstrapped kubeadm cluster as well, in which case you will have more than one node. Just like the other workload cards, the node card also has details such as labels, state, resource requests and resource limits. You can navigate to other tabs like persistent volumes, roles, storage classes, etcetera as well, but we will jump straight to something which looks like a drop-down menu, and it says namespaces. It is a drop-down menu, and it is used to switch from one namespace to another.
Currently, we are in the default namespace, but if we switch our namespace, the available objects will also change. We can navigate through different workloads as well. Let's go to deployments, and we get the same output which we got on the home page, but this time the output is not accompanied by replica sets and pods. Let's click on the name of the deployment to see what happens. Well, well, this does look pretty similar. In fact, this looks like the result of the kubectl describe command. Just as when we describe a deployment, we have all sorts of information like name, namespace, labels, annotations, creation time, etcetera. Then we have the replica set which is governed by this deployment, and at the end we have events, just like the output of the describe command. Since we haven't initialized any horizontal pod autoscalers, that field is empty. Similarly, when we go to the pods tab, all we get are pods. Clicking on the pod name will also give the output of the kubectl describe pod command, but let's not be repetitive. Let's see what these four lines are; they indicate actions and logs. Let's click on them, and there we go: those four lines showed the logs of our nginx-server pod. We can download the logs, and we can tune the text size, text color, etcetera. And under the actions tab, we have two options: to either view or edit the YAML file, or to delete the deployment altogether. Let's go back to the overview, or home page. Now let's head back to our terminal and stop this cluster using the minikube stop command. It is important to stop your cluster when you're not using it, or your system might go into an OOM, or out-of-memory, state. And finally, let's delete the cluster using minikube delete, and the cluster is deleted. Simple commands, simple life. 102. Introduction to Serverless Kubernetes: Hello and welcome back to The Container Masterclass. We're back with a significant update, this time with something that has become the center of the conversation around the Kubernetes ecosystem, and that is: Kubernetes is going serverless. To put it simply, serverless means not having to worry about the underlying infrastructure at all. For example, while operating a regular Kubernetes cluster, we have seen that the user interacts with the master and passes on requests through the master. When we use hosted Kubernetes like Google Kubernetes Engine, we, as users, talk to the platform providing the hosted Kubernetes service instead, like Google Cloud. But we do have to manage the cluster: we are very well aware of the cluster configurations, and we also have to keep an eye on the resource utilization to see if we have to scale the cluster for better load handling. In other words, hosted Kubernetes allows us to host and manage the cluster on the provider's resources. But think about this case: all you need is a working desktop browser, you do a few clicks here and there, and boom, your containerized application is live. That is serverless Kubernetes. Yes, behind the scenes, in the backend, the serverless Kubernetes service provider also has lots of Kubernetes clusters deployed, but you don't have to worry about them. This has a few implications. First, you do not know the full details of the cluster that you're operating on. There are exceptions, but we will get into them later. Second, the smallest unit of acquisition is not a bunch of virtual machines anymore.
You are merely given a separate namespace, and it is highly likely that other users are also operating on the cluster your containers are on. But you'll never clash, because of the namespace isolation and RBAC access policies. This makes deploying your applications even faster, more economical, and easier. Google Cloud's Cloud Run is a great example of serverless Kubernetes offerings. In the next lecture, we will get hands-on with Cloud Run. Till then, happy learning, and I hope you have a great day. 103. Activating Cloud Run API on GCP: Hello and welcome back to The Container Masterclass, or as we very informally call it, the CMC. As you may remember from many, many videos of this course, this is the Google Cloud dashboard. Cloud Run is a part of the Google Cloud Platform offerings, so much like Google Compute Engine VMs or GKE's hosted Kubernetes, the way to navigate to it is through the hamburger icon. Before we start using Cloud Run, we need to make sure that we have enabled its API in our GCP project. Go to APIs and services, click on dashboard, and you will find stats about a list of APIs relevant to the products used under your GCP project. It shows that we have had the most requests made to the Compute Engine and logging APIs, which makes sense, because both GCE VMs and GKE clusters are hosted using Compute Engine VMs. To find the Cloud Run API, let's head to the library tab. You can see a bunch of APIs divided by categories of usage. We don't want to keep scrolling for eternity, so let's use one of the best inventions of computer science, the search function. Type Cloud Run, and the first result you see should be our target API. In case you get some different results, you can remember this little icon, which looks like a stylized play or forward button for a music player. Once you land on the Cloud Run API page, you can notice a lot of details, like when the API was last updated, its one-liner description, its overview, and even links to its documentation and some quickstart tutorials. Let's take a relaxing breath for a moment and look at the overview of Cloud Run. This is important, because this is how Google describes and wants us to perceive Cloud Run as a product. It is a managed compute platform; of course, because as we mentioned, it does run on Kubernetes clusters in the backend. It enables you to run stateless application containers invocable via HTTP APIs. HTTP APIs are fine, because everything that we have done so far in this course has used HTTP requests in one way or another. But the most crucial detail here is stateless: Cloud Run, as of recording this video in November 2020, only allows stateless applications. So no StatefulSets, but deployments are stateless, so we should be able to play with them. The rest talks about how it abstracts away the infrastructure management; we have already seen that in the last lecture. So let's enable the API. The API is ready. It is showing some previous traffic because I had used it for testing earlier. Now, let's head back to Cloud Run through the hamburger icon. The list of services is empty, since this would be your first time using it. The API is ready, and we can start creating our first Cloud Run service in the next lecture. Till then, happy learning, and I hope you have a great day. 104. Your 1st Service on Cloud Run: Hello and welcome back to the CMC. In this lecture, we will create our first Cloud Run service. We're on the Google Cloud Run page, and we enabled its API in the previous lecture.
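Incidentally, the console flow we just finished has a one-line CLI equivalent, should you prefer Cloud Shell:

    gcloud services enable run.googleapis.com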
Let's click on the create service button, and you'll be guided to the service settings page. Again, Google is being elaborate with the descriptions, explaining what a service is. This is helpful and frustrating at the same time, because we have already seen Docker Swarm and Kubernetes objects called services, leading to different interpretations. Regardless, for Cloud Run, a service is like a mixed bag of Kubernetes deployments and Kubernetes services: it is an endpoint, as well as an orchestration unit of stateless workloads. Also, it is important to note that a service created by Cloud Run scales automatically. This removes another burden from our fragile little DevOps shoulders. Moving on, we have the deployment platform, which is set to Cloud Run by default. We have already seen what Cloud Run is. The other option is called Cloud Run for Anthos. Anthos is for users who want to host their containers on their own clusters, but still want serverless features for the end developers. Anthos allows you to set up Cloud Run on your GKE cluster, on Google's servers, or on your own servers. We will stick to Cloud Run and pick a region. Next up, we'll provide a name to our service; let's call it hello-cloud-run. As we hit next, we are led to a configuration options page. Google says that services can have multiple revisions, but the configs of each revision are unchangeable. This means that whenever you make changes to any of the service's configurations, like the container image or port exposure, it will be served as a new revision of your service, unlike previously, where we could just kubectl apply any change and deployments would get modified. First of all, this new approach provides great version control and revision accessibility. On top of that, since every change is a new revision of the service, rolling out blue-green or canary deployments becomes even more intuitive, since all you have to do is manage traffic between two revisions of a service. Then we get to choose between using a container image from our project's Google Container Registry or from a source repo like GitHub. The second option is useful when we want to set up a continuous deployment pipeline. Here, we want to stick to a single revision; click on select, and choose a demo container image called hello. This is one of Google's built-in images, provided to every GCP project for enthusiasts to try out Cloud Run. Click next. Finally, just like with Compute Engine VMs or GKE clusters, we get to choose if we want to allow external connections without authentication. Set it to yes, and let's hit create. We can see the status of the service being created: deploying the revision, setting up the access policies, routing traffic. Everything is done in just a few clicks. The one and only revision is called hello-cloud-run-00001, followed by a short suffix. We also get a bunch of information about the container, like the image URL, the exposed port number, which is 8080 in this case, the entry command, which is inherited from the Docker image's ENTRYPOINT instruction, and some resource allocation stats. Most importantly, right beside the name of our service, we can see the region that we had selected and the link where the service is being exposed. You can simply click on this link, and here we go: a beautiful little landing page by Google. You can do a lot more with Cloud Run, as you might have already guessed, but that is a conversation for another day. You can go back to the services page and see your service listed with the settings you had applied.
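One quick addendum before the course wraps up: the point-and-click deployment above can also be reproduced from the CLI. This is a sketch; the region is a placeholder, and gcr.io/cloudrun/hello is Google's public demo image corresponding to the hello container we selected.

    gcloud run deploy hello-cloud-run \
      --image gcr.io/cloudrun/hello \
      --region <REGION> \
      --platform managed \
      --allow-unauthenticated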
Of course, you can select the service and delete it using the button on top when you don't need it. That would be it for this quick update. I hope you liked this sweet little introduction to this powerful tool. We're not done with serverless or Cloud Run yet; we'll be back with more bonuses and updates in the future. Till then, as always, happy learning, and I hope you have an excellent day. 105. Conclusion: First of all, a huge thanks to all of you wonderful students who enrolled in this course, believed in us, and stuck with it till the end. We really hope that we served you right, and we hope that the course met your expectations. If you liked this course, please rate us five stars. If you think that the course wasn't up to the mark or was missing something, feel free to let us know in the Q&A section, via a message, or even in the comments; we will definitely get back to you, and we'll try to address your suggestions as best as possible. And if that satisfies you, kindly rate us better. Your ratings will be a huge help, since they allow other students to discover this course and be a part of this journey. With that said, see you with updates. Happy learning.