Kubernetes & AKS 101 | Andrei Kamenev | Skillshare



Lessons in This Class

12 Lessons (41m)
    • 1. Introduction

      1:31
    • 2. Docker Recap

      3:53
    • 3. DEMO: Docker Build, Azure Container Registry, Azure Container Instances

      6:05
    • 4. Kubernetes & AKS Architecture

      3:54
    • 5. DEMO: Create an AKS Cluster

      2:12
    • 6. Kubernetes Concepts

      8:09
    • 7. DEMO: Kubernetes Concepts

      5:21
    • 8. Namespaces, Resource Quotas & DNS

      3:04
    • 9. Taints, Tolerations & Affinity

      1:56
    • 10. Kubernetes RBAC

      1:13
    • 11. Storage Concepts & Options

      2:55
    • 12. Conclusion

      0:27


17

Students

--

Projects

About This Class


In this class you will learn basic Kubernetes concepts and Azure Kubernetes Service (AKS) specifics:

  • Recap on Docker images, Docker Containers & Azure Containers Ecosystem (Azure Container Registry & Azure Container Instances)
  • Kubernetes and AKS Architecture
  • Kubernetes Concepts (Nodes, Pods, Deployments, Services, etc)
  • Ingress and Ingress Controllers
  • Namespaces, Resource Quotas & DNS
  • Taints, Tolerations, Affinity and Anti-affinity rules
  • Kubernetes RBAC
  • Storage Concepts & Options

Meet Your Teacher


Andrei Kamenev

Cloud Enthusiast




Transcripts

1. Introduction: Hello and welcome to the Kubernetes and AKS 101 course. My name is Andrei and I've been working as a Cloud Solution Architect for the last five years, helping different companies adopt Kubernetes, as well as Microsoft Azure in general. Throughout this course, we will build a simple app. We will package it in a container. We will push this container image to the Azure Container Registry. After that, we will build an AKS cluster on Azure. We will deploy our application there. And later on, we will add HTTP application routing using Ingress objects in Kubernetes. So here's the plan. First, we will refresh your knowledge on what Docker containers and Docker images are. We will take a look at what services on Azure you can use for storing and running simple containers. Then we will talk about Kubernetes and Azure Kubernetes Service architecture in general. After that, we will dig into Kubernetes concepts such as nodes, pods, deployments, services, and others. We will continue with Ingress and Ingress controllers. We will talk about namespaces, how cross-namespace communication works, and resource quotas when it comes to namespaces. We will talk about ways to control your deployments with taints, tolerations, affinity and anti-affinity rules. After that, we will talk a little bit about Kubernetes RBAC. And we will finish our course with storage concepts in Kubernetes and the different storage options you have when it comes to Azure Kubernetes Service. So let's get started.

2. Docker Recap: All right, a quick recap on Docker images and Docker containers. Before we do that, I want to point out that this is not a Docker fundamentals course. If you feel that you need more understanding of Docker concepts, check out other courses on the platform, because here I will do just a basic refresher. So what is a container? Everyone really likes to compare it to VMs, and we will follow the same example.
So basically the main difference is that in VMs, you virtualize your hardware, like CPU, RAM, and storage, whereas in containers, you're getting a higher-level abstraction and you virtualize operating system resources such as the process tree, file system, and network interfaces. The container runtime makes sure that containers are isolated from each other with concepts called namespaces and cgroups. With a container comes the concept of a container image. Every image is a read-only entity which contains your application files, operating system objects and an application manifest. An image is something that you build first, during build time, and then you run it as a container during runtime. So from one image, you can run multiple containers. And that's really beautiful, because under the hood they consume the very same image that is stored somewhere on the file system of your host, and that makes it super efficient. This is possible because of the layered file system that is used by Docker images. Once you've built your container image, you need to distribute it somehow. For that, you use container registries, to which you push your images; then everyone else who has access to this registry can download and use them. This container registry can be in your datacenter or in the public cloud. For that purpose, on Azure there's a service called Azure Container Registry, or ACR, which is basically a managed version of a container registry that Microsoft manages for you. It is compatible with the open source image standards, there is replication across multiple datacenters, and there are a couple of additional features such as image signing, integration with Azure Active Directory for authentication, and so on. What else can you use for containers on Azure? There's a service called Azure Container Instances, or ACI, and this is basically Azure's container-as-a-service offering.
Typically, if you want to run a container, you either need a virtual machine with some container engine installed, such as Docker, or you need a full-fledged Kubernetes cluster. ACI is some sort of a middle ground between those two. All you need is to specify the container image, ports, and the amount of CPU and RAM needed, and you're good to go. In a couple of seconds you'll get a running container, with or without a public IP address; it's up to you to configure. ACI is really convenient if you want to test something quickly, or if you have a really simple app that you want to run in a single or even multiple containers, and you basically don't want to deal with the whole complexity of orchestrators such as Kubernetes. Another interesting scenario is bursty workloads. For example, you need to calculate something every night and you want hundreds of containers to do this work for you. You can put it in automation or you can build an additional pipeline for it. So in the next video, I will show you how to build a container image, push it to Azure Container Registry, and run it as a container instance.

3. DEMO: Docker Build, Azure Container Registry, Azure Container Instances: Before we dive into the demo, I want to show you the repo that we will be using throughout this course. A link to this repo is available in the course resources section. Basically, this is a repo where you can find all the step-by-step guides to follow along. This is the one that we will be using in this video. Here you can find the prerequisites as well as the detailed commands to use. So let's jump in, build our image, and deploy it to Azure. But before we do that, let's check the source code of our app. Our application is written in Go, and as you can see, it is a pretty simple application that returns back an HTML page, and it listens on port 8080. We also have a Dockerfile here that we will be using for building our image.
Since this is a Go application, we'll be using the golang base image for our container. But first, let's check how it works locally. To do this, let's use the go run command. We will of course allow this. And let's go to our browser. Here, let's go to localhost, port 8080. And here it is, our app is running locally. Now let's containerize it. First of all, we need to download the base image for our app, which in our case is the golang image. Next, let's build our image with the docker build command. We need to specify a name for our image as well as a tag for version 1. Once it's built, we can run it locally with the docker run command. All right, let's go back to our browser. If we refresh the page, we can see that the app is still running. But now it is running inside a Docker container on my laptop. Now we need to push our image to a registry. We don't have any registries yet, so let's create one. I already logged into my Azure subscription, so I will go ahead and create a resource group where we will put our registry. Next, let's create the registry itself. I'm specifying the registry name here, as well as the resource group name. We will use a Basic SKU because it fits our purposes, and we will enable an admin user to make our lives easier for demo purposes. It looks like it was created. Let's check it on the Azure portal. I logged into the Azure portal. Let's go to Resource groups. Here's our resource group, and here we can see our Container Registry being created. If we click on it, we can see that we have basic information here, such as usage and the login server that we will use later on. Now, let's push our image to the registry. First of all, we need to re-tag our image so that it corresponds with our registry name. Next, we need to log in to our ACR instance. And finally, we can push the image to Azure. Let's go back to the Azure portal to check that our image is there. If we click on Repositories here, hopefully we will see our image.
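The build-and-push flow narrated in this demo can be sketched roughly as follows. The image, registry, and resource group names below are hypothetical placeholders, not the exact ones used in the recording:

```shell
# Build the image locally and smoke-test it (names are illustrative)
docker build -t welcome-app:v1 .
docker run -d -p 8080:8080 welcome-app:v1

# Create a resource group and a Basic-SKU registry with the admin user enabled
az group create --name demo-rg --location westeurope
az acr create --resource-group demo-rg --name demoacr123 --sku Basic --admin-enabled true

# Re-tag the image to match the registry login server, log in, and push
docker tag welcome-app:v1 demoacr123.azurecr.io/welcome-app:v1
az acr login --name demoacr123
docker push demoacr123.azurecr.io/welcome-app:v1
```

ACR registry names must be globally unique, so substitute your own.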
And if we click on it, we can see the current version of our application. Now let's run our application on Azure Container Instances. To do so, we will use the az container create command. As you can see, I specified the image name, the registry username, and the registry password that you can find in the portal. I also specified the port and the friendly DNS name here. All right, let's go back to our browser to check if it works. Let's go to our resource group and hit refresh. We can see that there is a container instance created. Let's click on it. Here we can grab the FQDN and open it in a new tab. Remember to specify port 8080. Great, now our application is running on Azure and we are ready to move to the next lesson.

4. Kubernetes & AKS Architecture: Alright, Kubernetes. As you might already know, Kubernetes is an orchestrator, and it orchestrates containers. There are three main functions of Kubernetes. First is scheduling: making sure that the workload lands properly on nodes that have enough resources to handle it. Second is load balancing, which is pretty self-explanatory. And third is service discovery: making sure that services are known by their names, so even when a new replica of a particular service is added, you, as a consumer of this service, don't need to worry about its IP address. Kubernetes has a bunch of VMs or bare-metal servers called nodes, which are clustered together. There are master nodes and worker nodes. If you want to deploy something to Kubernetes, you interact with the Kubernetes API using the kubectl tool, or you can communicate directly using the REST API. When you communicate with the Kubernetes API, you basically pass the state that has to be enforced by Kubernetes. And this is a really important difference: you don't tell Kubernetes to do something. Instead, you declare the state in the form of a YAML or JSON config, and it is Kubernetes' job to make sure that this state is enforced in the cluster.
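Looking back at the ACI demo for a moment, the az container create step might look roughly like the sketch below. The registry name, password placeholder, and DNS label are illustrative assumptions:

```shell
# Run the pushed image as a container instance with a public DNS label
az container create \
  --resource-group demo-rg \
  --name welcome-app \
  --image demoacr123.azurecr.io/welcome-app:v1 \
  --registry-username demoacr123 \
  --registry-password "<admin-password-from-portal>" \
  --ports 8080 \
  --dns-name-label welcome-app-demo
```

The app is then reachable on the instance's FQDN, remembering to append port 8080.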
The configuration that you pass to Kubernetes is stored in a database called etcd, which is basically a pretty simple key-value store that is really the heart of the cluster, and this is the only component in Kubernetes that is stateful. Typically, in production environments, etcd is replicated; it can be located on the master nodes, or it can be externalized. Another component of a master node is the scheduler. The scheduler looks for newly created pods and assigns them to specific nodes to run on. Next is the controller manager, which is responsible for various controllers, such as the replica controller that makes sure that you have the desired number of replicas, the node controller that is responsible for monitoring node state, and so on. There is one component that is not listed here, which is the cloud controller manager. It is responsible for interacting with external resources when you run your cluster in the public cloud. For example, for AKS, this controller is responsible for provisioning an Azure load balancer, Azure disks for storage, and so on. On worker nodes, we have two components. First is the kubelet. It is an agent that runs on every worker node and implements the Pod specification. So basically, the kubelet runs containers and makes sure that those containers are running and healthy. The second component of worker nodes is kube-proxy. This is a network proxy that runs on every worker node and makes sure that your pods are reachable. It also applies network rules according to your configuration. In AKS, Microsoft tries to hide the complexity of master nodes behind what is called a hosted control plane, or HCP. When you deploy AKS, you're getting a Kubernetes API to interact with, and the worker nodes that run your application; you don't have to worry about masters at all. The communication between the HCP and the worker nodes is done through an SSH tunnel, and you don't see the master nodes anywhere in your subscription. Creating an AKS cluster is very simple.
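For example, from the Azure CLI, cluster creation can be sketched like this (the resource group and cluster names are illustrative placeholders):

```shell
# Create a resource group and a three-node AKS cluster (names are illustrative)
az group create --name aks-demo-rg --location westeurope
az aks create --resource-group aks-demo-rg --name aks-demo --node-count 3 --generate-ssh-keys

# Install kubectl via the Azure CLI, fetch credentials, and verify the nodes
az aks install-cli
az aks get-credentials --resource-group aks-demo-rg --name aks-demo
kubectl get nodes
```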
You can do it in every possible way: through the Azure portal, Azure CLI, REST API, ARM templates, Terraform, you name it. Once you create a cluster, you can install kubectl, get credentials, and start interacting with the cluster. AKS also simplifies day-two operations for you. Upgrade and scaling can be done through one CLI command, one click on the Azure portal, or by simply reapplying your infrastructure-as-code configuration. Now, let's jump into the CLI and create an AKS cluster.

5. DEMO: Create an AKS Cluster: Let's create an AKS cluster. First, we need to create the resource group. After that, we will use the az aks create command to deploy a cluster. After AKS is deployed, we can install kubectl directly from the Azure CLI and grab our credentials. To make sure that everything works correctly, let's list our worker nodes. On the portal side, we can check our resource groups, and under this AKS resource group, we can see our AKS cluster. If we click on it, we can see the basic info about our cluster, such as version, API server address, and network plugin. I won't cover the networking topic in this course. However, you should know that today AKS supports two different network plugins, which are kubenet, that we have here, and Azure CNI. AKS also creates a separate managed resource group where we can find the underlying infrastructure resources such as the virtual network, load balancer, IP address, and so on. As I mentioned previously, we don't see any masters here, because the master nodes are hosted on the Microsoft side. If we click on the virtual machine scale set and go to instances, we'll see that this is indeed a three-node cluster. All right, we have our cluster up and running. In the next video we will talk about Kubernetes objects.

6. Kubernetes Concepts: Now we're going to talk about Kubernetes concepts. There are definitely more of them than we will cover here.
We will stick to these because they are the most important ones. The first one is a node. Worker nodes are usually referred to as just nodes for simplicity. A node is a VM that is able to run your pods. Nodes can have a lot of different conditions, such as Ready, and that is something that you want to see in your cluster: it means that this node is ready to accept new workloads. Another condition is NotReady. When your node is NotReady, that can be because of memory pressure, disk pressure, network problems, and so on. You can get all of this info when you list your nodes or when you use the kubectl describe command, and this is something very useful when you need to troubleshoot your cluster. The next concept is a pod. A pod is a group of one or more tightly coupled containers, and this is the smallest atomic unit of operation in Kubernetes. If you want to run something in your cluster, you run it as a pod by specifying a configuration file. Here's a simple example of an nginx container. We specify that we want one pod, give it a name, and specify that there should be one container created, and it has to be from the nginx image. In most cases you will run one container per pod. But there might be cases where you desperately need something to run side by side, and that's when you run multiple containers in a pod. If you're not sure how many containers you need to run in a pod, most likely you just need to run one. Pods can have multiple states, such as Running, meaning that everything's okay, and this is typically something that you want to see in your cluster. The status can be Pending, which means that the pod is accepted by the Kubernetes scheduler but not created yet. This can be for several reasons; for example, the container image is still downloading, or there is not enough CPU or memory left in the cluster. To check what's going on with a pod, you can use the kubectl describe pod command.
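A minimal pod spec along the lines of the nginx example described above might look like this (the pod and container names are hypothetical placeholders):

```yaml
# A single-container pod running the stock nginx image
apiVersion: v1
kind: Pod
metadata:
  name: simple-pod
spec:
  containers:
    - name: simple-pod
      image: nginx
```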
The status can also be Succeeded, which means that your container successfully finished its job, which you typically see if you run some jobs in your cluster. And the Failed state is also related to running jobs. Next up is a label. Kubernetes is all about labels. Labels primarily serve two purposes. First, a label is a way to describe a workload. For example: this app is in a production environment, or here's the team that is responsible for this app, and so on. The second purpose of a label is to perform actions based on them. For example, you deploy an app with the label app: nginx, and then, when you want to expose this application, you basically tell Kubernetes to create a service that will send the traffic to all pods that have this label. Next is a replica set. This is a concept that is used to make sure that you have a particular number of pods running. You don't typically create replica sets manually, because they are usually created automatically when you create a deployment object. But it is important to know that this concept exists, especially when it comes to troubleshooting. A deployment is essentially a description of your application. You give it a name, you specify the number of replicas, and you specify a pod template for those replicas. You also set the port that your app listens on. Under the hood, as I mentioned previously, the deployment will create a replica set to handle the replicas of your application. Once you deploy your app, you need to expose it. For that, in Kubernetes you use a concept called a service. In the service, you specify a name, you specify a selector, which is based on the labels of your deployment, and then you specify a protocol and ports. In this example, our app is running on port 9376 in the container, and the service will accept connections on port 80 and forward them to the target port of our app. Services in Kubernetes have different types.
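Pulled together, a deployment and a matching service along the lines of the example above might look as follows. The names, label values, and replica count are illustrative; the ports 80 and 9376 come from the lesson's example:

```yaml
# Deployment: replicas of an app, selected by the app label
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx
          ports:
            - containerPort: 9376
---
# Service: accepts traffic on port 80 and forwards it to the pods on 9376
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```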
The first is LoadBalancer, used when you want to expose your app externally from a cluster perspective. ClusterIP is used for cluster-internal communication. And there is also NodePort, which maps your service to a particular port on the nodes; it is not used widely and has some niche use cases, such as gaming, where low latency is super important. Now let's talk about service types in more detail. So first, ClusterIP. Let's say you have an app that is used by other apps in your cluster, for example a backend or an API application. In that case, you use ClusterIP and you're good to go. If you list the services in your cluster, you will see that there is a service with no external IP. Next is LoadBalancer. As I said, a load balancer is used to expose the service externally from a cluster perspective. And this is important: it doesn't have to be a public IP address. When you create a service of type LoadBalancer, you can either use an internal load balancer for your users from an internal or corporate network, or you can use an external load balancer for public access from the internet. What does it look like in AKS? When you list your services, you will see those IPs in the external IP section, no matter if they are private or public. So here you can see that we have this backend service, which is using ClusterIP. We have a frontend for our internal users of type LoadBalancer, which uses a private IP address. And then we have our frontend for our external users, also of type LoadBalancer, which uses a public IP address. When you open the Azure portal, you will see that there are two different load balancers, internal and external, and they are used for the respective service objects in AKS. If you want to use an external or public load balancer, you just need to configure a service of type LoadBalancer, and that's it. Public is the default behavior for AKS.
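To jump ahead slightly: in AKS, the choice between a public and an internal load balancer is made in the service manifest. A sketch, assuming the standard Azure internal-load-balancer annotation and an illustrative service name:

```yaml
# A LoadBalancer service forced onto an internal Azure load balancer
apiVersion: v1
kind: Service
metadata:
  name: internal-frontend
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
```

Omitting the annotation gives the default behavior: a public IP on the external load balancer.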
So behind the scenes, Azure will create an additional public IP address for you and assign it to a public load balancer. If you want to deploy an internal load balancer, then you need to explicitly specify it in a service annotation. When you deploy this internal service, Azure will create an internal load balancer for you and assign a private IP address to it. So this is how you do Layer 4 routing in AKS. But what if you want to do Layer 7, or HTTP, application routing? To do this in Kubernetes, there is a concept called Ingress and Ingress controllers. Ingress is a Kubernetes API that manages external access to your services. The difference between an Ingress and a service is that Ingress works on Layer 7 instead of Layer 4 when it comes to load balancing. This allows you to have one public IP address for the Ingress, and then you can route traffic to different applications based on HTTP headers. This is also convenient from a security perspective, because you use the Ingress as a single entry point to the cluster and you terminate SSL there. An Ingress controller, at the same time, is just a pod that implements the Ingress logic. There are tons of different Ingress controller options on the market, and NGINX is the most popular one. There are also others, like Ambassador, Traefik, Istio, and more. If you don't know which one to use, start with NGINX, and later on, if you find that you need some specific functions that are missing in NGINX, you can look at other options. So here's an example of an Ingress definition. Here you can see that there is a TLS spec and the secret reference where I store my certificate. There is another way to do this; for example, you can configure the Ingress to work with Let's Encrypt to make your life easier. Next, we have rules for the domain name contoso.com. So basically, for contoso.com/a we will route the traffic to a service A, and for contoso.com/b we will route the traffic to the service B.
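An Ingress definition along those lines might look like the sketch below. The service and secret names are illustrative; contoso.com and the /a and /b paths come from the example in the lesson:

```yaml
# Layer-7 routing: TLS termination plus path-based rules for contoso.com
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: contoso-ingress
spec:
  tls:
    - hosts:
        - contoso.com
      secretName: contoso-tls-cert
  rules:
    - host: contoso.com
      http:
        paths:
          - path: /a
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
          - path: /b
            pathType: Prefix
            backend:
              service:
                name: service-b
                port:
                  number: 80
```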
Additionally, we can also configure rules for subdomains, such as c.contoso.com, which is configurable in the same Ingress configuration file, but is not included here. A point to note here is that the Ingress doesn't have to be public. Just like a load balancer, it can be internal and serve your internal users. Now let's go to a terminal and deploy our application to AKS.

7. DEMO: Kubernetes Concepts: Let's start with a simple pod deployment. So here's the pod spec which we'll be using. Nothing fancy here, just a simple container with an nginx image. To deploy this spec, we use the kubectl apply command followed by the path to our spec YAML file. Once it's created, we can check if it's running with the kubectl get pods command. Now let's go ahead and deploy something more complicated: our welcome app that we containerized previously. First, we need to make sure that our ACR, or Azure Container Registry, is accessible from the AKS cluster. Typically, you create a secret that stores the username and password, but in AKS there is a simpler way. You can attach your ACR to an AKS cluster, which I will do with the az aks update command followed by the attach-acr parameter. Under the hood, that creates a special managed identity that our cluster will use to download images from ACR. To deploy our app, we need to use two Kubernetes objects: a deployment and a service. Here's a deployment YAML file where we specify labels for our deployment, and we specify that we want to have two replicas of our application. We configure a selector based on labels, and towards the bottom of the file, we specify the container image, the container port, and how much resources our app consumes. This resources section is important to make sure that the scheduler knows where to put our application based on the resources.
If you don't specify resources, you can end up in a situation where one application is misbehaving and consuming a lot of resources, so that your cluster runs out of memory. Make sure that you always configure resources for your deployments, as this is an absolute must and a best practice. To expose our app, we also have a service definition. In our case, we specify names and labels, and we also want to make sure that our app is available on port 80 and port 8080. So that's where we configure that everything that comes to port 80 on the service will be sent to our pods on target port 8080. Now let's go ahead and deploy our app. We will go ahead and apply our YAML manifests. Alright, now we can check our deployment, and as we can see here, we have two out of two replicas in a ready state. We can also check our pods to make sure that they're running. And now we can open our public IP address to check our welcome app. Great, now we have our app running in AKS, and it is available for our users. Now, let's say we realize that we need to scale our application and add more replicas. To do so, we can use the kubectl scale command, where you specify your deployment name and the desired number of replicas. If we list the pods, we can see that now we have four copies of our app in the cluster. All right, now let's make our application fancier and add an Ingress in front of it. The easiest way to install an Ingress controller is through Helm. I will not be covering the specifics of Helm in this course. All you need to know is that Helm is another way to deploy applications, with a built-in templating mechanism. For now, you can go to the Helm website for installation instructions. Once you have Helm installed, we need to add the repository for NGINX, and after that we can go ahead and install it. Looks like it was installed. So now we can list our pods to check if it's there.
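The steps narrated in this demo roughly correspond to the following commands, assuming the community ingress-nginx Helm chart; the cluster, registry, manifest, and deployment names are illustrative placeholders:

```shell
# Attach the registry and deploy the app (names are illustrative)
az aks update --resource-group aks-demo-rg --name aks-demo --attach-acr demoacr123
kubectl apply -f deployment.yaml -f service.yaml

# Scale out to four replicas
kubectl scale deployment welcome-app --replicas=4

# Install the NGINX ingress controller from its community Helm chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx

# Verify the controller pod and grab its public IP from the service list
kubectl get pods
kubectl get services
```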
And we can also list our services to grab the public IP address that is used by this Ingress controller, because we will need it in our Ingress definition. Here's our Ingress definition. Again, we specify the object kind and we give it a name. And here, under the host section, we place our IP address followed by nip.io. You can reasonably ask me: what is this nip.io? It is a service that can resolve this particular DNS name to a public IP address. This is good for testing and for demo purposes. However, if you deploy an Ingress in a production environment, of course, you will need to use a full-fledged DNS record associated with a public IP address. But let's use nip.io for now and deploy our Ingress. As usual, all we need to do is apply it. After that, we can list our Ingress objects to grab the DNS name and check if it works in our browser. Awesome. Now we have our app running and served by NGINX. However, right now it is not really secure, because we are using HTTP. The next step would be to enable HTTPS, and this is something that I will leave to you. Go to the GitHub repo that I mentioned in the beginning of the course, where you can find detailed instructions on how to enable SSL for the Ingress.

8. Namespaces, Resource Quotas & DNS: Before we dive into namespaces, let's talk about two common isolation patterns in Kubernetes. Imagine you have multiple teams and you want to have multiple environments. What do you do? The first option is physical isolation: you have different clusters for different purposes and teams. You have a dev cluster, a staging cluster, and a production cluster for different teams or products. The second option in Kubernetes is logical isolation, where you have multiple environments in one cluster. In this example, you have a dev and stage cluster for multiple teams and a separate production cluster. So what approach should you choose? I'd personally recommend choosing physical isolation, because it's way easier to set up.
Whereas with logical isolation, there is a lot to configure and there are a lot of ways to do something wrong. Moreover, if you use AKS, you don't have to pay for master nodes, so having additional clusters won't really cost you more than having one large cluster. Of course, there are certain scenarios where you want to have logical isolation, but if you're not sure what to choose, choose physical isolation. Now, logical isolation is done with namespaces. Namespaces are just a logical boundary where you deploy your applications. It is important to note that namespaces by themselves do not isolate anything; they are just a logical boundary. If you want to isolate your workloads by means of namespaces, you have to configure additional things, such as network policies for network isolation, proper RBAC rules for access control, and proper quotas so that one namespace cannot eat up all the resources, and so on. Just like you can specify resource requests and limits for pods, which is a best practice, you can also specify quotas on a namespace level. Let's say you have multiple teams working on the same cluster, and you want to make sure that every team has a specific resource limit on their deployments. In that case, you can specify a resource quota on a namespace level. When a resource quota is enabled, users must specify resource limits on their deployments, which kind of enforces the best practice. When it comes to namespaces, there are specific DNS patterns that Kubernetes follows. In every cluster, you most likely will have an internal DNS, represented by the CoreDNS service. Every service and pod in the cluster gets its own internal FQDN. For pods, it is pod-ip.namespace.pod.cluster.local, and for services, it is service-name.namespace.svc.cluster.local. You can also reach out to a service by its short name.
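A namespace-level quota as described above might be sketched like this; the namespace name and the exact limits are illustrative:

```yaml
# Cap the aggregate resources a single team's namespace may consume
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```

With this quota in place, deployments in the namespace that omit resource requests and limits are rejected, which is the enforcement effect mentioned above.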
Short names work when you request a service from within the same namespace. For internal communication between microservices, it's a best practice to use FQDNs instead of IPs, because a service or pod IP can change, but the FQDN remains the same as long as you keep the configuration stable. Those DNS names are internal to Kubernetes, which means that if you want to expose a service externally, as you already know, you need to use a load balancer or an Ingress.

9. Taints, Tolerations & Affinity: Now let's talk a little bit about advanced scheduling in Kubernetes: taints and tolerations. The idea is to specify a certain condition under which a specific workload can or cannot be scheduled on a specific node. Basically, it puts something similar to a label on a node, saying, for example, that it is gpu: NoSchedule, and nothing will be scheduled on this node until we specify that a pod actually tolerates this taint. There are three effects. The first is NoSchedule, which means that no new pods will be scheduled on this node; however, if there is something already running there, it will continue running. The second one is PreferNoSchedule, which is the same as the first one, but it is a soft limit, which means that Kubernetes will prefer not to schedule on this node, but if there are no other nodes left, it will schedule the workload anyway. And the third one is NoExecute, which will remove all of the running pods that do not tolerate this specific taint. Another way to do some advanced scheduling is to place a label on a node and then use the nodeSelector property in a pod configuration to make sure that this pod lands on a particular node. In this scenario, there are no restrictions on other pods being scheduled on this node as well. So usually taints, tolerations, and a node selector are used together to make sure that a specific pod, and only this pod, lands on a particular node. Now, node affinity is pretty similar.
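The taint-plus-toleration-plus-nodeSelector combination described above might be sketched as follows; the gpu key/value, node name, and pod name are illustrative:

```yaml
# Assumes the node was tainted and labelled beforehand, e.g.:
#   kubectl taint nodes node1 gpu=true:NoSchedule
#   kubectl label nodes node1 gpu=true
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  nodeSelector:
    gpu: "true"             # land only on the labelled node
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"  # tolerate the taint that keeps other pods away
  containers:
    - name: gpu-workload
      image: nginx
```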
With affinity, you can additionally specify whether the requirement is hard or soft, which gives you a little bit more flexibility. There are also inter-pod affinity and anti-affinity: you can specify that you want a specific pod scheduled on the same node as another pod, or the opposite, that you don't want a specific pod to run alongside another. Let's say you have a front-end pod and a back-end pod; you may want to schedule them on the same node to decrease latency. Another example: if you run a multi-tenant solution and you want to avoid the pods of customer A running on the same node as those of customer B, you can use anti-affinity rules to enforce this behavior.

10. Kubernetes RBAC: Kubernetes has RBAC built in. There are three main objects when it comes to RBAC: roles, role bindings, and users or service accounts. Roles and role bindings can be cluster-scoped or namespace-scoped. The idea is that you describe a role, and then, with a role binding object, you assign this role to a particular user, group, or service account. It is important to note here that Kubernetes does not serve as an identity provider. You can only create service accounts in Kubernetes; if you need to give specific permissions to users or a group of users, you have to rely on an external identity provider. In AKS, this provider can be Azure Active Directory, which you can enable in your cluster by adding a special parameter at creation time. On the slide, this is how you define roles and role bindings: on the right-hand side, we configure a role for a pod reader, giving this role permissions to list, watch, and get pods. Once we have this role, we can assign it to a user with a role binding object. As usual in Kubernetes, you just apply these files and the objects will be created for you.

11. Storage Concepts & Options: Storage. As you know, Docker containers are supposed to be stateless.
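Before moving on to storage: a pod-reader Role and its RoleBinding, like the ones shown on the slide in the RBAC lesson, would look roughly as follows (the namespace and user name are illustrative):

```yaml
# Namespaced Role granting read-only access to pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]          # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# RoleBinding assigning that role to a user from the
# external identity provider (e.g. Azure AD on AKS).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane@example.com   # hypothetical user; identity is not managed by Kubernetes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Applying both files with kubectl apply creates the objects, as described above.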
If you need something persistent, you need to attach persistent storage to the container. In Kubernetes it works similarly: there are several objects associated with storage, namely the storage class, the persistent volume, and the persistent volume claim. A storage class is a way to describe the types of storage you have in your cluster. You may want separate storage classes for SSD and HDD, for production and test, for different redundancy levels of the underlying infrastructure, and so on. A lot of storage software vendors nowadays provide storage drivers for Kubernetes, which means that you can install those drivers and consume storage as a native Kubernetes resource. Azure is no exception: by default, when you deploy AKS, you will already have four storage classes: Azure Disk, Azure Disk Premium, Azure File, and Azure File Premium. An important setting in a storage class is the reclaim policy. It can be Delete or Retain; basically, it describes what happens to the underlying storage once the persistent volume claim that was using it gets deleted. By default in Azure this policy is set to Delete, so if you want to keep your disks after the claim is deleted, you should create your own customized storage class. Once you have a storage class, you can create a persistent volume claim, usually referred to as a PVC. This is basically the storage request itself, where you describe which storage class you want and how much space you need. And finally, you attach this PVC to a pod by specifying the PVC name and a path where the volume should be mounted. This is the flow of dynamic provisioning, which basically means that you dynamically provision storage using Kubernetes-native APIs: you have a storage class, you create a PVC from it, and then you attach it to a pod. Alternatively, you can use static provisioning. That means you can create, let's say, an Azure disk in the Azure portal and then use its disk URI to attach it to a pod.
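A sketch of that dynamic-provisioning flow, using a custom class with a Retain reclaim policy (names and sizes are illustrative; newer AKS clusters use the CSI provisioner disk.csi.azure.com instead of the in-tree one shown here):

```yaml
# Custom storage class that keeps the disk after the claim is deleted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-retain
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
---
# PVC requesting 5Gi from that class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-premium-retain
  resources:
    requests:
      storage: 5Gi
---
# Pod mounting the claim at /mnt/data.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-disk
spec:
  containers:
  - name: app
    image: myregistry.azurecr.io/app:v1   # illustrative image
    volumeMounts:
    - mountPath: /mnt/data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
```

This is exactly the class → PVC → pod chain described above: the claim triggers the provisioner to create an Azure disk, and the pod references the claim by name.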
In that case, you don't need to use storage drivers or storage classes, or even create PVCs; you basically decouple storage from Kubernetes. Both dynamic and static provisioning have their own advantages and disadvantages, but typically dynamic provisioning is the way to go. When it comes to Azure Kubernetes Service, there are a lot of different options for storage. You can use native Azure Disks and Azure Files. You can build your own software-defined storage inside the cluster with third-party solutions, but those solutions are not really easy to maintain, to be honest. There is also a NetApp integration on Azure, so if you really want a high-performance file share, you can use it as well, though it can be a little bit more expensive. The last option is to build your own storage outside of the cluster on VMs and use it statically, or even use drivers for dynamic provisioning if your storage of choice supports it.

12. Conclusion: Congratulations on finishing the course. Now we have our app containerized and running on Azure Kubernetes Service. You also know how to do some advanced things in Kubernetes with scheduling, securing access, and running persistent applications in your cluster. Of course, many of those topics deserve a separate course, but now you have an overview of what's available in Kubernetes, so that you can dig deeper into any of those topics if you need. So thanks for watching and see you later.