Helm - The Kubernetes package manager hands-on course | Ahmed Elfakharany | Skillshare


Helm - The Kubernetes package manager hands-on course

Ahmed Elfakharany, The DevOps guy

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

29 Lessons (3h 2m)
    • 1. What is this class about?

    • 2. What is Helm and why should we use it?

    • 3. Installing Helm

    • 4. Getting the Helm chart

    • 5. Installing the Nginx Helm chart

    • 6. Customizing chart installation values

    • 7. Upgrading and deleting Helm charts

    • 8. Exploring how Helm runs through the dry-run flag

    • 9. Inspecting a Helm release

    • 10. Helm history and rollback

    • 11. Helm Install and upgrade tips and tricks

    • 12. Helm chart craft 101

    • 13. Helm templates primer

    • 14. Packaging our app in a Helm chart

    • 15. Helm templates playground

    • 16. Helm template functions

    • 17. The .Files Helm method

    • 18. Helm flow control

    • 19. Helm looping with "range"

    • 20. The helper.tpl file and named templates

    • 21. Helm chart dependencies

    • 22. Helm library charts

    • 23. Build your own Helm repository

    • 24. Hosting your Helm repo on a web server

    • 25. Helm repo hosting on Chartmuseum

    • 26. Helm S3 plugin (AWS)

    • 27. Build your own Helm plugin (helmscp)

    • 28. Use custom protocol for Helm chart downloads

    • 29. Helm starter charts






About This Class



Course requirements:

  • Administrative access to a Kubernetes cluster (we use minikube in the labs)
  • Basic knowledge of Kubernetes and kubectl commands
  • Familiarity with Linux and the shell
  • Familiarity with Docker basic commands
  • Basic understanding of the YAML format
  • Watch the course content on a laptop and practice the commands with me

Who is this course for?

  • Aspiring DevOps students who want to add Helm to their toolsets
  • Ops staff who need to speed up and control complex Kubernetes deployments
  • Developers who are tasked with deploying their apps to Kubernetes and need to automate the process
  • Kubernetes students who want to get hands-on experience with Helm
  • IT managers who need to assess the value of Helm as a tool to be adopted in their teams

Helm is a tool used to package the Kubernetes manifest files that install a cloud-native application. Deployments, Services, Ingresses, ConfigMaps, etc. are all packed into a Helm chart. Using this Helm chart, you can deploy the app to a Kubernetes cluster the same way you use apt-get on Ubuntu or brew on macOS.

After completing this course, you will have a working knowledge of Helm. Not only will you be able to use ready-made Helm charts to automate day-to-day deployments, but you'll also be able to automate the most complex Kubernetes deployments and contribute your own charts to the community.

I've designed this course to focus on the important parts of Helm. I did my best not to bother you with boring material that you'd seldom use in your day-to-day life as a Helm and Kubernetes engineer. Instead, I give you the core stuff of the tool together with some tips and tricks that will let you code Helm charts like a pro in no time!

To get the most out of this course, I highly encourage you to open your laptop and do the labs that I explain in the class. There's nothing better than getting your hands dirty learning a new tool or technology. That way, by the end of this course, you'll find yourself already developing, applying, maintaining, and even sharing your very own Helm charts.

The best way to learn any tool is by using it! In this course, we'll work together to deploy ready-made Helm charts to Kubernetes using Helm. After mastering that, we'll start analyzing Helm charts bit by bit. Along the way, you'll learn the following:

  • Understanding why we need a package manager for Kubernetes

  • Deploying Helm to minikube (local Kubernetes cluster)

  • Understanding Helm repositories

  • Adding one or more Helm repositories to your system

  • Searching the Helm repository for your desired Chart

  • Using Helm to deploy ready-made Charts from popular repositories

  • Inspecting a Helm Chart deployment

  • Upgrading a Helm deployment and viewing its history

  • Customizing the Helm Chart to your own needs by modifying the values file

  • How (and when) to create your own Helm Charts

  • Understanding Helm Templates

  • Testing your Helm templates without applying them using the dry-run flag

  • Revisiting Helm history by upgrading and rolling back package deployments

  • Using Helm functions (include, indent, nindent, toYaml, b64enc, and more)

  • Decision making using conditional and logical statements (IF, NOT, AND, OR)

  • Looping through simple and complex objects using the "range" keyword

  • Deep diving into Helm variables

  • Debugging your Helm charts

  • Creating your own Helm repositories and pushing Charts

  • Deploying even more complex Kubernetes environments using Helm Chart dependencies

  • Learning about popular community-based Helm projects like Chartmuseum

  • Extending Helm by building your own repositories

  • Exploring different Helm plugins to automate repetitive tasks and store charts in the cloud

  • Building your own Helm plugins and using custom commands and protocols

  • Configuring Helm to create your own specific boilerplate charts using Helm starters

Meet Your Teacher


Ahmed Elfakharany

The DevOps guy


My name is Ahmed Elfakharany. I work as a senior DevOps and cloud engineer. I'm very passionate about new technologies so I constantly keep reading and learning. 

In my work life, I've held several IT roles, including web development, system administration, cloud computing, and operations. I've used languages like Python, PHP, Java, C#, Go, and Ruby, and web technologies like HTML, CSS, JavaScript, ASP.NET, jQuery, AngularJS, Laravel, and CodeIgniter, among others.

I like learning and passing on my knowledge to help others.

See full profile




1. What is this class about?: Hello everyone. My name is Ahmed. I'll be your instructor in this class. I am a senior DevOps and cloud architect, and I've spent several years in this industry. I've worked in startups, small, medium-size, and enterprise companies, as well as a freelancer and a technical writer for some of the bigger players in the game. I believe you're watching this video for one of two reasons. Either you know what Helm is, but you want to know whether this course will teach you more about it, or you don't know about Helm at all; maybe you've heard about it since you're working with Kubernetes, and you want to know if this course is for you. So let me quickly tell you what this course is all about and how you can use it to achieve the maximum results. Helm is the Kubernetes package manager. It is to Kubernetes what apt is to Ubuntu. When you run apt install nginx, for example, behind the scenes the apt package manager downloads files: configuration files, binary files, all kinds of directories and files. It places them in their locations, sets their permissions, and configures the service to start automatically on system startup, and so on and so forth. Helm does the same thing, but for Kubernetes. Some people say that if Kubernetes is the cloud operating system, then Helm is its package manager, and I believe they're correct. In this class, I'm assuming that you don't have any background in Helm. We're going to start from the bare minimum, from scratch. We're going to start by using ready-made charts: we'll take the Nginx Helm chart and see how we can download it, install it, configure it, upgrade it, and roll it back. Then we'll start creating our own Helm packages, which are called charts in Helm terminology. We're going to learn all about the anatomy of a Helm chart: how it's created, what the various files in the chart are for, and how to adapt it to your own requirements. 
Then we're going to dive into a little more detail. We'll start crafting our own templates, learning all about template functions like indent and toYaml, and about flow control with if conditions, ranges, and so on. We're also going to learn about Helm's special files, like the helpers file, the NOTES.txt file, and the Chart.yaml file. We're also going to learn about Helm dependencies and how one Helm chart can depend on other Helm charts so we can install them all in one go. We'll also learn about Helm plugins, and we're going to use the AWS S3 plugin as an example. Then we're going to finish up by learning about Helm starter charts. This course also includes some bonus lectures. A bonus lecture contains advanced topics that are not strictly required for you to learn Helm and be proficient in it; however, they add a lot of value to your knowledge. If you want to skip them, you can safely skip them and still get the most out of this class. But if you watch them, you'll gain even more knowledge and be even more proficient in Helm. For example, in one lecture we study the S3 plugin, and in another bonus lecture we learn how to create our own plugins using the Go programming language. So now you know what this course is all about; let me spend a few seconds explaining how to use it to get the maximum benefit. I assume that you are watching this class on your own laptop, and I also assume that you have access to a Kubernetes cluster, whether that's a minikube cluster installed on your laptop, a cloud-based cluster on AWS EKS or Google, or even a virtual machine or bare metal. It doesn't matter: as long as you have administrative access to a Kubernetes cluster, you are more than good to go. So that was all for this video. If you think this course is for you, go ahead and click on the enroll button. See you in lecture one. 2. 
What is Helm and why should we use it?: Hello and welcome to Helm, the Kubernetes package manager hands-on course. In order to learn about Helm, you first need to understand why it came into existence. Helm is a package manager, but what is a package manager? In Linux, for example, we use package managers to manage software installations: install, manage, and, when necessary, uninstall an application. Let's have an example. In Ubuntu, if we need to install Nginx, the well-known web server and reverse proxy, we'd issue the following command: sudo apt install nginx. In a few seconds, Nginx will be installed on our system. We can double-check that we have a working Nginx installation by opening a browser window and going to localhost; we see the Nginx welcome page. But behind the scenes, the apt package manager has done a lot of work to make this happen. For example, we have an Nginx configuration file with the default values deployed to /etc/nginx/nginx.conf. We also have the Nginx binary under /usr/sbin/nginx. Additionally, we have the startup unit under /etc/systemd/system/multi-user.target.wants/nginx.service, which contains all the required command-line options for Nginx to be started, reloaded, stopped, and restarted. All of this was done automatically for us as soon as we ran apt install nginx. There are different package managers for different operating systems. For example, Ubuntu uses apt, Fedora uses yum or DNF, macOS has Homebrew, and Windows has Chocolatey, Scoop, and others. Kubernetes is often referred to as a cloud operating system: Kubernetes enables you to deploy distributed microservices through containers on almost any environment. So let's say that we need to deploy Nginx to a Kubernetes cluster. At a minimum, we need the following. 
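The apt example above can be condensed into a short shell session. The package name and file paths are the standard ones on Ubuntu; the RUN_LIVE guard is just a convenience so the sketch does nothing on a machine where you don't want to install anything.

```shell
# Package name and the artifacts apt lays down for it (standard Ubuntu paths).
PKG=nginx
CONF=/etc/nginx/nginx.conf                                       # default configuration
BIN=/usr/sbin/nginx                                              # the web server binary
UNIT=/etc/systemd/system/multi-user.target.wants/nginx.service   # systemd startup link

# Set RUN_LIVE=1 to actually run the installation on an Ubuntu host.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  sudo apt install -y "$PKG"
  curl -s http://localhost | head -n 4   # first lines of the Nginx welcome page
fi
```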
A Deployment that will manage the Nginx pod, a Service, an Ingress, a ConfigMap to hold the server configuration, some Secrets, a ServiceAccount, and some RBAC rules. With a simple command like helm install nginx, the whole deployment is automated for you. In Helm terminology, a package is referred to as a chart. It contains manifests and accessory files that install the application to Kubernetes. They can be written in YAML or JSON format; in this course, we'll be sticking to YAML as it is more human-readable and easier to explain. Well, that was all for this lecture. Thanks for watching and see you in the next one. 3. Installing Helm: Hello everyone. In this lecture we are going to install Helm on our machine. Let's go to helm.sh and click on Get Started. Have a look at the installation prerequisites: you need to have a working Kubernetes cluster beforehand. In this lab, we use minikube since it can be installed on your local laptop; however, the very same procedure can be applied to any Kubernetes installation. Then we move on to installing Helm. Helm is written in Go, which means that it's just a self-contained executable binary file; it does not need any dependencies to work. So the easiest way to install Helm is to just download the binary file that works for your system. Those can be found on the official releases page. The latest Helm client at the time of this recording is 3.6.3; this is the version that we'll be using in this course. I'm on Ubuntu Linux, so I'm going to click on Linux amd64. This will download a tar archive that contains the binary file. Let's go to our terminal and uncompress our downloaded file. The archive contains a binary file in addition to the license and a readme file. Actually, this is all we need to install Helm. In fact, we can just start using it right away; for example, let's run helm version. As you can see, it is working. 
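The contrast the lecture draws, between applying each manifest by hand and a single Helm command, can be sketched as follows. The individual manifest file names are hypothetical; the Helm command is the one quoted in the lecture, guarded since it needs a reachable cluster.

```shell
RELEASE=nginx   # release name used in the lecture's one-liner

# Without Helm: apply every manifest yourself (file names are hypothetical):
#   kubectl apply -f deployment.yaml -f service.yaml -f ingress.yaml \
#       -f configmap.yaml -f secret.yaml -f serviceaccount.yaml -f rbac.yaml

# With Helm: one command installs the whole set. Set RUN_LIVE=1 on a real cluster.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  helm install "$RELEASE" bitnami/nginx
fi
```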
However, to avoid having to type the whole path to our Helm binary whenever we need to use it, let's move it to a location in our PATH. For example: sudo mv linux-amd64/helm /usr/local/bin. Now, from anywhere in our system, we can run helm and it will just work. This is by far the easiest way to install Helm, but it has one downside: whenever there is a new Helm version, you will have to go and manually download it from the GitHub releases page. If you want to always have the latest version, then you should use your OS package manager to install Helm. To do that, we go back to the Quickstart page and click on the installation guide, then scroll down to "Through package managers". Depending on your OS, you should find the appropriate installation method: we have Homebrew for macOS, Chocolatey for Windows, and apt or Snap for Ubuntu. Finally, you can also build Helm from source if you want to test the latest code even before it gets officially released, but in that case you need a working Go environment. Well, that was all for this lecture. See you in the next one. 4. Getting the Helm chart: Hello everyone and welcome to the second section of this course. In this section we are going to examine the basics of Helm by deploying Nginx to our Kubernetes cluster using a Helm chart. In the process, we'll learn the following: how to add a chart repository to your system; how to search this repository for your desired application; how to install the selected chart through Helm; how to see what was installed for you and validate the chart deployment; how to upgrade this installation to a newer version; and finally, how to completely remove the installation from your cluster and perform a cleanup. So without any further ado, let's start our lecture and discuss chart repositories. As mentioned in the introduction lecture, the Helm package is referred to as a chart. 
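The manual installation steps above can be sketched end to end. The download URL pattern (get.helm.sh) and archive layout match the official Helm releases; 3.6.3 is the version used in the course. The commands are guarded since they download and install software.

```shell
HELM_VERSION=3.6.3
TARBALL="helm-v${HELM_VERSION}-linux-amd64.tar.gz"

# Set RUN_LIVE=1 to actually download and install.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  curl -fsSLO "https://get.helm.sh/${TARBALL}"
  tar -xzf "$TARBALL"                       # unpacks linux-amd64/ with helm, LICENSE, README.md
  ./linux-amd64/helm version                # usable straight from the archive
  sudo mv linux-amd64/helm /usr/local/bin/  # put it on the PATH
  helm version                              # now works from anywhere
fi
```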
It contains all the Kubernetes resources that are needed to deploy an application. Those files are stored inside a directory that holds the chart's name. So, for example, if we were to install the WordPress Helm chart, we'd find a directory named WordPress that contains a number of files and directories. We are going to examine the contents of the chart later on; for now, you just need to know that a chart is what Helm calls its package, and that it's just a directory with a well-known structure that holds the same name as the chart. Since Helm was introduced, many people and organizations have created and shared their own charts. With hundreds of charts out there, there needed to be a way to organize them so that they are easily searchable and accessible to users. This is where chart repositories came to be used. There are many Helm chart repositories online. A repository is simply a web server that contains an index.yaml file which lists all the charts included in the repository. The chart itself is stored in compressed format. It is possible to store a chart's compressed file on a server other than the one containing the index.yaml file, an S3 bucket for example; however, they are often stored on the same server as the index file. With many chart repositories already available online, some web services curate and index them. Perhaps the most well-known one is Artifact Hub. Artifact Hub contains many CNCF project artifacts, like kubectl and CoreDNS plugins, OPA policies, and Helm charts and plugins. So, since we want to deploy Nginx, let's search for it there. As we can see, there are many repositories that list Nginx as one of their charts. The one with the most stars is the Bitnami repo, so let's click on that. In order to use the Bitnami repository to deploy the Nginx chart, you need to add the repository to your system first. 
This is very similar to adding an apt repository to your Ubuntu system when you want to install a package that is not provided by the default repositories, except that Helm does not come with a default repository, so we need to add our own. To add the repo, we use the helm repo add command followed by the name we want to give to the repo, then the URL that points to that repo. We can double-check that we have this repo in our list by running helm repo list. Now, that was all for this lecture. See you in the next one. 5. Installing the Nginx Helm chart: Hello everyone. In the previous lecture, we started our Nginx chart installation by adding the Bitnami repository to our system. As mentioned in a previous lecture, a repository contains several Helm charts in compressed format. They are listed in a file called index.yaml located on the web server where the repository exists. Since we have a link to this repository on our system, we can use it to query its contents using helm search repo. For example, let's search for nginx. Notice that this command searches all the Helm repositories on your system for the keyword you specified; since we only have Bitnami for the time being, all the results come from that source. It turns out that Bitnami contains more than one chart with the keyword nginx in the title or the description. So we have the Nginx web server, the one that we are interested in, and we also have the Nginx ingress controller, and Kong, which is an API gateway built on top of Nginx. By default, each chart listed represents the latest available version of both the chart and the application that it provides. It's important to understand what each version represents. The chart version here is 9.4.2; this represents the latest edits done by Bitnami to the chart. Those might be bug fixes, security patches, or a different application version. 
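The repository commands from this lecture, sketched in order. The Bitnami chart URL is the one published on Artifact Hub; the commands are guarded since they need Helm installed.

```shell
REPO_NAME=bitnami
REPO_URL=https://charts.bitnami.com/bitnami

# Set RUN_LIVE=1 on a machine with Helm installed.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  helm repo add "$REPO_NAME" "$REPO_URL"
  helm repo list           # confirm the repo is registered
  helm search repo nginx   # every chart in our repos matching the keyword
fi
```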
The app version, meanwhile, represents the Nginx version that this chart accommodates: 1.21.1. If we go to Nginx's official downloads page, we'll find that this is indeed the latest released Nginx version. However, many times you are interested in installing a specific version of Nginx, or perhaps more than one version at the same time. Let's see if the Bitnami repository hosts older versions of Nginx: run the same command, this time passing the --versions command-line flag. Since our keyword matches more than one Helm chart, we have a lot of search results. Let's narrow down our search by changing the keyword to "nginx server". As you can see, the Bitnami repo is hosting charts that offer the Nginx web server from the latest all the way back to much older versions. Now that we have examined the Bitnami repository, let's see how we can install the latest available Nginx chart. Before we proceed, I want to make an important note: the Helm client uses the same configuration that your kubectl client is using. To explain, let's run kubectl config get-contexts. We are currently using the minikube cluster and are pointing to the default namespace. This is the exact same info that Helm will use to install a chart. We don't want to install Nginx in the default namespace, so let's create one: kubectl create ns web. Let's change the current context to point to the web namespace, and double-check that we point to the correct namespace. Now, to install the chart, we use the following command: helm install, then we supply the name of the installation, nginx-01, then the name of the chart preceded by the repository name, bitnami/nginx. Notice that we needed to supply the installation name nginx-01 because Helm allows you to install several instances of the same chart on the same cluster at the same time. The installation name here is a way to differentiate between the different instances. 
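Putting the lecture's steps together as one guarded sketch; the release and namespace names are the ones used in the lab, and the context switch uses kubectl's standard set-context form.

```shell
NAMESPACE=web
RELEASE=nginx-01

# Set RUN_LIVE=1 against a real cluster (minikube in the labs).
if [ "${RUN_LIVE:-0}" = "1" ]; then
  kubectl config get-contexts                            # which cluster/namespace Helm will target
  kubectl create ns "$NAMESPACE"
  kubectl config set-context --current --namespace="$NAMESPACE"
  helm install "$RELEASE" bitnami/nginx                  # release name first, then repo/chart
fi
```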
The chart is configured to display a nice help message that instructs us on how to access Nginx. Let's start by examining the status of the nginx-01 service that was created for us. Notice that the chart automatically names the service nginx-01; this is intentional, to avoid naming collisions if you decide to install other instances of the same chart in the same namespace. The service is of type LoadBalancer, and since we are using minikube, a load balancer will not be created for us, so the status will always be pending. But we can still access our Nginx installation through the NodePort. Let's first get the URL by running minikube service -n web --url nginx-01. If we navigate to the URL displayed, we see the Nginx welcome page. To know what the Helm chart installed on your behalf, you can just run kubectl get all. We have a deployment that created a ReplicaSet, which created a pod, and we also have a service. That brings us to the end of this lecture. See you in the next one. 6. Customizing chart installation values: Hello everyone. In the previous lecture, we installed one instance of Nginx in our cluster, so we have one deployment and one service. However, we often need more than one instance of a particular installation. For example, what if we have another website that we want to deploy on the same cluster, and it needs an Nginx instance of its own? Let's also assume that this website requires Nginx 1.21, which is not the latest. So we need to know which chart version hosts this app version. We know from the previous lecture that we can use helm search repo --versions "nginx server" to get all the hosted versions of the chart. We have more than one chart version for Nginx 1.21; let's pick the latest, which is 9.3.5. Our Kubernetes environment is currently configured to target the web namespace. 
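The inspection and version-search commands from this lecture, sketched together. The exact output columns depend on your chart versions; guarded since everything needs a live minikube cluster.

```shell
SERVICE=nginx-01   # the service the chart named after our release

# Set RUN_LIVE=1 against a real minikube cluster.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  kubectl get svc "$SERVICE" -n web            # type LoadBalancer, stuck <pending> on minikube
  minikube service -n web --url "$SERVICE"     # NodePort URL to open in the browser
  kubectl get all -n web                       # deployment -> replicaset -> pod, plus the service
  helm search repo --versions "nginx server"   # every hosted chart/app version pair
fi
```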
But we need to install our second Nginx instance in the default namespace. We don't need to modify our current context, as Helm allows you to select the namespace to use for the installation. So, based on the new requirements, we can use the following command to deploy Nginx again: helm install nginx-02 --version 9.3.5 -n default bitnami/nginx. This time we won't check the default Nginx page in the browser; instead, let's ensure that we have the correct Nginx version. We get the pod name by running kubectl get pods -n default. Now let's open a shell to this pod, and from inside the container run nginx -v. And we have the correct Nginx version. So we now have two Nginx installations in our cluster. But there are many decisions that Helm has already made on our behalf. For example, the service is of type LoadBalancer, and Nginx is listening on the default port 80, in addition to many other parameters that Helm has already configured for us. Depending on how the chart was created, you may have many options to choose from. Let's go again to Artifact Hub and scroll down to the parameters part. Since there are a lot of configurable parameters for the Nginx chart, they have been classified by their type. For example, we are interested in changing the service type from LoadBalancer to NodePort. The required parameter is located in the traffic exposure parameters: the service type defaults to LoadBalancer, and we need to change that to NodePort. Also, the HTTP port defaults to 80; let's assume that we need the server to listen on 8080 instead. That's two parameters we need to change. But how? In Helm, those parameters are referred to as values, and there are two ways to modify values in a chart. The first one is by using the values.yaml file. Let's create one. 
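A guarded sketch of the versioned install and the in-pod check. The jsonpath expression for grabbing the first pod name is an assumption for brevity; the lecture simply copies the name from kubectl get pods.

```shell
CHART_VERSION=9.3.5   # chart version that ships the Nginx build we want

# Set RUN_LIVE=1 against a real cluster.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  helm install nginx-02 --version "$CHART_VERSION" -n default bitnami/nginx
  # Grab the first pod name in the namespace (assumes it is our Nginx pod).
  POD=$(kubectl get pods -n default -o jsonpath='{.items[0].metadata.name}')
  kubectl exec -n default "$POD" -- nginx -v   # prints the Nginx version inside the container
fi
```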
When we deploy the chart from the repository, this file is already populated with all the configurable parameters and their default values, as we saw on Artifact Hub. But you can override those parameters as you please by introducing your own values file with the parameters that you want to change. For example, we need to change the service type to NodePort, so we write the following: service, and under it, type: NodePort. Notice how we write this in YAML format: service here is the parent, as we can see in the specification, while type is the child since it comes after the dot. Save the file. Now we need to make a third Nginx deployment to our cluster; let's host it in the web namespace. We don't care about the version, so let's install the latest, but we need to instruct Helm to use our values.yaml file. So we add --values values.yaml, which is our values file. This is one way of changing the chart parameters. The second way is by using the --set command-line flag. We mentioned that we needed to change the service type to NodePort and also the service port to 8080. Let's change the service port using the command-line flag --set service.port=8080. Notice how we use the dot notation here on the command line instead of the YAML format. Finally, add the chart name prefixed by the repo: bitnami/nginx. Now, if we examine the service, we'll see that it is of type NodePort and the container is listening on port 8080. Finally, it's worth explaining the hierarchy of values in Helm charts. The parent chart values are the first in the list; they should contain sane defaults. They are overridden by any values file in a subchart, or child chart (we'll discuss subcharts later in this course). Those can in turn be shadowed by any values passed on the command line using the --set flag, which has the most influence in defining values. Let me explain it better. 
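The values-file override and the third install, sketched as shell. The service.type and service.port keys are assumed to match the chart's documented parameters on Artifact Hub; the install itself is guarded.

```shell
# Override only what we need; every other parameter keeps the chart default.
cat > values.yaml <<'EOF'
service:
  type: NodePort
EOF

# Set RUN_LIVE=1 against a real cluster.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  # Values file supplies the type, --set supplies the port.
  helm install nginx-03 -n web --values values.yaml \
       --set service.port=8080 bitnami/nginx
fi
```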
Consider that we have the service port defined as 80 in the outermost parent values file. Then a subchart redefines this value as 8888. Now this value shadows its parent, and the service will listen on port 8888. Finally, when applying the chart, the user uses the --set command-line flag to choose 8080. Now this value shadows both parents' values, and the service will eventually listen on port 8080. It's very important to understand the Helm values hierarchy to better design your values when building your own charts. Well, that's all for this lecture. See you in the next one. 7. Upgrading and deleting Helm charts: Hello everyone. In the previous lecture, we deployed three instances of our Nginx chart to the cluster. We can view those installations by running the helm list command, but notice that this command shows only the installations we deployed in the web namespace; to view all Helm installations cluster-wide, you need to pass the --all-namespaces flag. We also changed some of the default parameters of the Nginx Helm chart; for example, we changed the default port and service type. But what if we want to upgrade the existing chart to use the new service type and port? You upgrade a Helm chart when you change either the chart version or the configuration, or both. So we can upgrade our nginx-01 installation to use port 8080 and the NodePort service type, or we can upgrade it to use a different chart version. But in all cases, we are upgrading a specific instance of the installation: other installations remain intact, and the chart source is not affected by this change. Now, let's upgrade nginx-01 to use port 8080 and change the service type from LoadBalancer to NodePort: helm upgrade nginx-01, which is the installation name, --set service.type=NodePort --set service.port=8080, then the chart source, bitnami/nginx. 
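The listing and upgrade commands above, gathered into one guarded sketch; release names and value keys follow the earlier lectures.

```shell
RELEASE=nginx-01   # the instance we are upgrading

# Set RUN_LIVE=1 against a real cluster.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  helm list -n web              # releases in the web namespace only
  helm list --all-namespaces    # releases cluster-wide
  helm upgrade "$RELEASE" --set service.type=NodePort \
       --set service.port=8080 bitnami/nginx
fi
```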
If we check the nginx-01 service, we will see that our change has been reflected. Notice that we can also upgrade our installation by passing in a values file, the same way we did when installing a new chart instance. Please refer to the previous lecture for instructions about when to use the values file, when to use the command-line flags, and the difference between the two approaches. So in this example we upgraded the chart installation by changing values, but we can just as easily change the chart version as well: helm upgrade nginx-01 --version 9.3.5 bitnami/nginx. You can double-check that the chart has been upgraded by running helm list. As you can see, the nginx-01 installation is now using an older version of Nginx, since we used an older chart version. It's always a good practice to supply the values file or the command-line parameters when upgrading a chart like this: if you don't supply your modified values, you risk that they may revert to the default values, which can cause unexpected results. Alternatively, you can use the --reuse-values command-line flag; this will ensure that all your modified values are retained from one installation version to the next. Finally, we can uninstall a chart installation by using the uninstall command as follows: helm uninstall nginx-01. Notice that the command only needs the release name; it does not require a chart source. Let's ensure that all the nginx-01 resources are removed. The same thing can be applied to nginx-03. But what about nginx-02? As mentioned before, Helm is namespace-bound, so to uninstall nginx-02, we must supply the namespace as follows: helm uninstall nginx-02 -n default. Now, if we check the existing Helm installations in all the namespaces, we find none. That brings us to the end of this lecture. See you in the next one. 8. Exploring how Helm runs through the dry-run flag: Hello everyone. 
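A guarded sketch of the version upgrade and the cleanup, combining the lecture's commands with its own --reuse-values recommendation.

```shell
OLD_CHART=9.3.5   # the older chart version we downgrade/upgrade to

# Set RUN_LIVE=1 against a real cluster.
if [ "${RUN_LIVE:-0}" = "1" ]; then
  # Pin a chart version; --reuse-values keeps our earlier overrides intact.
  helm upgrade nginx-01 --version "$OLD_CHART" --reuse-values bitnami/nginx
  helm list -n web                  # nginx-01 now reports the older chart/app version

  helm uninstall nginx-01           # namespace taken from the current context
  helm uninstall nginx-03
  helm uninstall nginx-02 -n default
  helm list --all-namespaces        # nothing left
fi
```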
In the previous section we spent some time playing with Helm. I guess you've seen how easy it is to deploy a complete application on Kubernetes with just one command using Helm. In this section, we dive a little deeper into Helm, and we discuss the following: test a Helm deployment without actually affecting the cluster, aka a dry run; inspect a given Helm release for its values; visit the Helm release history; and finally, we'll explore some advanced techniques in installing and upgrading Helm charts. So let's start with the first lecture in this section and learn about the dry-run command. Before delving into testing a Helm release, let's first get to know what happens when you run the helm install or upgrade commands. First, Helm searches for the chart that you want to install and fetches it, from the local disk or from a remote repository, to your laptop. Then it parses the templates of the chart, plus any values in the values.yaml file, plus any values you pass on the command line through the --set flag. The result is a set of definition files that Kubernetes understands: a deployment, a service, an ingress, and so on. Finally, those definition files are sent to the Kubernetes API server, the same way you use kubectl, and we have an application running or upgraded on Kubernetes. Usually you will need to know the content of those definition files before sending them over to Kubernetes. This is when the dry-run flag comes into play. If you pass the --dry-run flag to a helm install command, Helm will not apply the generated definition files to Kubernetes. Instead, it will dump their contents to the screen, where you can see the product of mixing the values with the templates. Let's have an example. We'll repeat one of the commands that we used in the previous lectures. In this command, we use a mix of values from the values file and from the command line parameters.
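The install pipeline described above, sketched as a dry run (the values-file name here is hypothetical):

```shell
# Render the chart locally and print the manifests instead of
# sending them to the Kubernetes API server
helm install nginx-001 \
  -f values.yaml \
  --set service.type=NodePort \
  --set service.port=8080 \
  --dry-run \
  bitnami/nginx
```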
Now, if we want to examine what Helm would generate out of our values before actually applying them to the cluster, all we have to do is add --dry-run to that command. If we run this command, we see that a lot of content is generated on the screen. As you can see, it contains the definition files that would be applied to Kubernetes if we actually ran this command. Our changes were targeted specifically at the service manifest, so let's scroll till we find it. The filename is svc.yaml; file names don't matter a lot here. We can clearly see that the service is of type NodePort and the port is set to 8080. This way we have a clear idea of how Helm is going to apply the chart to the cluster. And that was all for this lecture. See you in the next one.

9. Inspecting a Helm release: Hello everyone. In this lecture we'll discuss how we can get information about a Helm release. First, let's redeploy our nginx release to the cluster. We already know that we can use the list command to determine the Helm releases installed on our system, together with their versions. You can see that we have nginx deployed to the cluster. But how can we access it? If you noticed, most Helm charts print a help message once the chart is successfully deployed. For example, the help message is giving us the required commands for determining the NodePort that nginx is using. But what if we started a new shell where this help text is no longer available, and we want to know how to connect to our server? This is where the helm get command comes in handy. It accepts a number of subcommands that let you get almost everything you want from a Helm release. For example, the help message is stored in a part of the chart called notes. So we type notes as a subcommand, then the name of the release. Here we go, we have our help message displayed again. Another important requirement is to determine which values were used for this particular release.
You can use the values subcommand for this purpose, and we get the values used for this specific release. But notice that this subcommand only shows the values supplied by the user; it does not provide the default values that were used by the chart. Sometimes it's useful to know those as well. For that, we pass the --all flag. Now we have all the values that the chart used, including the values that we supplied. Finally, we may want to see which definition files were used to deploy that Helm release to our cluster. This can be determined by using the get manifest subcommand. Notice the difference between this command and the --dry-run flag that we used in the previous lecture: the dry-run flag shows you what Helm would do if you ran the command with the supplied values, while the get manifest subcommand shows you the manifests after they were applied to the cluster. If an application that was deployed through a Helm release is not behaving as it should, the first step in debugging would be to see which manifests Helm used to deploy the release, using the helm get manifest command. And that brings us to the end of this lecture. See you in the next one.

10. Helm history and rollback: Hello everyone. Sometimes you may want to know what happened to a specific Helm release. To understand why you may need this, let's do the following. Given that we have the nginx release from the previous lecture — remember, we set the service type to NodePort for this release — I am going to upgrade it while intentionally placing an invalid value for the service type. Let's use the --dry-run flag that we learned about in the previous lecture to determine what exactly would be applied to Kubernetes as a result of this upgrade. If we take a look at the service manifest, we can clearly see that the service type is incorrect. Let's see what happens if we try to apply the command to Kubernetes. As expected, Kubernetes rejected the patch process, and Helm is relaying the error message to us.
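The helm get subcommands from this lecture, gathered as a sketch (release name as used in the course):

```shell
# Re-print the chart's NOTES (help message) for a release
helm get notes nginx-001

# Show only the user-supplied values…
helm get values nginx-001

# …or all computed values, defaults included
helm get values nginx-001 --all

# Show the manifests as they were applied to the cluster
helm get manifest nginx-001
```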
There is no service type named foobar. Now, if we run helm list, we see that the release status is failed. It's worth noting the various statuses that a given release may have. The first is pending-install. It means that Helm has generated the manifests, but they were not sent to Kubernetes yet. You can see this status when we use the --dry-run flag. The next one is deployed, which is the status of the release once Kubernetes has accepted the manifests and started deploying the application. Notice that deployed does not mean that the application is already running, as sometimes Kubernetes spends some time deploying the application, like when it pulls images from the registry. The third one is pending-upgrade. Like pending-install, this status means that the upgrade manifests have been generated but were not yet sent to Kubernetes. The release status changes to superseded when the release is upgraded and thus superseded by a newer release revision. And pending-rollback, like pending-install and pending-upgrade, is when Helm has finished generating the manifests for rolling back to a previous release but hasn't sent the manifests yet to Kubernetes. We'll examine helm rollback shortly in this lecture. When you uninstall a release, its status immediately changes to uninstalling while the process is ongoing. Once the release is completely uninstalled, the status changes to uninstalled. Notice that this is only visible if we are retaining history during the uninstall process; retaining history is discussed later in this lecture. Finally, the release status switches to failed when Kubernetes rejects the manifests that Helm generated during any operation. So the current status of our release is failed, because Kubernetes does not have a service of type foobar.
But assume that one of your colleagues did this upgrade, the status of the release is now failed, and you are tasked with determining why this failure happened and what the most recent deployed revision of the chart was. We can examine the previous revisions of this release by using the helm history command. As you can see, the most recent deployed revision of this release is revision one; revision two failed because the service type was invalid. Since this release upgrade failed, we have one of two options: either do another upgrade with the correct service type, or roll back to the most recent working revision. In this example, we only have one invalid value that needs to be changed, so we could just make another upgrade with the correct value. But in some cases, there are many parameters that may have wrong values, so it's safest to just roll back. To roll back, we run helm rollback, the release name, then the revision that we need to roll back to. If we run helm history again, we see that we have a new revision added to the list with a newer revision number. Revision two remains failed, and revision one is marked as superseded, since it is now replaced by revision three of this release. Now let's assume that we want to uninstall this release. The uninstall command by default deletes the release history. But sometimes you may want to keep a trace of your Helm releases in case you want to check something or even roll back to an uninstalled release. In this case, we can pass the --keep-history flag to the uninstall command. Now if we run helm history nginx-001, we see that the latest revision is marked as uninstalled. The revision number was not incremented because we haven't upgraded or rolled back the release. Perhaps we uninstalled this release by mistake and we want to recover it.
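History and rollback as a session sketch (the revision numbers follow the lecture's scenario):

```shell
# Inspect the revision history of the release
helm history nginx-001

# Roll back to the last known-good revision (revision 1 here);
# this creates a new revision rather than rewriting history
helm rollback nginx-001 1

# Uninstall but keep the release history around, so it can be
# audited or even rolled back to later
helm uninstall nginx-001 --keep-history
```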
Since we already have our history preserved, we can easily roll back to this release using helm rollback nginx-001 3, where three is the latest revision, the one that was uninstalled. Running helm history now shows that we have a new revision four, and the previous revision is marked as uninstalled. One final note I want to make in this lecture: when Helm marks a release as failed, it does so only if Kubernetes rejects the manifests. This means that you may have one or more errors in the generated manifests that can break the application; but since Kubernetes does not complain, Helm will still mark the release as deployed, although the application itself is not working. Let's have an example. Earlier in this lecture, I used a command to intentionally make Kubernetes reject the manifests. That worked because Kubernetes checks the service type in the service manifest before attempting to apply it. But what if the change that I make is not checked by Kubernetes? For example, I'll change the image registry to something that does not exist. If we run helm history, we'll notice that Helm marked this release as deployed. However, if we actually view the pod status, we'll discover that the pod cannot be started, since the image registry is incorrect. Of course, this can be corrected by rolling back to the previous release, but I just wanted you to understand when and why Helm would mark a release as failed. And that brings us to the end of this lecture. See you in the next one.

12. Helm chart craft 101: Hello everyone. Throughout this course we've been using ready-made charts, that is, charts that were created and are maintained by other people. In this section, we'll start learning how to create our own charts. Let's start by creating a directory to hold our chart's files. A chart is nothing but a bunch of files and directories with a specific structure and formatting that Helm understands. We'll go through those files and directories throughout this class. You could create those files manually,
using a text editor, but Helm has a shortcut command for this: helm create, followed by your chart name. Let's examine what has been created for us. As you can see, we have all the files and directories that Helm needs. Actually, the helm create command creates a chart that would install nginx. The reason is that nginx exercises lots of Kubernetes objects that can demonstrate several use cases of Helm. For example, we have a deployment, a service, an ingress, a service account, and even a Horizontal Pod Autoscaler. In fact, we can start using this skeleton chart right away by running helm install. Let's give our release a name, myapp-001, and the chart name, which is myapp. Notice that in this case we are referring to a chart that actually exists on the disk and is not downloaded from a repository, so we just specify the path to the chart directory. Let's see what was created for us through this chart. Notice that this chart's implementation for installing nginx is different from the Bitnami chart that we've been working with in the previous section. For example, this one creates a service of type ClusterIP instead of LoadBalancer like the previous one. If we want to access nginx through the ClusterIP service, we can simply run a port-forward command like this: kubectl port-forward, the pod name, the port that we want to use, and the container port. And using our browser, we can navigate to localhost:8080 and see the nginx default page. The first file we examine is Chart.yaml. The Chart.yaml file that was generated for us contains several keys, but Helm requires only three of them. The first is apiVersion, which can be v1 or v2. The previous version of Helm, Helm 2, could only work with v1; but Helm 3, which is the current version, can work with both v1 and v2. So if your charts are to be run only by Helm 3, you should stick to v2. The second mandatory key is the name, which indicates the chart title.
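The layout that helm create produces looks roughly like this (a sketch of the standard skeleton; the exact file list can vary between Helm versions):

```shell
helm create myapp
# myapp/
# ├── Chart.yaml          # chart metadata
# ├── values.yaml         # default configuration values
# ├── .helmignore         # files excluded from packaging
# ├── charts/             # subchart dependencies
# └── templates/
#     ├── _helpers.tpl    # named (mini) templates
#     ├── deployment.yaml
#     ├── service.yaml
#     ├── serviceaccount.yaml
#     ├── ingress.yaml
#     ├── hpa.yaml
#     └── NOTES.txt       # help message printed after install

helm install myapp-001 ./myapp
kubectl port-forward <pod-name> 8080:80   # <pod-name> from: kubectl get pods
```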
Finally, we have the version, which indicates the chart version. We were exposed to chart versions when we wanted to change the nginx version in a previous lecture. Whenever you make changes to your chart, it's advised to increment this version. It's required to follow the semantic versioning scheme; you can find more information about semantic versioning by going to semver.org. But there are many more optional keys that you can add. If we go to the documentation at helm.sh/docs/topics/charts, we can see some interesting ones, including the description, which contains an overview of what the chart is meant to do; the keywords, which can help users searching for that chart; home, which is the URL to the project's page, which might contain more information and examples about how to use the chart; maintainers, which contains the contact details of the people who are maintaining the chart, so that users can contact them for help if needed; and icon, which can be a URL to an SVG or a PNG file for the product's icon. An optional yet important parameter here is the appVersion. This indicates the version of the application that this chart attempts to install. So if we were using this chart to deploy nginx, this field would hold the version of nginx that the chart is deploying. Since we'll be using this chart for our own application and not nginx, we need to modify this Chart.yaml file. We modify the description to be a hello-world application written for Node.js. The type remains application, and the version stays 0.1.0, since this is the chart version, but we change the appVersion to be 1.0.0. Save the file. This is the first step in creating your own chart. In the coming lectures, we will play with the other files in the chart, like the templates, to further customize the chart for our application. And that brings us to the end of this lecture. See you in the next one.

13. Helm templates primer: Hello everyone.
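A Chart.yaml carrying the edits from this lecture might look like this (the optional fields shown are illustrative examples, not values from the course):

```yaml
apiVersion: v2                # v2 for Helm-3-only charts
name: myapp                   # mandatory: chart title
version: 0.1.0                # mandatory: chart version (semver)
appVersion: "1.0.0"           # version of the packaged application
type: application
description: A hello-world application written for Node.js
# Optional keys documented at helm.sh/docs/topics/charts:
keywords:
  - hello-world
  - nodejs
home: https://example.com/myapp        # hypothetical project page
icon: https://example.com/myapp.png    # hypothetical icon URL
maintainers:
  - name: Jane Doe                     # hypothetical maintainer
    email: jane@example.com
```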
In the previous lecture, we discussed the contents of the Chart.yaml file. Chart.yaml provides information about your chart, but on its own it cannot do any deployments or send any commands to Kubernetes. To do so, we start looking at the templates. Templates are just YAML files, but they must be located in a directory called templates. As mentioned before, Helm combines the content of those templates with the values you provide, either through the values.yaml file or through the --set flag, to produce valid Kubernetes manifests. But how? Let's have a look at the service.yaml file, which is the template responsible for generating the service YAML manifest. As you can see, it looks exactly like any Kubernetes service manifest, with some subtle differences. In several places we can see content between pairs of curly braces. In Helm, anything between double curly braces is considered code that needs to be executed; anything outside of them is rendered as-is, without any changes. This is the first note we should remember when writing a Helm template. So apiVersion, kind and metadata will not be touched by Helm; they will be output to the resulting manifest without any changes. The port of the service is brought from the values file. We use the dot to separate parent and child values. The dot at the start of Values is called root, since it is a grandparent of all variables in the chart. But have a look at the line where the name of the service is defined. In the previous lecture, we mentioned that Helm uses the release name to name the Kubernetes objects that it creates. This line defines how that happens. Helm uses a function called include. The include function embeds another template in place; that is, when this template is rendered, this line is replaced by the contents of a template called myapp.fullname. Like in other programming languages, Helm functions accept parameters.
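The generated service.yaml template looks roughly like this (a sketch close to what helm create emits; the label helpers are abbreviated):

```yaml
apiVersion: v1                     # rendered as-is: no curly braces
kind: Service
metadata:
  name: {{ include "myapp.fullname" . }}   # embeds a named template
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}         # root → Values → service → type
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
```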
Those parameters are passed into the function on the same line, separated by spaces. So the include function's first parameter is the template name, while its second parameter defines the scope of variables that this template can use. We are supplying dot, which again means root; thus we are granting this template access to all the variables in the chart. Now, we've just mentioned that the include function is referring to another template called myapp.fullname. We don't have a file with that name in the templates directory, but we do have a file called _helpers.tpl. It starts with an underscore to differentiate it from other template files: Helm does not render this file. It is rather used to define mini templates that can be embedded in other templates. For example, it defines a template called myapp.fullname. If you don't understand the code inside this mini template, that's okay, since we'll discuss it in more detail later in this class. You may have noticed a pipe symbol. This is another important thing to remember when writing a Helm template. The pipe, as in Linux, takes the output of whatever is on its left-hand side and injects it as input to the function on its right-hand side. So if we print hello using the print function and pass the output to the trunc function with two as its parameter, the trunc function will keep the first two characters of the word and truncate the rest; hello becomes he. Also notice the dash that precedes the function in some places. A dash at the start of a command removes any whitespace before the generated output; if the dash is at the end, it removes whitespace after the output. If you use a dash at the start of a command, you will often need to pipe the output to an indent function to make sure that the output is correctly indented to form valid YAML. For example, the labels need to be indented by four whitespaces, so we use the nindent function, which also adds a new line to the start of the output.
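A minimal sketch of the pieces just described — a named template in _helpers.tpl, and the pipe example (the fullname body here is simplified from what helm create actually generates):

```yaml
# templates/_helpers.tpl — never rendered directly by Helm
{{- define "myapp.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

# Pipes chain left to right, like in a Linux shell:
#   {{ print "hello" | trunc 2 }}   renders as   he
```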
Let's run helm install myapp-002 --dry-run . — notice how we use the dot here to refer to the current directory where the chart exists. Now compare the template with the resulting file. As you can see, Helm has processed the dynamic parts of the template to generate the final manifests. I hope by now you've got a taste of what Helm templates look like and how Helm processes them to generate the Kubernetes manifests. That brings us to the end of this lecture. See you in the next one.

14. Packaging our app in a Helm chart: Hello everyone. The main purpose of using Helm is to package Kubernetes applications so that they can be easily installed, upgraded and uninstalled. Earlier in this section we used the helm create command to get a skeleton Helm chart that deploys nginx. In this lecture, we'll start customizing it to deploy our own app. But before doing this, let's examine our app. It's a very simple API written in JavaScript and encapsulated in a Docker container. Let's see it in action by running docker run -p 8080:80 afakharany/hello-nodejs, with a tag that corresponds to the appVersion, which is 1.0.0. If we send an HTTP request to the container, it will reply with hello world followed by its version. Let's package this app in our Helm chart. We've already examined the service.yaml template in the previous lecture, so let's have a look at the deployment, specifically the container image part. It grabs the repository name from the values, and also the image tag. But if you pay attention to the image tag part, you will see that the value is piped to a function called default. The default function takes an input, and if that input is empty or undefined, it will output its parameter. For example, .Values.drink piped to default tea: if .Values.drink is empty or undefined, the output of this line would be tea; but if we do have a value for drink, it would be used instead.
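The image line in the generated deployment template uses default like this (a sketch matching the helm create skeleton):

```yaml
# templates/deployment.yaml (excerpt)
containers:
  - name: {{ .Chart.Name }}
    # the tag falls back to the chart's appVersion when
    # .Values.image.tag is empty or undefined
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
```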
So the purpose of this line is to allow the user to specify the tag to be used for the image; but if the user does not supply it, then the appVersion taken from the Chart.yaml file will be used as the image tag. Actually, this makes it easy for you, the chart maintainer, to change the app version in only one place, which is Chart.yaml. The deployment will automatically pick up this value and use it as the image tag. Hence, the only change we need to make for this chart to work is to change the image repository in the values file. So let's open the values file and type the image repository name, which is afakharany/hello-nodejs. Save the file. Let's ensure that we don't have any running Docker containers that might be using our network ports. Additionally, let's uninstall any leftover Helm releases. Then run helm install myapp-001 . — and as we mentioned in a previous lecture, the dot refers to the current directory where the chart is located. Since this is a ClusterIP type of service, we will use port-forward to access it: kubectl port-forward, the pod name, 8080:80. Then from the browser we go to localhost:8080, and we can see the response from our container. As you can see, Helm makes it extremely easy to deploy regular applications to Kubernetes. If you were to deploy this using kubectl, you would have to run kubectl apply -f deployment.yaml, followed by kubectl apply -f svc.yaml, then use kubectl apply for all the other manifests that your application might need, like a ConfigMap, a Secret, an Ingress, a Role or a ClusterRole, and so on. But the real power of Helm comes when you can customize the installation by only changing some parameters in the values.yaml file or through the command line flags. If you noticed, we didn't make any changes to the deployment or the service templates. This reduces the probability of human errors.
Now that we have our application deployed, how about sharing it with other team members or with the community? As you already know by now, a Helm chart is just a directory with a well-defined structure. So if you want to share your chart with a colleague, you could just compress the chart directory contents using tar or zip or any other similar command and put it on a shared network drive where it can be accessed. But this brings an issue: what if you produce a new version of your Helm chart? Now users of your chart must ensure that they have two separate directories for two different chart versions of yours, and they will have to ensure that each directory clearly identifies which version it contains. This will quickly create a mess. Fortunately, Helm has a solution for this. By running helm package, then the path to the chart directory, it will automatically create an archive for the chart with the version appended to it. Notice the presence of a hidden file called .helmignore. This file was already created for us by the helm create command. It serves the same purpose as the .gitignore file: any files or directories referenced in this file will not be included in the Helm package. This is particularly useful when you version control your charts, so that the .git directory is not bundled with your final package. Once you have this archive with this naming convention, you can just share it with your colleagues, or put it on a web server or in an S3 bucket, and it can be used as-is; users do not need to uncompress it. Let's see: helm install myapp-002, then the path to the archive — just the path, without having to uncompress it, without having to put it in a directory, just use the archive name. And we have a new release for our chart, with a new pod and a new service. In a future lecture, we'll discuss more advanced ways to package and distribute Helm charts. So that brings us to the end of this lecture. See you in the next one.

15. Helm templates playground: Hello everyone.
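Packaging and installing from the archive, sketched (the archive name follows Helm's name-version convention for this chart):

```shell
# Build a versioned archive from the chart directory;
# anything matched by .helmignore is excluded
helm package ./myapp
# produces: myapp-0.1.0.tgz

# Install straight from the archive — no need to extract it
helm install myapp-002 myapp-0.1.0.tgz
```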
I guess you already know by now that you can inject parameters into a Helm chart using the values file or the --set command line flag. But that is not the only source of parameters available to you when you deploy a Helm chart release. Starting with the dot, or the root scope as it is called, you have access to useful data about the chart. For example, the .Chart object gives you access to the parameters stored in the Chart.yaml file. Just make sure that you capitalize the first letter of the parameter when you want to use it in one of the templates. The main reason for this is that Helm is written in the Go programming language, and it uses the Go templating engine for its templates. If you are familiar with Go, you'll feel right at home; but if you're not a programmer, don't worry — you don't need a Go background for developing Helm templates. You also have access to some other public properties like .Release, .Capabilities and others. You can find a full list in Helm's built-in objects documentation. But what if you want to play with those parameters and see what they provide? Let me share a little trick with you. If you noticed, a Helm chart may contain a file called NOTES.txt. The contents of this file are rendered by Helm the same way it renders other templates, and the output is printed to the user after the chart is deployed. The purpose of this file is to provide some help messages or instructions to the chart users as to how to access the application once it is deployed — for example, how to get the service port, how to get the pod name, and so on. The good thing is that, since Helm does not regard this file as a Kubernetes template, it will not use it to generate a Kubernetes manifest file, so it will not validate it against the Kubernetes API like it does with other files such as the service or the deployment.
The bottom line is, you can write whatever Helm template code you like here, and it will just be rendered for you. Combined with the --dry-run flag, you have an ad hoc playground for testing arbitrary Helm template functions and statements. Let's try it. So as not to mess with our myapp chart, let's create another chart called playground. Inside this chart, open the NOTES.txt and remove its contents. Now let's start examining the .Chart and .Release parameters that we just learned about. The name of this chart is .Chart.Name, its version is .Chart.Version, and the bundled application version is .Chart.AppVersion. Save the file, then run helm install test (the name of the release is of no importance here; you can call it anything you want), then --dry-run and dot. Helm is showing you the Kubernetes manifests that it created from the templates, but we're not interested in those; we're rather focusing on the notes part of the output, which comes at the end. As you can see, we know what .Chart.Name, .Chart.Version and .Chart.AppVersion are substituted for. Let's extend this to the .Release parameter set. Open the file again and add the following: the release name is .Release.Name, and it will be installed to the .Release.Namespace namespace. Is this a new installation? .Release.IsInstall, which is a Boolean value, so it will evaluate to true or false depending on the type of operation you are doing. Let's add another one for the upgrade: is this an upgrade? .Release.IsUpgrade. Later in this class, we will learn about the if conditional, with which you can make different decisions depending on whether this was a new installation or an upgrade. Save the file, run the dry-run command again, and we have our output. This technique will work with any Helm code that you would normally put in a template. It is very useful when you need to know why a template is failing or what the generated output looks like.
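The playground NOTES.txt built up in this lecture would look roughly like this (a sketch; the wording of each line is paraphrased from the lecture):

```yaml
# playground/templates/NOTES.txt — rendered by Helm, but never
# validated against the Kubernetes API
The name of this chart is {{ .Chart.Name }}
Its version is {{ .Chart.Version }}
The bundled application version is {{ .Chart.AppVersion }}
The release name is {{ .Release.Name }}
It will be installed to the {{ .Release.Namespace }} namespace
Is this a new installation? {{ .Release.IsInstall }}
Is this an upgrade? {{ .Release.IsUpgrade }}
```

Rendered with `helm install test --dry-run .`, the notes appear at the end of the output with every expression substituted.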
Let's move on to another built-in object in Helm: the capabilities. Like the .Release data, the capabilities are not read from a file but rather provided by Helm at runtime. The .Capabilities set can be used when you want to know some information about the target environment — for example, the version of Kubernetes in use. Let's see: open the NOTES.txt again and add the following: the version of Kubernetes running on this cluster is .Capabilities.KubeVersion. Save the file and run the command again, and here we go. Throughout the rest of this section, we will use this technique to show you the results of various Helm template functions and statements before applying them to a template, to make things easier to understand. And that brings us to the end of this lecture. See you in the next one.

16. Helm template functions: Hello everyone. We now know the basics of how Helm works and what the templates look like. It's time now to learn more about the different functions that Helm provides to make our life easier as Helm template developers. So let's get started. In the previous couple of lectures, we've already been exposed to the default and the nindent functions. As we've already mentioned, Helm uses the Go template engine for processing its templates. However, to make it even easier for you to develop templates, Helm also includes the Sprig library. Sprig is a project that you can find at masterminds.github.io/sprig. The purpose of this library is to provide the Go template engine with functions that can be found in other commonly used web programming languages like JavaScript. So if you're using Go to develop a template, you can easily just import this library and start using the over 100 functions that it provides. The nindent function is an example of the extra functions that Sprig provides. Now let's discuss an interesting function that not so many people understand when they first see it: the toYaml function.
The toYaml function, as the name suggests, converts the input that's passed to it to valid YAML. Let's see how this function is used in action. If we have a look at the container resources, we'll see that the resources are taken from the values file, then passed to the toYaml function, and the final output is piped to the nindent function to add 12 whitespaces to the resulting text. So let's break this down one by one. First, let's see what's defined in the values file under the resources parameter. In Kubernetes, you can define the resources that the container is allowed to use. The limits define the maximum CPU and memory that the container is allowed to consume on the node, while the requests define the minimum resources that the container needs in order to operate properly. The default chart generated by the helm create command has those values commented out by default, so let's uncomment them. From the last lecture, we know that we can use the NOTES.txt file as our playground for testing template functions. So let's go to the NOTES.txt, remove its content, and add the following: resources, colon, new line, then, between double curly braces, .Values.resources. Save the file, run helm install test --dry-run . and have a look at the output. Most people expect that we'd have the resources correctly formatted in YAML with the correct indentation, but what we have here is totally different. The reason is that when the resources part of the values file — actually, when any value — is passed to Helm, it is processed by Go, and Go knows nothing about YAML; it only understands its own variable types. A map is one of those. We're not learning Go here, and I don't expect you to know what a map is. What I need you to understand is that whatever value is passed to Helm is processed by the engine first, and if it is a compound value like this one, it's never rendered natively as YAML. Hence, we need a function that does this for us.
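The experiment, step by step (the rendered map line below is how Go's template engine typically prints a nested map; treat the exact output as indicative rather than exact):

```yaml
# values.yaml — the uncommented defaults from helm create
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

# NOTES.txt, first attempt — no toYaml:
#   resources:
#   {{ .Values.resources }}
# renders something like Go's native map representation:
#   resources:
#   map[limits:map[cpu:100m memory:128Mi] requests:map[cpu:100m memory:128Mi]]
```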
This is where the toYaml function comes in handy. Open the NOTES.txt file again and pipe the value to the toYaml function. Execute the dry-run command again. We have the output in YAML, but a new issue arose: the limits and requests are children of resources, so this whole block should be indented by two whitespaces. Well, that's easy. Open the NOTES.txt file again and add two whitespaces before the code so that the block is correctly indented. Save the file and run the dry-run command. And that's not what we expected. Only the first line of the block was indented, but the rest is not. So adding two whitespaces before the code may be good for readability, but it will not correctly add the required indentation to the resulting YAML. Helm provides two functions to overcome this problem: the indent and nindent functions. Let's start with the indent function and see what we get. So we pipe the output of the toYaml function to the indent function and pass 2 as the number of whitespace characters that we need for indentation. Save the file and run the dry-run command again. So we have a YAML block correctly indented, except for the first line. That's because the first line is always affected by whatever comes before it in the document. In our case, we manually added two whitespaces before the code's start, so they are reflected here. Then the indent function added another two whitespaces, so the first line has two extra whitespaces. This is invalid YAML. To overcome the problem of extra whitespaces from the template messing with the output, Helm provides the dash character. By adding a dash immediately after the opening curly braces, we are instructing Helm to disregard any whitespace characters that come before the code. Notice that the dash must be attached to the curly brace and followed by a space, like this. Now, let's run the command again. Interestingly, the first line of the block is correctly indented.
But we have one final issue. We mentioned that the dash removes all whitespaces that precede the dynamic parts of the template, but whitespace includes the newline character. As a result, the block starts on the same line as the preceding expression and not on a new line as it should. To work around this, we use the nindent function. Like the indent function, it adds the required whitespaces before the YAML block, but it also adds a new line before the final output. Save the file and run the dry-run command. Finally, we have our YAML block taken from the values file and correctly formatted into valid YAML. Going back to the deployment template, we can see that the nindent function is used the same way we used it, but it is indenting the YAML block by 12 whitespaces. The reason is that if we look at the deployment object, we will see that we have the spec part. Underneath it, we have the deployment template: that's two whitespaces of indentation. Inside the template, we have the spec section: that's another two. spec has the containers list, or array: that's an additional two whitespaces. Then the container item inside it: another two. The resources of this container: two more. And finally, the content of the resources block, which defines the CPU and memory requests and limits. If we add all those whitespaces up, we have 12 indentation characters to reach the requests and limits block that are defined in the values file. Hence, the nindent function is passed 12 as an argument. I know that this lecture is long enough, so I'll make one final note before finishing it. We now know that the toYaml function accepts the YAML content from the values file and properly formats it into valid YAML that can be sent to Kubernetes. But the function's input can also come from the command line using the --set flag, and it will work equally well. For example: helm install test --dry-run --set resources.limits.cpu=120m .
As you can see, the updated CPU value was passed to the toYaml function and correctly placed in the resulting YAML. That brings us to the end of this lecture. See you in the next one.

17. The .Files Helm method: Hello everyone. All applications need configuration, and our app is no exception. For example, it needs to know which network port it should listen on, whether to use SSL or not, and so on. In Kubernetes, there are two ways of injecting configuration parameters into the pod. The first is through the ConfigMap, for nonsensitive data, and the second is the Secret, for sensitive data like passwords and API keys. Before diving deeper into this topic, let's first review some of the changes that were made to our application. Now, my app does not return a Hello World in response to an HTTP request. Instead, it displays the current weather conditions for the city that the user provides through the URL. To do this, it contacts a public weather API to get this data, and our application needs to authenticate itself first to be able to get the required info. Let's see first how this can be accomplished through Docker: docker run -p 8080:8080 -d. Then we need to pass in the API key as an environment variable, so -e API_KEY (in all caps) equals our API key. And we also need to mount the configuration file as a bind mount, so -v $PWD (for the current working directory) /config/default.json:/app/config/default.json. Then the name of our image, hello-nodejs, with the tag 2.0.0. Notice that we updated the version of our app since we added more functionality. Let's make sure that our app is running. Now let's see what we have. Navigate to localhost:8080/Amsterdam, and we have the current weather conditions for Amsterdam. Let's try Rome, then Paris. And here we go.
So, for our application to provide the needed functionality on Kubernetes, we need to pass in the configuration file through a ConfigMap and the API key through a Secret. The first step is to change the application version in the Chart.yaml file. As we learned before, the deployment will automatically pick the correct image tag from this value. Now we need a ConfigMap. The chart that the helm create command generates does not include a ConfigMap by default, so let's create one. In the templates directory, we create a new file called configmap.yaml and add the following: apiVersion is v1, kind is ConfigMap, and for the metadata name, let's use the named template: include "myapp.fullname". Then comes the data part. Here we have two ways of saving our file in the ConfigMap. The first is by just adding the file contents to the ConfigMap itself, like so. Since our config file is just a few lines, we can just copy and paste its contents into the ConfigMap. But in several cases, the configuration file may contain dozens or even hundreds of lines, which makes copying and pasting it every time we have a new file a daunting task. Helm offers the .Files group of methods. It allows us to read and insert the contents of files that exist on the local file system. So in our case, let's create a file called default.json in our local chart directory and insert the required content in it. We can then delete the literal file contents from the data part of the ConfigMap and add the following instead: between double curly braces, .Files.Get "default.json", piped to indent 4. Notice that default.json should exist in the chart root directory and not inside the templates directory. When Helm runs, the .Files.Get method will read the contents of the default.json file and insert them here. Now that we have our ConfigMap, let's define our secrets. We create a new file in the templates directory called secret.yaml and start writing our definition.
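Before we move on to the secret, here is a sketch of the ConfigMap template we just assembled (the default.json data key name is my assumption; the transcript doesn't spell it out):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "myapp.fullname" . }}
data:
  default.json: |-
{{ .Files.Get "default.json" | indent 4 }}
```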
apiVersion is v1, kind is Secret, and for the metadata name, again we include the "myapp.fullname" named template. The type of the secret is Opaque. Then comes the data part. We need to pass this secret to the container as an environment variable, so we type API_KEY, then a colon, and then we need to encode our API key in base64 format. Helm has a function for this called b64enc, so between double curly braces we add our API key, then pipe it to the b64enc function. A word of caution: of course, do not store sensitive information like API keys or passwords in plain text like this. We do this in the lab only to make things simple to understand, but in real-world scenarios this string should be stored in a secure location like HashiCorp Vault or AWS Parameter Store or something similar. Save the file. Now that we have a ConfigMap and a Secret, we need to modify our deployment to start using them. So in the deployment template, we add volumeMounts. The name is config and the mountPath is /app/config/default.json, since this is the path where our app expects to find its configuration file. Then we have our environment variables: the name is API_KEY, since this is the environment variable that our app expects; then valueFrom, secretKeyRef; the name of the secret is include "myapp.fullname"; and the key inside the secret is API_KEY, in all caps. Finally, we add the volumes to our deployment: the name is config, it's a configMap, and its name is include "myapp.fullname". Save the file. One last thing before we run this chart: we need to bump its version. We've already updated the appVersion since we have a new image, but since we also made changes to the chart itself, we need to bump the chart version as well. Go to Chart.yaml and change the version to 0.1.1. And now let's see what Helm will generate for us if we render this chart. Notice how Helm read our default.json file and inserted its contents in the data part of the ConfigMap with the correct indentation.
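Put together, the secret and the deployment wiring described above look roughly like this (the literal API key here is a placeholder; in a real chart it would come from a secure store):

```yaml
# templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "myapp.fullname" . }}
type: Opaque
data:
  API_KEY: {{ "my-api-key" | b64enc }}

# templates/deployment.yaml, container spec excerpt
env:
  - name: API_KEY
    valueFrom:
      secretKeyRef:
        name: {{ include "myapp.fullname" . }}
        key: API_KEY
```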
The correct indentation comes courtesy of the indent function. Also, our API key is correctly formatted and encoded in the secret.yaml manifest. Since we already have this chart installed and we want to upgrade it, we run helm upgrade myapp-01 . If we run helm list, we'll see that we have a new revision for the myapp-01 release, with a new chart version and a new application version. Let's ensure that our new pod is running, and run the port-forward command to be able to gain access to the service from our machine. Open the browser and let's see what the weather is like currently in Madrid, then Berlin. The .Files method can do more than just read a single file. For example, let's assume that we have a group of configuration files stored in a directory called config, and we want to include all of them in the ConfigMap. So let's create the config directory and move our default.json file there, since it is one of our config files, and let's create five more JSON files using a simple for loop. We add some JSON content to each file to identify it: its name and the file name, in JSON format. Now, assuming that we want to add those six files to the ConfigMap automatically: first, we remove the default.json part. Then, between double curly braces, we add .Files.Glob. Then we specify the location of the files that we want to include in our ConfigMap, between double quotes: config/*, and the output is piped to the indent function as usual. If we had specific file extensions to match, we could use config/*.json, and so on. But this code will not work. Let's see why. Run helm upgrade --dry-run myapp-01 . Have a look at this error: expected string, got engine.files. The problem is that .Files.Glob does not read the files in the directory that you pass to it. Instead, it returns a files object, and it's up to you to decide how you want to use this files object. We'll see why in a moment.
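The failing attempt, and the shape of the fix we are about to apply, look like this as a sketch (the indentation width is my assumption):

```yaml
# Fails: Glob returns a files object, not a string
data:
{{ .Files.Glob "config/*" | indent 2 }}

# Works: AsConfig renders the files object as ConfigMap data entries
data:
{{ (.Files.Glob "config/*").AsConfig | indent 2 }}
```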
For now, let's use the AsConfig method to read the contents of the files in the files object and add them to the ConfigMap. The round brackets here are used to indicate that the AsConfig method will be applied to the output of the Glob method. Now let's run the dry-run command again. In the ConfigMap part, you can see that we have our files correctly formatted as YAML and placed into the ConfigMap. Now let's understand why the Glob method returns a files object instead of reading the files' contents one by one, like the Get method did. Open the secret.yaml template, and let's assume that we want those files to be secrets. Again, we use the Glob method, but this time we chain it to another method: AsSecrets. The AsSecrets method will read the contents of each file like AsConfig did, but it will convert them into a format that is suitable for being included in a Secret, and it will encode them in base64 format automatically. Let's see: run the helm dry-run command. And as you can see, the files have been correctly placed inside the secret's data part, and they've also been base64 encoded. If you want to double-check, let's copy the base64 string of one of the files and decode it, and we have the JSON content of the file. So in this lecture, we learned how we can use the .Files methods in Helm to automate inserting file contents into ConfigMaps and Secrets. That brings us to the end of this lecture. See you in the next one.

18. Helm flow control: Hello everyone. To make your Helm charts highly customizable and environment agnostic, you need the ability to make conditional decisions. For example, will the chart create a service account as part of the deployment or not? Let's have a look at the serviceaccount template in our chart and see how it works. The first line uses the if condition statement to check whether the serviceAccount.create value is set to true.
If it is true, this means that the user wants the chart to create a service account as part of the deployment process. But if it is set to false, then this code block is ignored. Since the if statement comes at the very start of the file, the whole template will not be included in the generated manifests. Let's have a look at the values file and see what the serviceAccount.create parameter is set to. Let's use a dry-run command and verify that we have the service account resource in the output. And indeed we do have it. Now let's set the create value to false and run the command again. Now it's gone. So all the code that comes after this statement will be executed all the way until it hits the end statement. So let's recap. The if statement checks whether the condition that follows it evaluates to true or false. If the output of the if statement is true, then all that comes after it will be executed by Helm. If false, the block that follows it is disregarded. The if block must have an end statement that indicates the execution boundary. Notice that we have the dash character before and after the if statement, and before the end statement. As mentioned before, the dash removes any whitespace, including the newline character. If it comes at the start of the code, any whitespace preceding the generated output will be removed, while if it comes at the end of the code, any whitespace after the generated output will be removed. In this case, the if condition line itself will not produce any output, but there might be a new line before the if line, and we need to make sure that the manifest starts exactly at the top of the file. Additionally, there would otherwise be a new line left in place of this if statement line. For the end statement, we are only interested in removing the newline that would be generated in place of the end line. But we don't want to remove the newline that may come after the end statement, as we may still want to add some content to the template.
If conditions are used to examine whether a value is true or false, but they can also be used to evaluate more complex scenarios. Let's open the NOTES.txt file and remove any content there. Assume that we want to test not whether a parameter is true or false, but whether this parameter has a specific value. So let's have a look at the values.yaml file, and let's check whether the service type is ClusterIP. We use the if condition as follows. In other programming languages, you might write this expression as: if .Values.service.type == "ClusterIP". But in Helm templates, the equality check is done through a function called eq, and as we already know by now, Helm functions take their parameters on the same line, separated by spaces. So the line becomes: if eq .Values.service.type "ClusterIP". It might look a little strange at first, but if you bear in mind that eq is just a function rather than an operator, you will get the idea. Now let's decide what we will output if the service type is ClusterIP. Let's write: "The service type is ClusterIP". End the block and run the dry-run command. And as you can see, we have our phrase printed. The if conditional also accepts the else and else if statements to make even more complex decisions. For example: else if .Values.service.type is NodePort, then "The service is of type NodePort"; else, if everything else fails, the service type is probably a LoadBalancer. Let's examine a few other equality and logic functions. For example, ne is "not equal". Run the dry-run command, and we no longer have our phrase, since the if statement evaluated to false. There are other useful functions that cover other use cases, like gt for greater than, lt for less than, ge for greater than or equal, le for less than or equal, and, or, not, and fail. The and and or functions are logical operators, but sometimes they may get tricky. Let's have an example.
Suppose that you want to check if the service type is ClusterIP and if the ingress is enabled. We can use the and function as follows: eq .Values.service.type "ClusterIP", then .Values.ingress.enabled. But this will not work, because we need to evaluate the output of the eq function first before passing that value to the and function as its first argument. So to make this work as we require, we place a pair of parentheses around the eq function. Now, because the ingress is not enabled by default in the values file, the output of this and function will be false and the phrase will not get printed. The same concept holds for the or function. Now the phrase will be printed, since the or function returns true if either of its arguments is true. Next we have the fail function. This is particularly useful when you need to ensure that the user has supplied some value or has done some required action. Let's have an example. Assume that your chart needs a value defined, called important, and you don't want chart execution to continue if the user did not supply this value. So we write our if statement as follows: if not .Values.important. Here we are using the not function, which returns true if its argument is false; empty or undefined values are regarded as false. Then we use the fail function with a friendly message to the user indicating why we stopped chart execution: "Please provide the important value". And we close the block with the end statement. Let's execute this chart. And because we don't have a value in our values file called important, the chart execution is aborted and the message is output to the user. Perhaps a more common usage for the fail function is to ensure that the cluster is running a supported Kubernetes version. For example: if lt .Capabilities.KubeVersion.Minor 23, fail "This chart requires Kubernetes version 1.23 or higher".
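As a sketch, the version guard just described would look like this in a template:

```yaml
{{- if lt (int .Capabilities.KubeVersion.Minor) 23 }}
{{- fail "This chart requires Kubernetes version 1.23 or higher" }}
{{- end }}
```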
We've previously discussed the .Capabilities built-in object. KubeVersion provides the major and minor version numbers to make them easy to compare. So, since the Kubernetes major version is always 1, we are examining the minor version, and we are requiring that the target cluster be 1.23 or higher. However, Capabilities.KubeVersion.Minor is a string value, so it cannot be passed to the lt function as-is, since lt expects a number. So we need to convert this value to an integer through the int function, surround it with parentheses, and pass the resulting output to the lt function. Now run the dry-run command. Since Kubernetes version 1.23 was not out yet at the time of this recording, the chart execution fails. And that brings us to the end of this lecture. See you in the next one.

19. Helm looping with "range": Hello everyone. More often than not, you need to supply a number of values to a Helm chart and you want to loop over them. Let's have an example. In our service template, we are instructing the service to listen on port 80 and forward traffic to the container's port 80. But what if we want the service to also listen for HTTPS connections on port 443? One way of doing this is by simply adding another item to the ports list and taking the required parameters from the values file. So let's do that. First, we define our values in the values file: ssl_port is 443, ssl_target_port is 443, ssl_name is https. Then, in the service.yaml file, we add our new port item as follows: port, between double curly braces as usual, is .Values.service.ssl_port; then the targetPort, again between curly braces, is .Values.service.ssl_target_port; then the name is .Values.service.ssl_name. But there are a couple of drawbacks to this approach. First, whenever we need a new port, we need to uniquely identify it in the values file. So for the SSL port, we prefixed all the values with ssl.
Now, if we want our application to also listen on port 8080 so admins can log in, we will need to create a new bunch of values prefixed with admin_something. Second, we will need to manually add a new item to the ports list in the service template, reading the required parameters from the values file. A more elegant way of addressing the multi-port requirement is to use a list in the values file and loop over it in the service template. Let's see. In the values file, we create a list of ports. So our list is called ports, and it contains an item with number 80, target port 80, and name http. The second item has port number 443, target port 443, and name https. Notice that we can easily reuse the same names, so we no longer need to prefix our parameters to make them unique. In the service template, we remove the old content and add the following: range .Values.service.ports. Range is a Helm function that is used to loop over lists and dictionaries (by the way, a dictionary in Go is called a map). We pass our ports list as an argument to the range function. Now we add only one port item: port is .number. Notice that our scope changed, so instead of typing .Values.service.ports.number, we just use the dot; the dot here refers to anything that is a child of the current port item. Protocol is TCP; this doesn't change. The target port: again, we just use the dot, since this is a child of the port item. And finally, the name: .name. Now, if we run the dry-run command and have a look at the generated service manifest, we can see that the range function looped over the ports list and created the required ports with names and numbers. Now, let's say that our user wanted to add two more ports: one for administrative access and another for operations. So we add a new item in the ports list with number 8080, target port 8080, and name admin, and a fourth item with number 8008, target port 8008, and name ops.
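As a sketch, the values list and the loop in the service template look like this (the lowercase key names are my assumption):

```yaml
# values.yaml
service:
  ports:
    - number: 80
      targetPort: 80
      name: http
    - number: 443
      targetPort: 443
      name: https

# templates/service.yaml excerpt
ports:
  {{- range .Values.service.ports }}
  - port: {{ .number }}
    protocol: TCP
    targetPort: {{ .targetPort }}
    name: {{ .name }}
  {{- end }}
```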
And this is the only change that we need to make for our service to listen on the four ports that we've defined. Let's double-check by running the dry-run command. And as you can see, the generated service manifest contains all our ports. We didn't need to make any changes to the service template, only to the supplied values, and this is the recommended way of working with Helm charts: never touch the templates, and make all your customizations in the values file or through the command-line arguments. Speaking of which, what if we wanted to change the port number of the first port through the --set command-line flag? Let's see: helm install myapp . --dry-run --set. You need to supply the path to the value you need to change: service.ports, then, between square brackets, 0. Remember, service.ports is an array, so we use the index to access a particular item. In our case, we need to modify the first item in the list, so its index is 0. Then, to access a child item, we use the dot notation as usual. So we need to change the number to be equal to 8000. And let's run the dry-run command. Well, if you're expecting that we'd have our four ports as before, with the first one listening on port 8000, I'm sorry to disappoint you, because this is not how Helm works. In a previous lecture, we mentioned that the --set command-line flag overrides any values specified in the values file. So, since we referred to the service.ports array, we overrode the service.ports list in the values file, effectively replacing it with another array that contains one item, with only the port number defined. The rest of the values are empty, as you can see. So if we really want to change the port number through the command line, we need to supply the entire array again, like so. Now, if we run the dry-run command, we'll see that we have our desired output.
Obviously, overriding entire arrays through --set like this is more geared towards being placed in a shell script or a program, where those values are taken from the user or from another program and automatically placed in the command that runs Helm. So the range function allows you to loop over all the items of a list or a dictionary and access their child values. But what if you want to refer to a specific element in the list? For example, let's assume that we want to add an annotation to the service that carries the port number of the first port defined in the service; the annotation would be called main-port. So we need to know the port number of only the first port item in the list. To access a specific item in a list, we use the index function. It takes two arguments: the first is the list itself, and the second is the index number of the item. Since we are interested in getting the details of the first item, we pass 0. So far, we are getting the entire contents of the first item in the ports list, which includes the number, the name, and the target port. But we are only interested in the port number, so we need to use the dot notation: we surround the entire function in parentheses and add .number. Finally, we put the whole line between double quotes to follow YAML rules. And if we run the dry-run command, we see that we have the correct port number added to the annotation. Now, we've used .Values.service.ports in more than one location. If we were to change the name of the value, for example to .Values.service.networks, we'd have to make this change in all the places that reference this value. Instead, we can assign .Values.service.ports to a variable and use that. So, at the top of the file, add the following line between double curly braces and between dashes: dollar sign ports, colon-equals, .Values.service.ports. In Helm, variables start with a dollar sign.
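Combining the variable assignment and the index lookup, the top of the service template would look roughly like this:

```yaml
{{- $ports := .Values.service.ports -}}
metadata:
  annotations:
    main-port: "{{ (index $ports 0).number }}"
```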
When we want to assign a value to a variable for the first time, we use the colon-equals sign. So now we have our variable created, and we can replace .Values.service.ports with $ports in all the places where it was mentioned. Let's double-check that the chart still works as expected by running the dry-run command. So the range function loops through the list, and the index function allows us to access an individual item outside the loop. But what if you want to access the individual item while you are looping? Let's assume that, due to some information security recommendation, we are not allowed to use port 8080 for the admin portal. So we need to examine each port in the loop, and if we find port 8080 assigned to admin, we stop chart execution. We start by making a little change to the range loop: we define a variable called $key and another called $value and assign both to the list. Once we do that, we immediately have access to the index of the item in the loop through the $key variable, and to its value through the $value variable. We start our if condition as follows: if eq $value.number 8080. So if we find a port number set to 8080, execute whatever comes in the if statement's body. But we need to convert the value to an integer to be able to compare it correctly, so we pipe $value.number to the int function and surround the statement with a pair of parentheses. We also need to check not only whether the port number is set to 8080, but also whether the user wants to use this port for admin access. So we add a second condition to the if statement, which is whether $value.name is admin, between parentheses. We tie both conditions together with the and function, so that it can be read as follows: while looping through the port list supplied by the user, if you come across a port with the number 8080 assigned to the admin port name, then...
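Assembled, the guard we are building looks like this as a sketch (including the fail call defined next):

```yaml
{{- range $key, $value := $ports }}
{{- if and (eq ($value.number | int) 8080) (eq $value.name "admin") }}
{{- fail "Please supply a different port number for admin access other than 8080" }}
{{- end }}
{{- end }}
```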
And let's define what happens when the if condition evaluates to true: fail "Please supply a different port number for admin access, other than 8080". And we end the block. Let's run the command again, and sure enough, we have our error message. That brings us to the end of this lecture. See you in the next one.

20. The helper.tpl file and named templates: Hello everyone. In a previous lecture, we mentioned the _helpers.tpl file, and we said that it contains templates that can be called from other templates. In this lecture, we'll discuss these templates, more commonly known as named templates, and learn how we can make use of them. A named template is not rendered by Helm on its own, so we cannot use a named template, for example, to define a deployment or an ingress resource. Named templates are used to encapsulate complex logic that is needed in multiple places throughout the chart. Think of them as global functions that can be called anywhere in the chart. Let's start by examining the named templates that were already generated for us by the helm create command. The "myapp.name" here is the title of the named template. The template name is prefixed with the chart name, but this is not a hard requirement. However, if our chart also includes subcharts which contain their own named templates, naming collisions may occur and you won't know which template you are calling. So a good practice is to prefix the named template's name with the chart name. Subcharts will be discussed later in this class. You start a named template block with the define keyword and close it with the end keyword. The "myapp.name" template is used to override the chart name if the user wants to. If we have a look at the code, we'll see that it uses the default function to check whether the user has supplied a value to override the chart name through the nameOverride parameter. If that value is non-empty, then it will be used instead of the chart name whenever the "myapp.name" named template is called. Let's have an example.
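For reference, the name template being described typically looks like this in the _helpers.tpl that helm create scaffolds (shown here as a sketch with our chart name):

```yaml
{{- define "myapp.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
```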
In the NOTES.txt file, let's add, between curly braces: include "myapp.name", followed by the dot. Before we execute the dry-run command, we need to remove the if condition from the last lecture, so that chart execution is not aborted. Now execute the dry-run command. As you can see, we have the chart name printed. But if we set the nameOverride to something else and run the command again, we see that it has the modified value. Let's discuss the next named template. This one is slightly more complex: it is used to generate the fully qualified name for the chart. A fully qualified name is used to name the different Kubernetes resources in a way that prevents any naming collisions. Let's see. The named template starts by examining whether the user wishes to override the chart's full name. In that case, it is up to the user to select a unique name. Let's go to our NOTES.txt file and change the named template that we call to "myapp.fullname". Run the dry-run command, supplying a value for fullnameOverride. As you can see, we have our name printed. But if we also check the names of the generated manifests, you'll see that the name that we supplied is used to name the resources. If we have a look at the code block again, we'll see that the named template provides more options for how you want to name your chart in a fully qualified way. So, if the user did not provide the fullnameOverride value, then it checks whether there is a nameOverride value provided; otherwise, it uses the chart name. It assigns the result to a variable called $name so that it can be used further down the block. I'll be referring to $name from now on; remember, it stores either the chart name or the nameOverride value, if the user has supplied it. If that name is contained in the release name, then we can just use the release name as a fully qualified name for the chart. The contains function comes from the Sprig library that we mentioned earlier in this class.
It works as follows: if the first argument string is included in the second argument, the function returns true. For example, if the release name is myapp-01 and $name is myapp, then the function returns true, since myapp is a substring of myapp-01. Let's run the dry-run command again and see the result. As you can see, the named template used the release name as the chart's fully qualified name, and it was used to name the generated manifests. But what if $name was not part of the release name? In this case, the if block falls back to the last action, which is the release name, followed by a hyphen, then the $name variable. Notice that in all cases the resulting name must be truncated to the first 63 characters, and any trailing hyphens are removed. This is required since some Kubernetes resources enforce this limit; it also follows DNS naming rules. So let's choose a release name that is totally different from the $name variable. And as you can see, we have the $name and the release name separated by a hyphen. To further stress this concept, let's assume that the user supplied the nameOverride value and it was a substring of the release name. As we can see, the named template used the release name as the fully qualified name for the chart, since it contains the $name variable. Another named template is called "myapp.chart", which is used to return the name of the chart along with its version. Notice the use of the replace function here to replace any plus characters that might be in the version with underscores, and also the trunc and trimSuffix functions to cap the output at 63 characters and remove any trailing hyphens. Next come two more important named templates that are used in many places in the generated manifests. The first one is the labels, and the second one is the selector labels.
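Putting the branches described above together, the fullname template, roughly as helm create generates it, can be sketched as:

```yaml
{{/* Create a fully qualified app name, truncated to 63 chars (DNS limit). */}}
{{- define "myapp.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
```

Every branch pipes through `trunc 63 | trimSuffix "-"`, which is the truncation rule mentioned above.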
Labels are just a way to add some metadata to the resource about which Helm chart owns it and the service that was used to generate the manifest, which is always Helm. But notice the use of the include function in a couple of places inside the labels named template. It is used to inject another named template in place, which is the selector labels. Helm makes a distinction between labels that are used to provide some metadata about the resource, and selector labels, which are used by objects that rely on labels to identify their child objects. For example, a Deployment uses labels to determine which pods it should manage. A Deployment manifest contains the selector labels in the spec part of the Deployment itself, and the same labels in the pod template, so that it automatically manages any pods that it creates. The same thing holds for a Service: a Service needs to know which pods it will route traffic to. This is done through the selector labels part. If we have a look at the selector labels, we'd see that Helm makes use of the myapp.name named template, which prints the chart name or the value supplied by the user. Then it also adds another label that bears the release name. So the result is a pair of labels that uniquely identify a pod in a cluster where more than one Helm chart may have been deployed. Finally, we have a named template for generating the service account name. At first glance, you may wonder why we need a named template specifically for creating a name for a service account; we didn't need one for the Service, the Deployment, or the Ingress. Well, the reason is that a service account, unlike other Kubernetes resources, needs to have a name that is known to the user. Service accounts are associated with permissions and access rights that, if misused, can cause damage to the cluster and the entire business. Accordingly, it is highly recommended that service accounts are correctly identified and managed for better security and access control.
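A sketch of the two label templates described above, close to what helm create emits, with selectorLabels included inside labels:

```yaml
{{/* Common metadata labels, which embed the selector labels below. */}}
{{- define "myapp.labels" -}}
helm.sh/chart: {{ include "myapp.chart" . }}
{{ include "myapp.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}

{{/* Selector labels: the stable pair used by Deployments and Services. */}}
{{- define "myapp.selectorLabels" -}}
app.kubernetes.io/name: {{ include "myapp.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
```

Only the two selectorLabels keys go into a Deployment's spec.selector and a Service's selector, since those must stay stable across upgrades.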
Back to our template. If the user wants to create a service account by setting serviceAccount.create to true, then the user has a chance to also supply the service account name as a value. Otherwise, the fullname named template will be used, the same as for other Kubernetes manifests. However, if the user does not want to specifically create and manage a service account, then a default one will be used. In fact, Kubernetes creates a service account for you with no permissions, and it's called "default". But the user still has a chance to change this name from default to something else. You may be asking now: if in all cases a service account will be created by Kubernetes, what is the use of the serviceAccount.create condition? It is there simply to differentiate between service accounts that are created by Kubernetes by default and the ones that the user intentionally creates as part of the chart deployment. We're not setting any RBAC permissions here; we're just naming the service accounts. But the naming convention will make it easier for whoever uses the cluster to know the type of service account that was created. So now we've covered the six named templates that were created for us as part of what the helm create command generated. It's time now to add and use our own named templates. If we have a look at the deployment template, we'd see that it uses an image pull policy from a supplied value. By default, it's set to IfNotPresent. If we check the Kubernetes documentation, we'd see that there are three possible values for the image pull policy: IfNotPresent, which means the kubelet pulls the image only if it is not present on the node as a result of having been pulled before; Always, which means that the image will always be pulled from the repo even if it is already present on the node; and Never, which prevents the kubelet from pulling the image at all from the repo. If the image is not already cached on the node,
the container will not start. For most use cases, the IfNotPresent policy is used, since it reduces the time the container needs to start by utilizing the already existing image on the node instead of pulling it again over the network. However, it has some caveats. For example, if you pull your images from a private repo that needs authentication, then other applications in the same cluster do not need to authenticate to the registry, and they can just use your cached images as they like. For development environments that may be fine, since it's quite common that different applications share the same Kubernetes environment for cost-reduction purposes. But in production, it is recommended that you set the image pull policy to Always, so that containers always authenticate to the registry before attempting to use the images. Let's create a named template that we can use to automatically set the image pull policy to Always by default, unless the environment is not production. We start by explaining what this template does with a comment line. In Helm, comments start with a slash followed by an asterisk and end with an asterisk followed by a slash: "Set the image pull policy according to the type of the environment." Then we define our template. Let's name it imagePullPolicy and prefix it with the chart name as usual. Next, we define a variable, $environment. It is used to hold the value of the environment if supplied by the user. If it was empty or not supplied, then we default to production. Next, we start our if condition: if $environment is not production, then our output is IfNotPresent; otherwise, it's Always. And we end our block. So with this named template, we're confident that our image pull policy is set to Always by default, unless the user specifically mentions that it's a non-production environment.
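The steps just described can be sketched as the following named template (the template and value names follow the lecture's choices):

```yaml
{{/* Set the image pull policy according to the type of the environment. */}}
{{- define "myapp.imagePullPolicy" -}}
{{- $environment := default "production" .Values.environment -}}
{{- if ne $environment "production" -}}
IfNotPresent
{{- else -}}
Always
{{- end -}}
{{- end -}}
```

Because `default "production"` fills in an empty or missing value, the safe choice Always applies unless the user explicitly opts out with something like `--set environment=dev`.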
This way we prevent the policy from being accidentally changed to IfNotPresent, as the user must explicitly state the environment type to set it. Now let's move to the deployment template and the image pull policy part. We remove whatever gets supplied as a value, replacing it with our named template output: include "myapp.imagePullPolicy". Finally, and so that our chart is clear to the user, we remove the image pull policy from the values file, since it will no longer be used, and add the environment value. Now let's run our dry-run command. Notice that the image pull policy is now set to Always if we don't set an environment. Now let's explicitly set the environment to dev or something like that and run the command again. And we have our image pull policy set to IfNotPresent. That brings us to the end of this lecture, and see you in the next one. 21. Helm chart dependencies: Hello everyone. Throughout the previous sections, we developed a lot of skills that enable us to use community Helm charts and customize them to fit our requirements. We also learned how to create our very own chart from scratch using the helm create command. In this section and the ones that follow, we'll start exploring real-world situations where more than one Helm feature is used. And we start with chart dependencies. So we have our application, which displays the current weather conditions in the city that the user chooses. Most web applications add a layer of cache to improve performance. It works as follows: the first time a client requests the data, it is brought from the backend API or database. But once this data is served to the client, it is also cached. If the same client or another one requests the same data, then it will be served from the cache instead of having to initiate a new request to the backend service. The main advantage of using a cache is to increase performance. Additionally, some APIs charge their users by the number of requests they make.
So caching the data for some time and using it to serve requests reduces costs. Among the most widely used caching services is Redis, so let's install one for our application. Version 2.1 of our application requires the presence of a Redis server so that it can cache the weather data for later responses. So now we know that we want Redis deployed alongside our app. One way of doing this is to search our repos for a Redis chart. Fortunately, the Bitnami repo that we've added to our system already includes a Redis chart. Now we want to deploy that Redis chart. One way of doing this is by running helm install redis-01 bitnami/redis. While this approach will work without any issues, it has some drawbacks. First, we need a way to inform our chart users that they also need to deploy the Redis chart prior to deploying our app's chart. Second, after Redis is installed, we need a way to inform our app's chart of the values that it needs to contact Redis, for example, the hostname and the port. Fortunately, Helm addresses this requirement through chart dependencies. In the Chart.yaml file, we can specify any chart dependencies that need to also be installed when this chart is deployed. The dependencies field is a list of items; each item requires the name of the chart, the version, and the repository where it lives. So we specify the name redis. When it comes to the chart version, Helm has provided a lot of flexibility to let you choose the chart version that fulfills your needs. Assume that we want Redis version 6.2.5, which is the latest at the time of this recording. Let's run helm search repo redis --versions. We can see that Redis version 6.2.5 is available in the bitnami/redis chart versions 14.8.3 through 15.3.2. Accordingly, we instruct the chart dependency to fetch a chart version that is more than or equal to 15.0 but less than 16. As we've already mentioned, Helm uses the semantic versioning scheme.
So version 16 would be a major version upgrade, which might include breaking changes. Accordingly, we want to stay within the boundaries of version 15. Helm has a few shorthand symbols to make it easier for you to select a range of chart versions. For example, caret 15.x means anything from 15 up to, but not including, 16. So the caret sign rounds up to the latest within the major version range. If we want to constrain the version range to a minor version instead, we can use the tilde sign. For example, tilde 15.3.x means any version more than or equal to 15.3 and less than 15.4. In other words, we're constraining the version to stay within minor version number 3. Next comes the repository: we add the URL of our Bitnami repo. Save the file. Now we run helm dependency update, passing in the location of the chart. What this command does is grab the latest version of the dependent chart, based on the version range that we've specified, and add it to the charts subdirectory. If you notice, we also have a Chart.lock file created. This file ensures that the specific version of the dependent chart will be used. If you've worked with JavaScript before, this is like the package-lock file. If we package our chart now, the dependent subchart will automatically be included in the resulting archive file, since the dependent charts are downloaded and placed in the charts subdirectory. If for any reason the charts directory is empty, but you have the Chart.lock file available, then Helm can use this file to rebuild the dependent charts. Let's try that. So we delete the Redis archive from the charts directory. Then we run helm dependency build. Now, in a few seconds, we can find that we have the Redis chart back in its place. Helm used the Chart.lock file to find and download the correct version of the dependent chart. Before attempting to use our newest app version, we need to make a couple of changes to the Chart.yaml file.
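The dependency entry described above might look like this in Chart.yaml (the repository URL is the public Bitnami one; adjust to your setup):

```yaml
# Chart.yaml of the parent chart
dependencies:
  - name: redis
    version: ^15.0.0          # any 15.x release: >= 15.0.0 and < 16.0.0
    repository: https://charts.bitnami.com/bitnami
```

After saving this, `helm dependency update .` resolves the range, downloads the matching .tgz into charts/, and writes Chart.lock so that `helm dependency build .` can reproduce the exact same version later.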
First, we need to upgrade the version of the application to the one that actually uses Redis as a caching layer. That would be version 2.1; as we already know by now, this number controls the tag of the image that will be pulled from the repo. The second change we need to make is to also update the chart version, since it will be serving a new application version. It would be 0.2, since we consider this a major change. Let's recap what we are going to do now. We are upgrading our chart release to use the new version of our application, which makes use of a caching layer serviced by Redis. To make things work, we used the Bitnami Redis chart as a dependent chart to this one; it will be installed as part of the chart deployment. But before attempting to upgrade our release, we need to address an important concern: what if we want to pass parameter values to the child Redis chart? Let's have a look at the Redis chart documentation on the Artifact Hub and see the different parameters that are available to us. We are particularly interested in the auth.enabled parameter. Enabled by default, it instructs Redis to require authentication before serving requests. This means that if we use this chart as is, we are forced to set a password to be used by Redis, and to also pass this password to our applications so that they can authenticate to Redis. For production environments, a password should definitely be set and protected in some safe place. But for development environments like this one, we can get away without using a password. To do this, we need to pass the auth.enabled parameter to the Redis subchart and set it to false. To pass a parameter from a parent chart to a child one, you simply need to prefix it with the child chart's name. So in our example, we can pass this parameter on the command line as --set redis.auth.enabled=false,
or through the values file, if we choose, by adding redis, then auth, then enabled: false. Since we need this to be a permanent value, we'll keep it in the values file. Another thing we need to do before we start the upgrade process is to modify how our config file is read into the ConfigMap. In lecture 16, we used the .Files.Get method to read the config file and add it automatically to the ConfigMap. As we add the contents of the file to the ConfigMap, we can dynamically insert the Redis service name into the Redis host part of the configuration. Our configuration file now includes a part for the Redis parameters. We need the hostname to be dynamically added by the chart, so we add .Release.Name followed by a dash, then redis, then dash master. This scheme is the one used by Helm to name the subchart's resources. Save the file. Now we are ready to update our release. So we run helm upgrade myapp . In a few seconds, we should see our pods started. Also notice the presence of Redis pods that are starting as well. Let's test our changes by activating the port-forward command on the myapp pod. And from a browser, you can go to localhost:8080/alexandria, for example, and view the current weather conditions in Alexandria, Egypt. If we refresh the page, we should see the same output. Obviously, this is no different from the results that we've seen in the previous lectures when testing our app. But that's mainly because we are testing the app from the same machine on which it is installed, so the performance boost won't be noticed. But if we were reaching this app remotely over the network, we'd notice a much faster response when we refresh the page, since we'd be served from the cache instead. If you want to be 100% sure that Redis was used, let's quickly open a redis-cli session to the Redis pod and run KEYS asterisk. We have /alexandria as the key.
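The two changes above can be sketched as follows. In the parent chart's values.yaml, a value is passed down to the subchart by nesting it under the subchart's name (auth.enabled matches the Bitnami Redis chart's documented parameter):

```yaml
# values.yaml of the parent chart: everything under "redis"
# is forwarded to the Redis subchart.
redis:
  auth:
    enabled: false
```

And in the config file that gets rendered into the ConfigMap, the Redis host can be built as `{{ .Release.Name }}-redis-master`, which matches the naming scheme Helm applies to the subchart's master Service.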
If we run GET /alexandria, we'd find the JSON response that was fetched from the external API cached here. So as you can see, with very few changes in a couple of Helm files, we were able to upgrade our deployment to one that uses a caching layer, and pretty easily deploy a Redis server to handle that caching. One last thing before I end this lecture. We've already seen how a parent chart can pass values to a child chart: it can be done by simply prefixing the value name with the subchart name. But sometimes we may want to pass a value from the child chart to the parent one. Let's have a second look at the available parameters defined in the Redis chart documentation, and specifically the master configuration part. All those values are defined under a property called master. Assuming that we want our parent chart to have access to all of the master Redis configuration, we do the following. Go to Chart.yaml, and in the dependencies part we add a special parameter called import-values. This is a list of items, and each item defines a child and a parent. The child value refers to a top-level property in the subchart's values file; in our case, the top-level property is master. We also need to provide a parent property that we can use to access the child values of the property that we've just imported. If this is a little confusing for you right now, don't worry, it will get clear when we see the example. So let's see how we can gain access to the child values that are defined under master, for example, the container port. In the NOTES.txt file, we add the following: .Values.imported.containerPort. And we run our dry-run command. As we can see, the port number was printed as a result of our statement. Let's open the NOTES.txt again and explain what happened here. We start with .Values as usual. Then we need to access the containerPort, which is a child of the master value in the subchart.
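The import-values mechanism just described might look like this in the parent Chart.yaml (the parent property name "imported" is the lecture's choice):

```yaml
# Chart.yaml of the parent chart
dependencies:
  - name: redis
    version: ^15.0.0
    repository: https://charts.bitnami.com/bitnami
    import-values:
      - child: master        # top-level key in the subchart's values.yaml
        parent: imported     # name under which the parent exposes it
```

With this in place, everything under the subchart's `master` key becomes readable in the parent as `.Values.imported.*`.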
But we have imported this value from the child chart into a value of our own called imported. Hence, we are using the imported value, which can be considered a shortcut to the master property's values. Finally, notice that we didn't add an actual value in the values file of the parent called imported, and didn't pass it in the --set command-line flag. We don't need to do this: the value is automatically added to the chart values as soon as we import it. I hope it's clear for you now. And that brings us to the end of this lecture, and see you in the next one. 22. Helm library charts: Hello everyone. In a previous lecture, we discussed the _helpers.tpl file. This file is not rendered by Helm; it does not generate a Kubernetes manifest. It contains named templates that can be called in other templates to provide common functionality. But sometimes this common code is so useful that it needs to be called even in other charts. Let's have an example. In our _helpers.tpl file, we introduced a named template that was used to automatically set the image pull policy based on the environment value. What if we need the same named template in another chart? One way of doing this is obviously copying and pasting the code from this chart's _helpers.tpl file into that of the new one. But many times, the helper code is much too complex to be just copied and pasted. Helm 3 introduced library charts. A library chart is simply a Helm chart that is only used for supplying named templates that can later be used in other charts. In programming languages, there is the concept of shared libraries or modules: code that provides common functionality that can be used in multiple projects. A Helm library chart serves the same concept. So in this video, let's create a library chart that will make our named template available to all charts that need it. A library chart is created the same way other charts are.
We use the helm create command to create a boilerplate chart. So far, we've been working in a directory called mycharts/myapp. Inside the mycharts directory, we use the helm create libpolicy command to create a new skeleton chart called libpolicy. The next step is to delete all contents of the templates directory. Additionally, we remove the values.yaml file, as library charts do not require it either. Next, we open the Chart.yaml file and change the chart type from application, which is the default, to library. Let's have a look at what we have here: a Chart.yaml file, a charts directory for any dependent charts, and a templates directory for the named templates that we are about to define. Let's create a file under the templates directory where we'll define our named templates. In all the charts that we've seen so far, this file is called _helpers.tpl. However, we are not constrained to this filename. In fact, any file that starts with an underscore and is placed in the templates directory is not rendered by Helm; instead, it can contain named templates. So let's call our file _imagepullpolicy.tpl and cut and paste the contents of the imagePullPolicy named template from our _helpers.tpl file here. For a quick recap, this code is used to automatically set the image pull policy when it is called in the deployment template, based on a value called environment. If the environment value is empty, undefined, or set to production, the image pull policy is set to Always for security reasons. If the environment is explicitly set to another value, like dev or staging, the image pull policy is set to IfNotPresent. We rename the template to libpolicy.imagePullPolicy. Save the file. Now let's switch back to myapp, and in the Chart.yaml file we need to add a new dependency item with the following details: name libpolicy, version 0.1, and the repository. And here we have one of two options.
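After these edits, the library chart's Chart.yaml can be sketched as follows (names and version follow the lecture's choices):

```yaml
# Chart.yaml of the library chart
apiVersion: v2
name: libpolicy
description: Shared named templates for our charts
type: library          # changed from the default "application"
version: 0.1.0
```

Setting `type: library` tells Helm this chart cannot be installed on its own; it only exists to be pulled in as a dependency.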
If our library chart is hosted on a web server, then we just add the URL the same way we did with other charts. In our case, the chart is hosted on our local file system, so we use the file:// notation to provide the URL to the chart. Notice that we are providing the relative path to the directory. Next, we run helm dependency update, followed by helm dependency build. If we check the charts directory, we'd find that we have our libpolicy chart already packaged and included with our chart. Finally, we need to change the name of the named template that our deployment calls to refer to the one provided by the library chart. Save the file and run the dry-run command to see the generated manifests. If we check the Deployment resource, we'd find that the image pull policy has automatically been set to Always. Let's run the command again, providing an environment value of dev. And we have the image pull policy set to IfNotPresent. So if we upload this library chart to a shared location, like a web server or a network share, we can very easily utilize our imagePullPolicy named template without needing to redefine it in our chart. Finally, let's update the version of our chart, since we've added new functionality, which is using library charts. And that brings us to the end of this lecture. See you in the next one. 23. Build your own Helm repository: Hello everyone. As we started this course, we used the Artifact Hub to find and deploy our Nginx chart. Artifact Hub is just a curated list of Helm and other product repositories, served in one place for ease of access. Let's have a look at the Bitnami repository, from where we downloaded our Nginx and, later on, our Redis charts. Let's try to access the Bitnami URL from our browser. There is no default webpage to be displayed, so the server denied our access. But if we add /index.yaml to the URL, we will find that the server responds with the file and we can download it. Let's do that.
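The local file-system option described above can be sketched as a dependency entry in the parent chart's Chart.yaml, assuming the library chart sits next to it in the same mycharts directory:

```yaml
# Chart.yaml of the myapp chart
dependencies:
  - name: libpolicy
    version: 0.1.0
    repository: file://../libpolicy   # relative path on the local file system
```

And in the deployment template, the call becomes `{{ include "libpolicy.imagePullPolicy" . }}` instead of the chart's own helper name.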
Every Helm repo must contain an index.yaml file. It is a well-structured file that contains lots of data about the various charts that this repo holds. Let's have a look. The file starts with the apiVersion, then the entries list, which contains the remaining contents of the file. Bitnami is a well-known library that contains many charts, so this file is thousands of lines long. Let's search, for example, for nginx. Some of the important parameters to notice here are the name, the version, the appVersion, and the description, and, most importantly, the URL where the packaged chart is located. Notice that the chart may or may not be located on the same server as the index.yaml, so this URL may point to an S3 bucket, for example, or any other web location. But in most cases, the charts in the index.yaml file coexist on the same server. When we run one of the Helm repo commands, like helm search repo for example, this file is read, and the information it holds is used to display the command output for you. When we run helm install nginx-01 bitnami/nginx, since this repo is already added to our system, this URL is used to physically download the chart and deploy it. Now, what if we wanted to create our own Helm repository to serve our charts? A Helm repository consists of three components: one or more charts, an index.yaml file containing the data about the charts, and a web server. Let's create a directory that will hold our repository and call it myrepo. Now, in the myapp directory, we run the helm package command again, this time passing the --destination command-line flag, followed by the relative or absolute path to myrepo. Now myrepo contains our packaged chart. Next, we need to create the index.yaml file. It's just a text file that you can create with any text-editing tool, but Helm already has a subcommand for this: helm repo index, then the relative or absolute path to the repo directory.
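For illustration, a minimal index.yaml entry has roughly this shape (fields abbreviated; real files also carry digests, timestamps, and more metadata, and the URL here is just a local-host example):

```yaml
apiVersion: v1
entries:
  myapp:
    - name: myapp
      version: 0.1.0
      appVersion: "2.0"
      description: Displays current weather conditions for a city
      urls:
        - http://localhost/myapp-0.1.0.tgz
```

Each chart name maps to a list, one item per published version, which is how `helm search repo --versions` can show every version at once.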
An index.yaml file has been created for us in the myrepo directory. Let's have a look at what has been generated for us. We have our chart name and metadata, like the name, application version, chart version, description, and so on. Now we have a chart and an index file; let's create a web server. When it comes to deploying a web server, there are almost unlimited options for us: from using already available software like the Apache web server, Nginx, or others, to coding our own web server with a modern language like Python, JavaScript, Go, or others, to using one of the various available online services like cloud offerings. I think the simplest and fastest way to get a web server up and running is to use the Apache Docker image. So from inside the myrepo directory, we run docker run -d -p 80:80 -v, then the current working directory, a colon, and /usr/local/apache2/htdocs, then httpd, which is the image name. The command starts an Apache web server in a Docker container, exposing port 80 for incoming connections from the host, and mounts the current working directory into the container so that Apache serves its files from our repository. Let's double-check that we have our index.yaml file and a packaged chart by navigating to localhost. And we have our repository ready. We can start using it immediately by adding it to the system the same way we did before with the Bitnami repo: helm repo add, let's call it myrepo, http://localhost. Since it's available, it will be subject to our Helm repo commands, for example helm search repo myapp. We have myapp with the description and version. Let's remove the existing myapp Helm release and install a myapp release from our newly created repo: helm install myapp-01 myrepo/myapp. And we have our Helm release deployed. Now, what if we have a newer version of our chart and we need it to be included in the repo as well?
Let's update the chart version of myapp in the Chart.yaml file, then repackage our chart again, passing in the destination directory to be myrepo. The same command that created the index.yaml file can be used again; it will recreate the index file with all the charts that it can find in the repo directory. Next, we need to run helm repo update to allow Helm to recognize the changes. And if we run helm search repo myapp --versions, we can see that the new version now shows in the output, and we can use it. In some situations, though, the index.yaml file is not located in the same directory as the charts, so running helm repo index will overwrite the existing index file. Let's have an example. Assume that we are using a CI/CD pipeline to package our chart and add it to a global index.yaml file. Let's simulate the situation by creating a directory called job, where the CI/CD tool will place the packaged Helm chart. The CI/CD tool will generate a packaged Helm chart, and the directory will also have an index.yaml file, which would typically have been pulled from a Git repository or some artifact store. Now, if we run helm repo index, the existing file will be overwritten, and it will only contain the newer version of myapp. To overcome this situation, we first rename our original index.yaml file to something else, say index-original.yaml. Now let's see what happens if we run helm repo index: the index.yaml file is overwritten, and now it only contains one version of the chart. But if we run helm repo index . --merge index-original.yaml, the new chart will be added to the index file as it should, and the contents of the index-original.yaml file will also be merged into the index.yaml file. No data is lost. And that brings us to the end of this lecture. See you in the next one. 24. Hosting your Helm repo on a web server: Hello everyone. This lecture is a bonus one.
It is not required to learn Helm, and you can freely skip it if you want to. In the previous lecture, we learned how we can create a Helm repository. We used a local web server to access the repo. However, if we want to share our chart with the rest of the world, we need to place it on a publicly accessible web server. I will use S3 here as a web server, since it has the simplest setup and is among the cheapest options if you want to run it at scale. An important note before proceeding: this video explains how to use S3 as a web server on which we can host a Helm repository. When we access the bucket, we will be accessing it over the regular HTTP protocol. If, however, you want to use S3 as object storage through the Helm S3 plugin, that will be discussed in a later video in this class. To follow along with me in this video, you need to have a working AWS account, ideally with admin access. The first step we need to do is create the bucket using the aws s3 mb command; we call it mycharts. Then we need to convert this bucket to a web server, so we run aws s3 website, then the bucket name, and we specify the index and error documents. Finally, we need to apply the required IAM policy to allow this bucket to serve its content publicly. Now this bucket is a web server. Whatever files and directories you upload to this bucket will be publicly accessible. So we start uploading the contents of the myrepo directory: the index.yaml file and the packaged charts that we have. Let's open the browser and navigate to the web address of our bucket. AWS constructs the web address for buckets by adding s3-website, followed by the region, then amazonaws.com. Obviously, we are presented with the default error page; it says we didn't request a specific file. Let's add /index.yaml to the URL. And as you can see, the server is responding with the file. If we try to download one of the packaged charts, the server also sends a valid response.
Let's now use the helm repo add command to add this repo to our system. Let's call it my-web-charts. Let's ensure that it has been added, and if we search for myapp in the list of repos that we have, we find that it is correctly referenced by our newly added repo. So this was a bonus video in which we used S3 as a web server to host our Helm repository and make it publicly available to the community. That brings us to the end of this lecture, and see you in the next one. 25. Helm repo hosting on Chartmuseum: Hello everyone. This is the second bonus lecture in our class. In the previous lectures, we could host our charts on local and public web servers. However, as your charts grow larger in number, you will find it hard to manage them. For example, you need to access the remote web server whenever you want to add, delete, or update your charts, and whenever you make a change to the charts, you need to recreate and upload the index.yaml file. If you want to rely on cheaper storage for your charts other than the server disks, like one of the cloud storage offerings, you need to manually configure the web server to contact the external storage, which is not the easiest task. For robustness, you also need to enable authentication and TLS, which adds to the complexity and needs more time and effort, let alone having to maintain all of the above. ChartMuseum is an open source project that was created under the umbrella of Helm and written in Go. The ChartMuseum server provides an API interface through which you can upload or remove charts. It can be deployed as a standalone binary, in a Docker container, or even as a Kubernetes Helm chart of its own. The index.yaml file is automatically created and maintained for you whenever a change is made to the backend. ChartMuseum can use a number of storage options for a backend.
For example: the local file system, AWS S3 buckets, Google Cloud Storage, Microsoft Azure Blob Storage, Alibaba Cloud storage, and OpenStack object storage. In this video, we'll see how we can deploy ChartMuseum to our local server. We'll use local storage as a backend first, then we'll utilize an AWS S3 bucket as a cloud backend storage example. Communication to the server will be protected with TLS. Users will need to authenticate with a username and password to make changes to the repo, but everybody can download charts. So let's get started. First, let's see what we have: we have a couple of charts stored in the my-repo directory. Next, since we will be using TLS to secure communication between the users and the server, we need a certificate and a key. In a real-world scenario, you should have your own verified certificate authority and use it to issue and distribute server certificates and keys. For our lab, we'll create and use a self-signed certificate and key. The following command will automatically generate a self-signed certificate with one-year validity and its corresponding key. Now we're ready to try the ChartMuseum container. We'll be using Docker to deploy ChartMuseum, so we run the following command: docker run -d -p 8443:8443. You can choose whatever available port you want, but since we intend to use TLS, it's a good idea to select a port that contains 443 to indicate that you are using a secure connection. We pass -e PORT=8443 to instruct ChartMuseum to listen on port 8443, and -e STORAGE=local to define the type of backend storage that ChartMuseum will use, which is the local file system. Then we specify the root directory inside the container from which ChartMuseum will serve the charts; we set this to /charts. Next, we need to instruct ChartMuseum to use TLS, so we pass the TLS cert and key locations. We also need the server to authenticate users before they add or modify charts.
ChartMuseum supports basic authentication as well as bearer tokens. We will use basic authentication here, so we pass the required user and password using the BASIC_AUTH_USER and BASIC_AUTH_PASS environment variables. Then we mount the directory that contains our packaged charts to the /charts path on the container. Next, we mount the key and certificate from our host to the container. Then comes the image name: chartmuseum/chartmuseum. One last thing we need to do before running the container is pass the --auth-anonymous-get command line argument to the container. The purpose of this argument is to instruct ChartMuseum not to require a username and a password from users who download charts; credentials need to be provided only if a user wants to add or delete a chart. If you would rather have users provide credentials when they download charts as well, you can omit this argument. Now let's run the container and make sure that it is running in the background. The first thing we need to do is make sure that the program is healthy by issuing a GET request to the /health endpoint. We'll use curl here, but you can use any HTTP client of your own, like Postman for example. We pass -k to prevent curl from checking the server certificate, since we are using a self-signed one. Then we pass the URL to our ChartMuseum server, prefixed by https://, followed by /health. We know now that the server is running without issues. We'll be using curl throughout the rest of this lab, and the JSON output will be displayed on the command line. In order to display JSON in a more readable format, and possibly filter and manipulate the output, we'll use a nifty little tool called jq. Let's quickly install it. Now let's say that we want to list the charts that we have available in ChartMuseum. We issue a GET request to the /api/charts endpoint as follows, and to make the output easier to read, we repeat the command, piping the output to jq.
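The whole setup described above can be sketched as follows. The certificate subject, the credentials, and the host paths are placeholders; the environment variables and the anonymous-GET flag are the ones the lecture refers to:

```shell
# Generate a self-signed certificate with one-year validity (lab use only)
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem \
  -days 365 -nodes -subj "/CN=localhost"

# Run ChartMuseum with TLS, basic auth, and the local file system backend
docker run -d \
  -p 8443:8443 \
  -e PORT=8443 \
  -e STORAGE=local \
  -e STORAGE_LOCAL_ROOTDIR=/charts \
  -e TLS_CERT=/cert.pem \
  -e TLS_KEY=/key.pem \
  -e BASIC_AUTH_USER=admin \
  -e BASIC_AUTH_PASS=password \
  -v "$(pwd)/my-repo:/charts" \
  -v "$(pwd)/cert.pem:/cert.pem" \
  -v "$(pwd)/key.pem:/key.pem" \
  chartmuseum/chartmuseum:latest \
  --auth-anonymous-get

# Health check (-k skips certificate validation for our self-signed cert)
curl -k https://localhost:8443/health

# List the available charts, pretty-printed with jq
curl -k https://localhost:8443/api/charts | jq
```

Thanks to --auth-anonymous-get, the GET requests above need no credentials; only write operations will.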
Since we have jq handy, we can use it to filter the output down to exactly what we need. Repeat the previous command, passing the following to jq between single quotes: .myapp then double square brackets, since it's an array. We are interested in showing only the versions of the chart, so we add .version, and now we only have the versions of the chart that are stored in ChartMuseum. Now let's simulate that we want to upload a new version of myapp to ChartMuseum. First, let's create a packaged newer version by updating the chart version in the Chart.yaml file and running helm package. ChartMuseum accepts new charts as the body of a POST request to the /api/charts endpoint. We can make this HTTP call using curl as follows: curl -k --data-binary, then we pass in the file name of our packaged chart, preceded by the @ sign; -u to supply the user credentials; then we pass in the full URL to the endpoint. The server responds with saved: true. If you're running this process in a pipeline, you can easily inspect the JSON response from the server to determine whether the process was a success or a failure. Now let's double-check that the new chart version is available on the server by repeating the curl command. So we have a chart repository available for use; we need to add it to our repo list. We run helm repo add — let's name this repo chartmuseum — then we supply the URL https://localhost:8443. Notice that we are not specifying the /api/charts endpoint, but only the root URL. We also need to add --insecure-skip-tls-verify to the helm repo add command, since we are using a self-signed certificate; otherwise Helm will try to validate the certificate and it will fail. Run helm repo list to verify that we have the repo installed. I will also remove the my-web-charts repo from the previous lecture, since it's no longer available or required. And we run helm repo update.
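As a rough summary of the filter, upload, and register steps just described — the chart version in the filename and the credentials are illustrative assumptions:

```shell
# Show only the versions of the myapp chart, using a jq filter
curl -k https://localhost:8443/api/charts | jq '.myapp[].version'

# Package a new version (after bumping the version field in Chart.yaml)
helm package my-charts/myapp

# Upload the packaged chart as the body of a POST request;
# the @ sign tells curl to read the request body from the file
curl -k -u admin:password --data-binary "@myapp-0.2.2.tgz" \
  https://localhost:8443/api/charts

# Register the repository, skipping TLS verification for the self-signed cert
helm repo add chartmuseum https://localhost:8443 --insecure-skip-tls-verify
helm repo update
```

On a successful upload, the server answers with {"saved":true}, which a pipeline can check before proceeding.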
Now let's search for the myapp chart and verify its versions. As you can see, Helm listed it with all the versions that we have in the my-repo directory, in addition to the one that we just uploaded. Notice that we never had to create or update the index.yaml file, since this has already been done for us by ChartMuseum. Now let's assume that we want to remove one version of our chart. This can be done by sending a DELETE request to ChartMuseum; we need to specify the chart name and version in the URL too. Let's see: using curl, we run curl -k -u, and we supply our credentials, username and password; -X DELETE to specify the HTTP method that we need to use; then the URL of the endpoint. Again, we never needed to worry about the index.yaml file. Now if we run helm repo update and then search for the myapp chart, we see that the version that we deleted is now gone — once more without ever touching the index.yaml file. Finally, let's run ChartMuseum with an AWS S3 bucket as the backend instead of our local file system. If you want to follow with me, you'll need to have an AWS account with admin privileges on S3. You also need to have an S3 bucket that contains the packaged Helm charts that you want to make available. I've already created an S3 bucket on my AWS account and uploaded charts there. I will kill the existing container now, get my AWS credentials, and start a new container with the required settings: docker run -d -p 8443:8443, -e PORT=8443, -e STORAGE=amazon, -e STORAGE_AMazon_BUCKET — sorry, STORAGE_AMAZON_BUCKET — which is the bucket that we store our charts in, and -e STORAGE_AMAZON_PREFIX, which is in case you want to append a prefix string to objects inside your bucket. In our case we don't have that, so we supply an empty value.
Next, -e STORAGE_AMAZON_REGION, which is the region in AWS that we are currently using — it is us-east-1. -e TLS_CERT, where we pass in the cert.pem file; -e TLS_KEY, passing in our key.pem file; and -e BASIC_AUTH_USER and BASIC_AUTH_PASS. Then we supply our AWS access key ID and secret access key to be able to access S3 on our AWS account, and set the AWS default region to us-east-1. We mount -v cert.pem:/cert.pem and -v key.pem:/key.pem, pass the image chartmuseum/chartmuseum:latest, and add the --auth-anonymous-get command line flag so that users can download charts without having to authenticate themselves. And that was quite a long command indeed. Let's double-check that the container is running, check its health, and look at the available charts. We don't need to add or remove the Helm repo since it's listening on the same address; we just need to run helm repo update. Then we can search for our myapp chart versions, and we have the same output as before, only that the backend storage now is S3 instead of the local file system. I highly recommend that you have a look at the ChartMuseum documentation at chartmuseum.com/docs, since it contains examples of working with other storage providers like Google and Azure. Additionally, you can learn about using bearer tokens for authentication instead of the basic auth method. I also encourage you to use Redis as a caching server, since ChartMuseum by default stores the index.yaml file in memory, which is not the best option in large environments. So in this lecture we learned about ChartMuseum, an open source project that provides a Helm chart repository with an API interface, and how we can easily enable TLS and basic authentication on it. Additionally, we learned how we can use our local file system as well as AWS S3 as examples of integrating ChartMuseum with different storage providers. That brings us to the end of this lecture. Thanks for watching. 26.
Helm S3 plugin (AWS): Hello everyone. In this section we are going to start exploring some advanced features of Helm. In the previous lectures, we learned how we can use Helm as a Kubernetes package manager; in this section, we'll learn how we can extend it to make it even easier for us to work with and manage charts. We start with defining Helm plugins and how they work. A Helm plugin is a piece of code that is used to add more functionality to Helm. There are many community-offered plugins that can be found at github.com/topics/helm-plugin. If you want to install a plugin, you can do that using helm plugin install followed by the URL to the plugin. For example, let's install the S3 plugin. You can also select a specific version to be installed by adding --version to the end of the command with the required version. Once installed, you can double-check that it is part of your available plugins by running helm plugin list. A Helm plugin adds a sub-command with its own options and flags to provide the functions that the plugin offers. For example, the S3 plugin has the --help flag, which prints a short informative message about the different options that the plugin exposes. So as you can see, plugins augment the existing features of Helm by adding new functionality without touching the original source code. Now, let's see what we can do with the S3 plugin. This plugin allows you to upload charts to an AWS S3 bucket. If you've watched the previous two bonus lectures, you would know that we've used AWS S3 twice: first, when we used the web hosting feature of S3 to create a simple web server where we hosted our charts, and the second time when we used an S3 bucket as a backend storage for ChartMuseum. If you haven't watched the previous two lectures, that's perfectly fine, as they are not required to follow along in this lecture. This plugin allows you to use S3 natively as chart storage.
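A minimal sketch of the install workflow described above. The Git URL shown is the commonly used community helm-s3 plugin repository, and the version number is illustrative, not from the lecture:

```shell
# Install the helm-s3 plugin straight from its Git repository
helm plugin install https://github.com/hypnoglow/helm-s3.git

# Or pin a specific version (version number is illustrative)
helm plugin install https://github.com/hypnoglow/helm-s3.git --version 0.10.0

# Verify that the plugin is now part of the available plugins
helm plugin list

# Each plugin exposes its own sub-command with its own help text
helm s3 --help
```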
You don't need to enable public access on the bucket to use it as a web server, and you don't need to install third-party software like ChartMuseum to be able to use S3 as backend storage. To follow along with me in this lecture, you will need an AWS account with admin access to S3. You should also have the AWS CLI installed; you can refer to the AWS documentation for installation instructions. I have already created a bucket called my-charts; the bucket is empty. The plugin relies on the credentials used by the AWS CLI to contact S3. So we run helm s3 init s3://my-charts. If we have a subdirectory in the bucket and we want to use it as the chart repository instead, we can add it to the URL — for example, s3://my-charts/my-dir. If we check the contents of the bucket now, we'd find that a file was created named index.yaml. The file is empty since we haven't pushed any charts yet. So let's add this S3 bucket as a repo to our list: helm repo add — let's call it my-s3-plugin-repo — then the URL to my-charts in the S3 format, s3://my-charts. Notice here that we are not using the HTTP protocol as before; we're rather using the s3:// prefix. This custom protocol is provided to us by the plugin. Now that we have our repo configured to point to the S3 bucket, we can easily push one of our existing packaged charts as follows: helm s3 push, the packaged myapp chart archive, then my-s3-plugin-repo. If we check the contents of the bucket again, we'd see that the chart has been added, and also that the index.yaml file size got bigger since it was updated with the new chart's info. The good thing is that we don't even need to run helm repo update after uploading the chart, since the plugin already handles that for us. So if we use the helm search command to view the different versions and locations of this chart, we'd find that it's already stored in my-s3-plugin-repo and ready to be used.
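The end-to-end flow with the plugin might look like this sketch; the packaged chart filename is an assumption:

```shell
# Initialize the bucket as a chart repository (creates an empty index.yaml)
helm s3 init s3://my-charts

# Register it as a repo using the s3:// protocol provided by the plugin
helm repo add my-s3-plugin-repo s3://my-charts

# Push an already-packaged chart; the plugin updates index.yaml for us
helm s3 push ./myapp-0.2.1.tgz my-s3-plugin-repo

# No "helm repo update" is needed -- search finds the chart right away
helm search repo myapp --versions
```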
We can also remove the chart from the repo using the plugin's delete sub-command. For example: helm s3 delete myapp --version 0.2.1 my-s3-plugin-repo. Notice that we needed to supply the chart name, then the version through the --version flag, instead of just supplying the filename. Again, the index.yaml is automatically updated, so there is no need to run helm repo update. If we search for the chart now, we'd find that it's gone. To uninstall a Helm plugin, you can use the uninstall sub-command. First, let's remove the repo: helm repo remove my-s3-plugin-repo, then helm plugin uninstall s3, and we're done. Notice that this does not delete the bucket nor its contents; that must be done manually by you. So when should you use the S3 plugin? When you want to restrict your chart management and usage to an organization or an entity — since you need a valid AWS account to use S3, this solution is not suitable for sharing charts with the community; for that, you could set up S3 as a web server instead. When you need cheap cloud-based storage. Or when you want to offload authentication and authorization to the cloud provider instead of handling them yourself. That brings us to the end of this short lecture. Thanks for watching. 27. Build your own Helm plugin (helmscp): Hello everyone. This is another bonus lecture in the class. It contains some advanced techniques that are not essential to working with Helm, so feel free to skip it if you want to. In the previous lecture, we learned about Helm plugins and how they augment the functionality that Helm provides without modifying its source code. In this lecture, we learn how to build your own Helm plugin, install it on your system, and share it with the community. A Helm plugin is made up of the following: a directory containing a file called plugin.yaml, and optionally a program or script to run.
This means that you can build a fully functional Helm plugin without writing code or creating a script. The trick lies in the plugin.yaml file. Let's have a look at a sample one provided in the official documentation. The name specifies the plugin name that you'd use when calling the plugin; the S3 plugin that we saw in the previous lecture was named s3, so you call it as helm s3 followed by the rest of the command. In this example, the command should be run as helm last, followed by any sub-commands or flags that are needed. Next, we have the version; like Helm, it follows the semantic versioning scheme. The usage specifies how this plugin should be run, and the description contains the text that appears next to the plugin name when you run helm plugin list. ignoreFlags is a Boolean value where you can decide whether flags are sent to the plugin or not. For example, if we run helm s3 delete myapp --version 0.2.1 and this value is set to true, then the --version flag will not be passed to the s3 command. Then we have the command and the platform commands. Both specify what the plugin will do once it is invoked; the platform commands provide different commands for different operating systems and architectures. The reason why we may have several versions of the same command is that maybe you have an application that is meant to run only on a Linux machine; to run it on Windows, you can provide another version for that OS type, and the same thing applies to macOS. When the plugin is invoked, Helm first searches for a matching combination of the OS and architecture in the platform commands. If one matches, that command is executed. If none of them match, Helm falls back to the command specified here. If this command is missing as well, Helm exits with an error. So since we are running on Linux, let's have a look at the Linux entry with the amd64 architecture. In this example, the plugin is not using an external program or script.
Instead, it's just using the helm command with some sub-commands and flags that get you the name of the last release installed on the system. You may notice that we don't see the helm command itself; instead we have an environment variable called HELM_BIN. This is one of several environment variables that Helm injects and makes available to a plugin's code. The HELM_BIN variable is used to get the location of the Helm binary as it is set by the user. This command is exactly equivalent to running helm list --short --max 1 --date -r. If we try to run it using the environment variable directly, it would fail because, as mentioned, this environment variable is made available only to the plugin itself, as we will see in a moment. So to illustrate how easy it is to build and author a Helm plugin, let's copy the contents of this file, create a new directory under my-charts called my-plugin, and in a file called plugin.yaml, paste the contents of the file. Believe it or not, that's all that's needed to build a Helm plugin. To install it, we use the same command we used in the previous lecture, helm plugin install. This time, though, we don't give it the URL to the plugin's Git repository; instead, we just pass the file path to the directory where the plugin exists, which is the current working directory. Now the plugin is installed. If we run helm plugin list, we see that we have our plugin called last installed, and the description that we saw in the plugin.yaml file is printed next to it. We can use it just like any other Helm plugin, using helm followed by the plugin name, last. We don't have any sub-commands or flags, so we just run it like this, and we have the last deployed Helm release returned to us. Helm plugins can be as simple as running a helm command with some flags. They can also be as complex as you want them to be; it all depends on the program or the script that you want to run.
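Putting the above together, here is a sketch of the my-plugin directory based on the documentation sample discussed in this lecture; the field values follow that sample and should be treated as illustrative:

```shell
mkdir -p my-charts/my-plugin

# plugin.yaml as described: name, version, usage, description, ignoreFlags,
# a fallback command, and a platform-specific command for linux/amd64
cat > my-charts/my-plugin/plugin.yaml <<'EOF'
name: "last"
version: "0.1.0"
usage: "get the last release name"
description: "get the last release name"
ignoreFlags: false
command: "$HELM_BIN list --short --max 1 --date -r"
platformCommand:
  - os: linux
    arch: amd64
    command: "$HELM_BIN list --short --max 1 --date -r"
EOF

# Install from the local directory path instead of a Git URL
helm plugin install ./my-charts/my-plugin
helm plugin list

# Invoke it: prints the most recently deployed release name
helm last
```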
For the sake of demonstration, I created a Helm plugin called scp. Its purpose is to automatically package and push Helm charts to a remote server over SSH using secure copy, or SCP. The user supplies the required parameters through command line flags as follows: a path to a chart directory, the username on the remote server, the remote server's IP address or hostname, and the private SSH key used to log in to the remote server (password-based login is not implemented); optionally, the port and the remote file path where the file will get stored. What the script does is use the helm package command to package the chart from the directory passed to it, then use the scp command to upload the packaged chart archive to the remote server. It's available in a GitHub repository that I made specifically for this class. If we have a look at the plugin.yaml file, we'd find that it's quite similar to the one we've just seen on the documentation page; only the command part is different. For this plugin, I used a shell script which gets invoked when the plugin is called. Let's have a quick look at that script. I will quickly explain the steps followed in this script; the source code of the entire plugin is available in the resources part of this lecture, so you can examine it in more detail if you want to. We define a function called usage, which displays a message to the user if the user misses one of the required flags. Then we start accepting the required flags from the user — for example, -s for the remote host, -l for the chart directory, -k for the SSH key, and so on. Then we check for the presence of the required flags and error out, displaying the help message, if one or more of them is not provided. We set some defaults for the optional parameters, like the port and the SSH key path, and we make sure that the remote dir path does not end with a slash. Then we check for the existence of the HELM_BIN environment variable.
This step is needed so that we can test the script outside the Helm plugin environment. If we are working from within the Helm plugin, then this variable is set for us and we can use it; otherwise we default to the installed Helm binary, which we can get by running the which helm command from the shell. Next, we package the chart using the Helm binary, get the chart filename from the command output, then use the scp command to upload the chart archive to the remote server using the parameters supplied by the user. Finally, we clean up by deleting the file once it's been uploaded successfully. As mentioned, you can make this more complex and support more use cases if you want to. For example, it could accept a number of charts instead of just one at a time and loop through all of them at once. It could also manipulate an index.yaml file on the remote server, like what the S3 plugin did. So as you can see, the core idea is that you provide a command or a script that does some functionality, and Helm invokes it for you. To demonstrate the platform commands part, I created a Go program that does the same thing as the script and compiled it for the Linux, Windows, and macOS operating systems, so we have three binaries available. Using the os and arch parts of the plugin.yaml file, we specify which command matches which combination. To demonstrate the script and Go binary usage, let's clone this repo to our machine. Now we need to upload a chart to a remote server in our lab. First, let's create a private/public key pair using the ssh-keygen command. Accept the defaults, then copy the key to the remote server using the ssh-copy-id command, followed by the IP address of the remote server. Now we can log in to the remote server using our SSH private key. Let's start by testing the script outside the Helm plugin environment. We go inside the cloned repo and run bin/helmscp.sh with -k,
then the path to the SSH key, which is the default anyway; -l my-charts/myapp, which is the chart directory; -u ahmed, which is the username; -s 192.168.2.164, which is the remote server's IP address; -r /home/ahmed, which is the remote location where we want our chart saved; and finally -p 22 for the SSH port, which is the default as well. We run the command, and from the output we can see that the chart was packaged and uploaded to the remote server. Let's double-check by logging in to the remote server and checking for the presence of the packaged chart. Let's try the same process, but this time with the Go binary. So we delete the file, log out from the server, and repeat the same command, but this time we invoke the Go binary that was compiled specifically for our platform, linux/amd64. And we have the same output. Let's log back in to the server and ensure that the chart was uploaded — and here it is; let's delete it. So now we are confident that the script and the program work well on their own. It's time to use them from a Helm plugin context. To install this plugin, as always, we run helm plugin install. Then we have one of two options: either to supply the URL to the Git repo, or to supply the file path to the local directory containing the plugin code. For a change, let's supply the URL to the GitHub repo where we store our plugin code. And let's ensure that the plugin got installed by running helm plugin list. Now we can start using our plugin. We supply the same command line flags as we did with the script and the Go binary, so we just replace the command name with helm scp. If we log back in to the server and check for the presence of the file, we'd find that it's been uploaded successfully. This plugin is not as complex as the S3 plugin that we saw in the last lecture; it can be used like a shortcut to back up your charts on a remote server over SSH for added security.
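For reference, here is a hypothetical reconstruction of what such a helmscp.sh script could look like, following the steps described above. The flag letters match the ones mentioned, but the variable names, defaults, and overall structure are my assumptions, not the author's exact code:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of bin/helmscp.sh as described in the lecture.
set -e

# Display a help message if a required flag is missing
usage() {
  echo "Usage: helmscp.sh -l <chart-dir> -u <user> -s <host> [-k <ssh-key>] [-p <port>] [-r <remote-dir>]"
  exit 1
}

# Accept the flags from the user
while getopts "l:u:s:k:p:r:" opt; do
  case $opt in
    l) CHART_DIR=$OPTARG ;;
    u) REMOTE_USER=$OPTARG ;;
    s) REMOTE_HOST=$OPTARG ;;
    k) SSH_KEY=$OPTARG ;;
    p) PORT=$OPTARG ;;
    r) REMOTE_DIR=$OPTARG ;;
    *) usage ;;
  esac
done

# Error out if any of the required flags is not provided
if [ -z "$CHART_DIR" ] || [ -z "$REMOTE_USER" ] || [ -z "$REMOTE_HOST" ]; then
  usage
fi

# Defaults for the optional parameters
SSH_KEY=${SSH_KEY:-$HOME/.ssh/id_rsa}
PORT=${PORT:-22}
REMOTE_DIR=${REMOTE_DIR%/}   # make sure the remote dir does not end with a slash

# Use HELM_BIN when running inside the plugin; otherwise fall back
# to the installed helm binary so the script is testable on its own
HELM=${HELM_BIN:-$(which helm)}

# Package the chart and capture the archive path from the command output
CHART_FILE=$("$HELM" package "$CHART_DIR" | awk '{print $NF}')

# Upload the archive over SCP, then clean up the local copy
scp -i "$SSH_KEY" -P "$PORT" "$CHART_FILE" "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR"
rm -f "$CHART_FILE"
```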
So in this lecture, we learned about Helm plugins and how they are built. We also demonstrated a practical example of how we can use Helm plugins to automate packaging and uploading Helm charts to a remote server over SSH using secure copy. That brings us to the end of this lecture. Thank you for watching. 28. Use custom protocol for Helm chart downloads: Hello everyone. This is another bonus lecture in our class. It contains advanced techniques and it is not strictly required to learn Helm, so feel free to skip it if you want to. Throughout this class, we've used several Helm repositories; we even built our own and hosted it on our web server. To use a chart from a remote server, you need to download it first. Helm does that for you behind the scenes: when you use the helm repo add command and supply a URL, Helm automatically contacts this web server and stores information about how to use it to download charts. Natively, Helm can communicate with web servers over the HTTP and HTTPS protocols. It can also use basic authentication to log in to web servers that require authentication; we've seen that already when we used ChartMuseum in lecture 25. However, HTTP and HTTPS with basic authentication are not the only protocols that you may need when retrieving charts. For example, if the charts are stored in an S3 bucket, Helm needs to use vendor-specific commands to contact the AWS S3 API using AWS's own authentication methods — Helm does not support that natively. Similarly, if the charts are stored on an API server that requires authentication through, for example, JWT bearer tokens, Helm cannot deal with that either. To address these requirements, Helm supports downloader plugins. To define a downloader plugin, we add the downloaders section to the plugin.yaml file. This section is a list, so you can define more than one downloader command. Each item contains a command, which is the file that will get executed.
Then it defines a number of protocols that are handled by this command. For example, it can define a protocol called s3. If Helm spots this protocol in the URL, it does not attempt to download the index.yaml file and packaged charts using the conventional HTTP methods, because they obviously won't work. Instead, it passes the whole URL, including the file that it needs to download, to the command specified in the downloaders section of the plugin.yaml file. So in our lab we define a protocol called scp. When a URL is prefixed with scp://, Helm automatically invokes the downloader command registered with that protocol. In fact, the protocol name is purely your own choice. For this lab, we chose scp, since we will be using secure copy to upload and download charts from the remote repo, but you can name it whatever you want — for example, foobar. If you notice, we are also adding a new section to the plugin.yaml file called hooks. Hooks define custom commands or scripts that are invoked by Helm whenever you run helm plugin install or helm plugin update. The purpose of those commands is to instruct Helm on how to perform the program install or update. In the previous lecture, we stored our plugin command in a directory called bin that lives alongside the plugin.yaml file, and we compiled three different versions of the binary for Linux, Windows, and macOS. In this version of the plugin, we don't store the binary files. Instead, we instruct Helm to use the install.sh command whenever it wants to install or update the plugin. The install.sh script checks for the OS type and accordingly contacts a download URL from which it can get the correct program version. The reason why we had to do it that way, and not just store the binaries as before, is that Helm does not offer a platform commands section for the downloader program as it does for the main plugin command.
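A hedged sketch of how the plugin.yaml sections just described might fit together — the names, paths, and version are illustrative, not the author's exact file:

```shell
# Hypothetical plugin.yaml for a plugin that also acts as a downloader.
# HELM_PLUGIN_DIR is an environment variable Helm sets to the plugin's directory.
cat > plugin.yaml <<'EOF'
name: "scp"
version: "0.2.0"
usage: "push and fetch Helm charts over SCP"
description: "Upload and download charts to a remote server over secure copy"
command: "$HELM_PLUGIN_DIR/bin/helmscp"
hooks:
  install: "cd $HELM_PLUGIN_DIR; ./install.sh"
  update: "cd $HELM_PLUGIN_DIR; ./install.sh"
downloaders:
  - command: "bin/helmscp"
    protocols:
      - "scp"
EOF
```

The hooks run install.sh on helm plugin install and helm plugin update, while the downloaders entry registers the scp protocol with our command.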
So as you can see, the downloader command is just one option, and it must work on the OS where Helm is running. Using an installation script makes it possible to download different application versions depending on the underlying operating system, for the main plugin command as well as the downloader one. Now let's discuss the downloader command in a little more detail. As a matter of fact, Helm allows you to use the same command as both the main plugin command and the downloader one. Another method is to build two dedicated commands: one for performing what the plugin is meant to do — for example, uploading charts to a remote repo over SCP — and a second command used specifically for downloading the index.yaml file, as well as any packaged charts, from the remote repo. If you want to configure your plugin command to handle the plugin tasks as well as the downloader tasks, Helm provides an important note: when Helm calls the downloader command, it passes the URL to the resource that it needs to download, but it also passes some extra arguments. So for example, if the repo URL is scp://ahmed@192.168.2.164, then when Helm wants to download the index.yaml file, it invokes the helm scp command as follows: helmscp certFile keyFile caFile, then the full repo URL followed by index.yaml. The certFile, keyFile, and caFile are brought from Helm's repository configuration, which by default is stored in .config/helm/repositories.yaml under your home directory. The purpose of those files is to supply the necessary values that Helm may need if the remote server requires mutual TLS — that is, it requires the client to present a certificate to be able to access the resources on that server. If the repo does not require mutual TLS, those values are kept empty. Nevertheless, they are always passed to the downloader plugin, even if they contain empty values.
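Based on that calling convention, a script can branch on the fourth argument, roughly like this sketch; the URL parsing is deliberately simplified and illustrative:

```shell
# When acting as a downloader, Helm invokes the command as:
#   helmscp <certFile> <keyFile> <caFile> <full-URL>
# so a fourth argument that starts with our protocol tells us
# we are in downloader mode.
if [ $# -eq 4 ] && [[ "$4" == scp://* ]]; then
  # Downloader mode: fetch the requested file and write it to stdout,
  # which is where Helm reads the downloaded content from.
  url="${4#scp://}"        # e.g. ahmed@host/path/index.yaml
  userhost="${url%%/*}"    # ahmed@host
  path="/${url#*/}"        # /path/index.yaml
  scp -q "$userhost:$path" /dev/stdout
else
  # Plugin mode: dispatch on sub-commands like push, delete, init
  subcommand="$1"; shift
  # ... handle "$subcommand" with the remaining arguments ...
  :
fi
```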
So based on that, if you want to use the same command as the plugin command as well as the downloader one, you need to check for the presence of the URL as the fourth command-line argument to decide whether the command will act as the main plugin command or as the downloader one. For the purposes of demonstration, I have refactored the helm-scp code to make it work also as a downloader command. So if we want to use the command to upload a chart, we don't need to supply the parameters through command-line arguments anymore. Instead, we will use the scp protocol. For example: helm scp push mycharts/myapp scp://admin@192.168.2.164/home/admin. The parameters that we supplied through command-line arguments in the last lecture are now extracted automatically from the URL. The path that comes after the push subcommand is the local chart path. Anything before the at sign is the username, followed by the host name, then the path to the remote location where the chart is saved. The SSH key defaults to .ssh/id_rsa in your home directory, but you can override it by using the SCP_KEY environment variable. As usual, the refactored code is available in the resources part of this lecture, so you can have a look at the source code if you'd like to. Let's now examine how our program works as a downloader plugin as well as the main plugin command. First, let's log in to the remote server where we will be hosting our charts. We need to make sure that Helm is installed on that server, because it will be used by the plugin command. Let's also create a directory that will be used to store our charts, for example myrepo. Now let's update our scp plugin to make use of the new features. Since we're using our repo for the first time, we need to initialize it to create the index.yaml file: helm scp init scp://admin@192.168.2.164/home/admin/myrepo.
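The URL decomposition described above (user before the at sign, then host, then remote path) can be done with plain POSIX parameter expansion. A sketch with assumed function and variable names, not the actual plugin source:

```shell
# Split scp://user@host/remote/path into its components.
parse_scp_url() {
  rest="${1#scp://}"           # drop the protocol prefix
  userhost="${rest%%/*}"       # user@host (everything before the first slash)
  path="/${rest#*/}"           # remote path, leading slash restored
  user="${userhost%%@*}"       # everything before the @ sign
  host="${userhost#*@}"        # everything after the @ sign
  echo "$user $host $path"
}

parse_scp_url "scp://admin@192.168.2.164/home/admin/myrepo"
# → admin 192.168.2.164 /home/admin/myrepo
```

This is why the push subcommand no longer needs separate flags: every parameter except the SSH key is recoverable from the URL itself.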
Then we add our repo using: helm repo add myscprepo scp://admin@192.168.2.164/home/admin/myrepo. Since our program acts as a downloader plugin as well, Helm recognized the scp protocol, passed the URL to our program, and asked it to download the index.yaml file so that the repo can be used. Let's bump the version of myapp. Now we can use the plugin as follows: helm scp push mycharts/myapp scp://admin@192.168.2.164/home/admin/myrepo. Our chart was packaged and pushed to the remote server. The index.yaml file on the remote server has been updated automatically. Let's double-check by logging in to the server and viewing the contents of the index.yaml file there. Now, if we search for our myapp chart, we will see that it has the correct version and is stored in our newly created repo. Our plugin also supports chart deletion. So to delete our chart from the repo, we run the following command: helm scp delete myapp, which is the chart name, then --version, which is the version that we want to select for deletion, then the repo name. We don't need to specify the full URL to the repo, since the plugin gets that automatically from the repo name. Now if we search for our chart, we'll find that it's gone. The index.yaml file on the remote server has also been updated to reflect the changes. So in this lecture, we learned about downloader plugins and how they can be used to work with custom protocols that require specific steps to download charts. That brings us to the end of this lecture. Thanks for watching. 29. Helm starter charts: Hello everyone. Throughout this class we used the helm create command several times to create boilerplate charts. A boilerplate chart is one that has the essential components that you can customize to make that chart work for you.
For example, it contains the Chart.yaml file, a templates directory containing NOTES.txt, the _helpers.tpl file, a deployment, and a service, as well as a charts directory for subcharts and a values file. But in some cases you may find yourself making a lot of changes to the boilerplate template to match best practices or policies that your organization enforces. Let's look at an example. When we run helm create my-web-starter, Helm generates the necessary files for deploying Nginx on a Kubernetes cluster. Nginx was chosen since it can be regarded as a model cloud-native application that can be further customized to fit almost all needs. But if we have a look at the templates directory, we see that it does not contain a ConfigMap template. Most applications need to store their configuration to be customizable. For Nginx, we can create a ConfigMap that contains some sample HTML files. Let's do that. The template will be called configmap.yaml, and it will contain the following: apiVersion, kind, metadata, name, labels. Then comes the data part, where we will be defining our files. Let's define the homepage, index.html, and write some sample content: "This is the homepage." Now let's define a second page called about.html and write some content there too. Save the file. Move to the deployment template and add a volumeMounts entry with a name and a mount path, which is the default directory where Nginx serves its web files. Then we define the volumes section. The name is config-volume, and it's a ConfigMap with the name we defined for our ConfigMap. Save the file and deploy the chart. Activate the port-forward command, open the browser, and navigate to index.html. We can see that we have our content displayed. And if we navigate to the about.html page, we also see the content that we added earlier displayed.
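A sketch of what the configmap.yaml template and the matching deployment wiring could look like. The helper names, the config-volume name, and the Nginx web root are assumptions based on the standard helm create scaffold and the stock Nginx image, not the course's exact files:

```yaml
# templates/configmap.yaml (illustrative)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "my-web-starter.fullname" . }}-html
  labels:
    {{- include "my-web-starter.labels" . | nindent 4 }}
data:
  index.html: |
    This is the homepage
  about.html: |
    This is the about page
---
# excerpt from templates/deployment.yaml (illustrative) — container spec:
#         volumeMounts:
#           - name: config-volume
#             mountPath: /usr/share/nginx/html   # default Nginx web root
#       volumes:
#         - name: config-volume
#           configMap:
#             name: {{ include "my-web-starter.fullname" . }}-html
```

Each key under `data` becomes a file inside the mounted directory, which is why browsing to /index.html and /about.html serves the content defined in the ConfigMap.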
If you are using Nginx to deploy a number of static websites, we would have to recreate the ConfigMap and reference it in the deployment each time we want to create a chart for a website. We need a way to speed up the creation of charts like the one we've just built, and Helm provides starter charts to address this requirement. A starter chart is just another Helm chart, but it is used as a boilerplate when you run the helm create command. Let's turn this chart into a starter chart. The first step is to replace all the references to the chart name with CHARTNAME in all caps between angle brackets. We can simply do that on the shell using a command like the following: find, in the current working directory, any file, then execute whatever comes next: sed -i, replacing the name of the chart with CHARTNAME in all caps between angle brackets, then end the find command. A quick tip for Mac users: you will need to pass a pair of single quotes after the -i, otherwise the command will not work. Let's double-check that the chart name was replaced. The next thing we need to do is place this chart in a special directory that Helm uses to find starter charts. This directory lives under a Helm environment variable called HELM_DATA_HOME. To get this directory, run helm env HELM_DATA_HOME. Inside this directory we create a subdirectory called starters; this directory will contain all our starter charts. Let's move our chart to that directory. Now, to use the starter, all we need to do is pass the --starter flag to the helm create command, followed by the starter chart name. For example: helm create --starter my-web-starter my-new-app. This command will create a chart called my-new-app that is based on the my-web-starter boilerplate chart. If we browse the files inside the chart, we'll find that we have our ConfigMap created and correctly referenced in the deployment.
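The substitution step above can be sketched as follows. To keep the demo self-contained it stages a throwaway chart file rather than touching a real chart; the chart name is assumed to be my-web-starter:

```shell
# Stage a throwaway copy so the demo is self-contained.
tmp="$(mktemp -d)"
printf 'name: my-web-starter\n' > "$tmp/Chart.yaml"

# Replace every occurrence of the chart name with the <CHARTNAME> token
# that `helm create --starter` substitutes later.
# (On macOS, use: sed -i '' 's/my-web-starter/<CHARTNAME>/g')
find "$tmp" -type f -exec sed -i 's/my-web-starter/<CHARTNAME>/g' {} \;

cat "$tmp/Chart.yaml"   # → name: <CHARTNAME>
```

Running find over every file matters because the chart name typically appears in Chart.yaml, _helpers.tpl, and several templates, not just one file.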
Also notice that the CHARTNAME placeholder got replaced by the generated chart name, my-new-app. Now we have a boilerplate for our website: we don't need to create and reference the ConfigMap, as this has been done for us automatically. So in this lecture we learned about Helm starter charts and how they can be used to speed up chart creation using a pre-defined chart template. That brings us to the end of this lecture. Thank you for watching.