Hands-On Guide to Argo Workflows on Kubernetes | Jan Schwarzlose | Skillshare


Hands-On Guide to Argo Workflows on Kubernetes

Taught by Jan Schwarzlose


Lessons in This Class

59 Lessons (6h 55m)
    • 1. Course Introduction

      6:20
    • 2. Minikube Installation - Introduction

      1:48
    • 3. Minikube Installation - Windows Installation kubectl

      1:58
    • 4. Minikube Installation - Windows Installation minikube

      1:38
    • 5. Minikube Installation - Windows Select a Hypervisor

      0:36
    • 6. Minikube Installation - Windows Hyper V enable

      3:19
    • 7. Minikube Installation - Start minikube with Hyper V

      1:52
    • 8. Minikube Installation - Virtualbox Installation

      3:55
    • 9. Minikube Installation - Start minikube with Virtualbox

      2:25
    • 10. Core Concepts - Changes

      0:43
    • 11. Core Concepts - Installation Argo Workflows

      3:19
    • 12. Core Concepts - Argo Server Interface

      1:45
    • 13. Core Concepts - Hello World Workflow

      2:29
    • 14. Core Concepts - Core Concept

      2:08
    • 15. Core Concepts - Template Definitions

      1:16
    • 16. Core Concepts - Container Template

      3:46
    • 17. Core Concepts - Script Template

      2:45
    • 18. Core Concepts - Resource Template

      5:19
    • 19. Core Concepts - Template Invocators

      0:51
    • 20. Core Concepts - Steps Template serial

      5:59
    • 21. Core Concepts - Steps Template parallel

      3:35
    • 22. Core Concepts - Suspend Template

      3:50
    • 23. Core Concepts - DAG Template

      6:25
    • 24. Core Concepts - Exercise1 Introduction

      2:15
    • 25. Core Concepts - Exercise1 Solution

      16:41
    • 26. Workflow functionalities - Output Logs to MinIO

      10:54
    • 27. Workflow functionalities - Installation ArgoCLI

      3:22
    • 28. Workflow functionalities - Input Parameter

      19:07
    • 29. Workflow functionalities - Script Results

      7:19
    • 30. Workflow functionalities - Output Parameter

      4:53
    • 31. Workflow functionalities - Output Parameter File

      7:17
    • 32. Workflow functionalities - Artifacts

      9:52
    • 33. Workflow functionalities - Secrets as environment variables

      10:20
    • 34. Workflow functionalities - Secrets as mounted volumes

      8:46
    • 35. Workflow functionalities - Loops

      4:52
    • 36. Workflow functionalities - Loops with Sets

      8:16
    • 37. Workflow functionalities - Loops with Sets as Input Parameter

      9:08
    • 38. Workflow functionalities - Dynamic Loops

      12:10
    • 39. Workflow functionalities - Conditionals

      10:58
    • 40. Workflow functionalities - Depends

      7:02
    • 41. Workflow functionalities - Depends Theory

      2:09
    • 42. Workflow functionalities - RetryStrategy

      12:02
    • 43. Workflow functionalities - Recursion

      12:28
    • 44. Workflow functionalities - Exercise2 Task description

      5:44
    • 45. Workflow functionalities - Exercise2 Solution

      56:11
    • 46. More Concepts - Resource overview

      1:30
    • 47. More Concepts - Workflow Template

      8:11
    • 48. More Concepts - Cron Workflow

      11:04
    • 49. More Concepts - Cluster Workflow Template

      4:18
    • 50. More Concepts - Reference to Workflow Templates

      5:35
    • 51. More Concepts - Creating a master workflow

      13:42
    • 52. More Concepts - AWS S3 as artifact repo

      12:13
    • 53. More Concepts - AWS S3 as default repo

      4:54
    • 54. More Concepts - Archiving workflows

      8:50
    • 55. More Concepts - Namespace

      4:07
    • 56. More Concepts - Service Account

      7:31
    • 57. More Concepts - Exercise3 Task description

      1:45
    • 58. More Concepts - Exercise3 Task solution

      20:11
    • 59. Summary

      1:53

About This Class

Argo Workflows is a container-native workflow engine for orchestrating jobs on Kubernetes. This means that complex workflows can be created and executed entirely inside a Kubernetes cluster.

It provides a mature user interface, which makes operation and monitoring easy and clear. There is native artifact support, and a wide range of artifact repositories can be used (MinIO, AWS S3, Artifactory, HDFS, OSS, HTTP, Git, Google Cloud Storage, raw).

Templates and cron workflows can be created, with which individual components can be defined and combined into complex workflows, so composability is built in. Furthermore, workflows can be archived, and Argo provides a REST API and an Argo CLI tool, which make communication with the Argo server easy.

It is also worth mentioning that Argo Workflows can be used to manage thousands of parallel pods and workflows within a Kubernetes cluster, and robust retry mechanisms ensure a high level of reliability.

There is already a large, steadily growing global community, including companies such as IBM, SAP, and NVIDIA. Argo Workflows is mainly used for machine learning, ETL, batch and data processing, and CI/CD. And, just as important, it is open source and a project of the Cloud Native Computing Foundation.

Upon successful completion of the course, you will be able to create complex workflows with and without cron triggers using the different concepts and workflow functionalities. You will be able to create workflow templates and use them as reusable building blocks for complex workflows, and you will get to know and apply the Argo features.
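To give a sense of what a workflow definition looks like, here is a minimal "hello world" manifest in the style of the example used later in the course (a sketch; the exact file used in the lessons may differ slightly):

    # Minimal Argo Workflow: runs one container that prints a message
    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: hello-world-      # Argo appends a random suffix for a unique name
    spec:
      entrypoint: whalesay            # the template that is executed first
      templates:
        - name: whalesay
          container:
            image: docker/whalesay
            command: [cowsay]
            args: ["hello world"]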

Meet Your Teacher

There are so many cool tools out there, especially in the small, large, and big data area. One life is not enough to know all of them and be proficient with them. But even with a small, well-chosen toolset, you can implement great projects with real value.

In 2012 I graduated as an engineer in mechatronics. Programming, especially in the embedded area, was an important part of my education. During my first years as an engineer, I discovered more and more my passion for Python, especially for small, large, and big data.

After a few hobby projects, I took the step to work professionally in this area in 2016. I have now been working for years as a data engineer, involved in great projects.

I like to pass this knowledge on through courses in data engineering and data scien...



Transcripts

1. Course Introduction: Welcome to the course "Hands-On Guide to Argo Workflows on Kubernetes". My name is Jan Schwarzlose and I will guide you through the course. I already have several years of professional experience as a data engineer, after having dealt with Python and the processing of small to large amounts of data as a hobby for a long time. Over the years, I have been involved in numerous industrial projects for small, large, and big data, both in the classic data warehouse area with relational databases and in the data engineering environment with data lakes and modern big data tools. The subject of orchestration and workflows always plays a central role, which is why I was able to gain project experience with Argo Workflows as well as other orchestration tools on the market. What is Argo Workflows? Argo Workflows is a container-native workflow engine for orchestrating jobs in Kubernetes. This means that complex workflows can be created and executed completely in a Kubernetes cluster. What are the main features of Argo Workflows? It provides a mature user interface, which makes operation and monitoring very easy and clear. There is native artifact support, whereby it is possible to use completely different artifact repositories, including MinIO, AWS S3, Artifactory, HDFS, OSS from Alibaba Cloud, HTTP, Git, Google Cloud Storage, and raw. Templates and cron workflows can be created, with which individual components can be created and combined into complex workflows; this means that composability is given. Furthermore, workflows can be archived, and Argo provides a REST API and an Argo CLI tool, which makes communication with the Argo server really easy. It is also worth mentioning that Argo Workflows can be used to manage thousands of parallel pods and workflows within a Kubernetes cluster, and robust retry mechanisms ensure a high level of reliability. There is already a large global community that is growing steadily, just to name IBM, SAP, and NVIDIA. It is mainly used for machine learning, ETL (extract, transform, load), batch and data processing, and for CI/CD. And what is also very important: it is open source and a project of the Cloud Native Computing Foundation. When you have completed the course, you will be able to create complex workflows with and without cron triggers using the different concepts and workflow functionalities. The functionalities include input and output parameters, the use of artifacts and Kubernetes secrets, conditionals, definition of dependencies, loops, recursion, and retry strategy. You will be able to create workflow templates and use them as reusable building blocks for complex workflows. And you will get to know the Argo features such as the user interface, which can be seen here in the video: we can clearly see a workflow being executed that you will create yourself in one of the course exercises, and with the user interface we have a very nice way to graphically monitor and operate this workflow. After a brief introduction, you will first install a minikube cluster locally, which we will use to run our workflows in the next chapters. Then you take the first steps with Argo Workflows and get to know the core concepts. For each core concept, we create a workflow together, and finally there is an exercise for you to close the chapter. The following chapter deals with further workflow functionality such as input and output parameters, parameter files, artifacts, Kubernetes secrets as environment variables and mounted volumes,
loops, loops with sets and with sets as input parameters, dynamic loops, conditionals, definition of dependencies, recursion, and retry strategy. For each functionality, we create a workflow again in order to gain hands-on experience, and there is another final exercise waiting at the end of that chapter. The fifth chapter deals with further leading concepts and functionality such as workflow templates, cluster workflow templates, cron workflows, referencing templates, creating master workflows, using namespaces and service accounts, using AWS S3 as logging and artifact repository, and archiving workflows. And here too, you will solve another exercise that will close the chapter. Who is this course suitable for? Basically for everyone who wants to use a Kubernetes-native orchestration tool to create simple and complex workflows, and for everyone who wants to get to know most of the features of Argo Workflows for creating large workflows using a practical approach. Previous knowledge is basically not required, but a little basic knowledge of Kubernetes cannot hurt. So I wish you a lot of fun and success with my course.

2. Minikube Installation - Introduction: Hello and welcome to the chapter about creating a minikube cluster. What is minikube, actually? Minikube is a tool for creating a single-node Kubernetes cluster locally on your personal computer. With minikube, you can try out Kubernetes or use it for daily development. What are the components we need? We need kubectl, a command-line tool to interact with the Kubernetes cluster: you can deploy applications, inspect and manage cluster resources, and view logs. We need minikube itself, the actual tool that creates the single-node Kubernetes cluster. And we need a hypervisor, the software that creates and runs virtual machines. Depending on the operating system, there are different hypervisors out there: for Windows we have VirtualBox and Hyper-V, for Linux we have VirtualBox and KVM, and for macOS we have VirtualBox, VMware Fusion, and HyperKit. In this chapter, I provide detailed instructions for creating a minikube cluster on a Windows 10 machine. If you want to use Linux or macOS, just take a look at the website kubernetes.io; there you will find detailed instructions for installing a minikube cluster, and it is just as easy as with Windows 10. Have fun setting up the minikube cluster and see you soon.

3. Minikube Installation - Windows Installation kubectl: In order to install kubectl on a Windows machine, I open the link that was provided in the instruction document and go down to the Windows section, "Install kubectl on Windows". There we have "download the latest release" with a link; we click it and download kubectl.exe. Now I go to my Downloads, take this kubectl.exe, go to C:\Program Files, create a new folder that I also call kubectl, and put kubectl.exe inside this folder. Now I copy this path and go to my environment variables ("Edit environment variables for your account"). I search for Path, click Edit, then New, and paste the path to our kubectl folder. OK, OK. With that, the installation of kubectl is basically done. To check it, we open a command window and type kubectl version --client, and there we have our kubectl.

4. Minikube Installation - Windows Installation minikube: Now we want to install minikube on a Windows machine. To install it, I open the minikube GitHub repository.
Here on the right side we see the releases, and the tag "latest" is the version we want to install. We click on it, go down, and find the minikube installer .exe. We download it, which just takes a moment, and then we run it: OK, in English, Next, I agree, Install, Next, Finish. Now minikube should be installed on your machine. To check this, we open the command line and type just minikube, and here we see the help. If we want to find out what version we have, we type minikube version, and here we have our version, 1.16. That's it.

5. Minikube Installation - Windows Select a Hypervisor: Now we have to select a hypervisor. We have the choice between Hyper-V and VirtualBox. VirtualBox you can simply download and install; Hyper-V you cannot download. Hyper-V is an optional feature built into Windows, but it is only available for Windows 10 Enterprise, Pro, and Education; it is not available for Windows 10 Home Edition. In the next lectures, I am going to show you how to use both Hyper-V and VirtualBox, but it is sufficient to choose only one.

6. Minikube Installation - Windows Hyper V enable: To use Hyper-V as a hypervisor on a Windows 10 machine, we first have to enable it in case it is disabled. To check whether it is disabled or enabled, we have to open the command line as an administrator. I press Windows+R so the Run window opens, type cmd, and then press Ctrl+Shift+Enter. Yes, and we have opened the command line as an administrator. To check whether Hyper-V is enabled or disabled, we type systeminfo, and there we have a section "Hyper-V Requirements". In case the requirements are listed, Hyper-V is disabled. To enable it, we type bcdedit /set hypervisorlaunchtype auto. "The operation completed successfully." Another step we have to take: we open Apps and Features by right-clicking on the Windows icon, go to Optional features, choose "More Windows features", and enable Hyper-V there. We click OK; it takes a moment and then it asks for a reboot, after which Hyper-V is finally enabled. I will just pause the video, we definitely have to reboot the system, and see you after the reboot. The reboot is done now, so let's open a command line window and type systeminfo. Under Hyper-V Requirements it now says "A hypervisor has been detected. Features required for Hyper-V will not be displayed." This means that Hyper-V is now enabled and we can use it. We also check Apps and Features, Optional features, More Windows features again, and Hyper-V is enabled there. This means we are now ready to go with Hyper-V.

7. Minikube Installation - Start minikube with Hyper V: Now we can use minikube with the Hyper-V hypervisor. To do so, we open the command line as an administrator (Ctrl+Shift+Enter) and type minikube start. Minikube creates a cluster with the Hyper-V hypervisor; this just takes a moment. The minikube cluster is now created, and it says "Done! kubectl is now configured to use the minikube cluster and default namespace by default." This means we can now use kubectl to work with the minikube cluster. So let's do it: I type kubectl get nodes, and this command lists all the available nodes in the cluster. Here we have minikube with its status (Ready), roles, age, and version.
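For reference, the commands used in lessons 6 and 7, collected in one place (run them in an administrator command prompt; the output will vary by machine):

    # check whether Hyper-V is enabled: look at the "Hyper-V Requirements" section
    systeminfo

    # enable the hypervisor launch type, then reboot
    bcdedit /set hypervisorlaunchtype auto

    # create the single-node cluster with Hyper-V and verify it with kubectl
    minikube start
    kubectl get nodes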
And finally, we can also stop the minikube cluster with minikube stop. It just takes a moment and the cluster is stopped. That's it.

8. Minikube Installation - Virtualbox Installation: If we want to use VirtualBox as a hypervisor on a Windows 10 machine, we first have to make sure that Hyper-V is disabled. To check it, we open a command line window as an administrator and type systeminfo. Under Hyper-V Requirements it says "A hypervisor has been detected", which means the hypervisor is enabled, so we have to disable it with bcdedit /set hypervisorlaunchtype off. "The operation completed successfully." Another thing we have to do: we go to Apps and Features, Optional features, More Windows features, and disable Hyper-V there. OK, and now we have to reboot the system, so here it is asking to restart. The reboot is done now, so let's check whether Hyper-V is disabled. We open a command line window and type systeminfo, and under the Hyper-V Requirements section the requirements are listed, which means Hyper-V is disabled now. Now we can download VirtualBox and install it. Let's go to the VirtualBox website, virtualbox.org, and open the Downloads page. There we have the VirtualBox platform packages, and we choose Windows hosts. We download it and run it once it has finished. I keep the defaults: create a shortcut on the desktop, no shortcut in the quick launch bar, Next, Yes, Install, Yes, and we can also launch it. Finish. Here we have our VirtualBox, so now we are ready to go.

9. Minikube Installation - Start minikube with Virtualbox: Now we can use minikube with VirtualBox. Let's open the command line and start minikube. What we can see here is that it is trying to use Hyper-V as the hypervisor, because we used Hyper-V before and created a minikube cluster, so there is an existing profile. This we have to delete with minikube delete. Now everything is removed: "Removed all traces of the minikube cluster." Now we try to start minikube again, and this time it should automatically detect VirtualBox and use it as a driver. Here we can see "Automatically selected the VirtualBox driver". It just takes a moment, so I pause the recording. Now the minikube cluster is created, and it says "Done! kubectl is now configured to use the minikube cluster and default namespace by default." So now we can use kubectl to work with the minikube cluster. We type kubectl get nodes to see what nodes we have, and there we can see the minikube node. And that's it. If we don't need the minikube cluster anymore, we can stop it with minikube stop; let's wait until it has stopped. We are now ready to go.

10. Core Concepts - Changes: Hello. Before we start the Argo installation, I want to talk about changes that have been made since the course was recorded. In order to communicate with the Argo Server user interface, HTTPS instead of HTTP is used now. This means that you have to type https:// before localhost:2746 in your browser. These are all the changes to know, so have fun with the Argo installation.

11. Core Concepts - Installation Argo Workflows: Now we want to install Argo Workflows on our minikube cluster. To do so, I have already started the minikube cluster and opened another command line window. At first, we want to create a new namespace named argo.
To do this, we type kubectl create ns argo, where ns is short for namespace. We can check it with kubectl get namespaces, and here we can see that the new namespace was created. Now we type kubectl apply -n argo -f followed by the URL of the quick-start manifest from the Argo Workflows GitHub repository, where -f stands for file, and with that Argo Workflows is installed. Now we can check with kubectl -n argo get pods which pods and components were installed. We have the workflow controller; this is definitely necessary, it controls the workflows. Then we have the Argo server: it provides the communication, so everything we want to do, create, delete, or whatever with resources, goes through the Argo server, and it also provides the user interface. Then we have MinIO, an object storage where we can save the logs and artifacts, and we have Postgres; this database provides the functionality to archive our workflows for a longer time. And finally, we want to forward the port of the Argo server in order to be able to access the UI, so we type kubectl -n argo port-forward deployment/argo-server 2746:2746.

12. Core Concepts - Argo Server Interface: Having the port forward for the Argo server active, we can open a browser and go to localhost:2746, and there we open the Argo UI. We don't mind that it says "unable to load data"; that is not interesting for us. On the left side we have the timeline; here we will later see all the workflows that run or ran. We have Workflow Templates, where we can see all the workflow templates that we created. Basically, workflows, workflow templates, and cluster workflow templates are just Kubernetes resources, and all the different types of resources can be seen here; right now we don't have any. Then we can see the archived workflows, and we have reports to find costly or time-consuming workflows. We have information about the user, API docs to see what operations we can do with the Argo server, and finally a help section.

13. Core Concepts - Hello World Workflow: Let's deploy and execute our first workflow. I just took the hello world example from the Argo website; it looks like this, and I copied it into my local folder. I copy the path, open the command line, and cd into my directory. Taking a look, there we have our hello world example, and we deploy the workflow with kubectl -n argo create -f workflow-hello-world.yaml, so we deploy it inside the argo namespace. And there we have it, the workflow is created. We can go to our Argo Server UI, and here we can already see the workflow executing. When we go to it, it probably takes some time to pull the image, so it is pending; let's just wait until it is finished. It is finished now and, as we can see here, it succeeded, and here we can see all the necessary information. We can see the YAML, so the definition of the workflow, we can see the template, and we can also see the logs: it is just logging "hello world" as we expected, and here is the whale. We can also see events and containers; this is the container that is used, docker/whalesay, with the command cowsay and the argument "hello world". Under output artifacts we can see the main logs. And that's it about our first workflow.
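Collected from lessons 11 to 13, the installation and first deployment look roughly like this; the quick-start manifest URL is a placeholder here, so use the one from the course's instruction document or the Argo Workflows GitHub repository:

    # create the namespace and install Argo Workflows into it
    kubectl create ns argo
    kubectl apply -n argo -f <quick-start-manifest-url>

    # check the installed components (workflow controller, argo-server, MinIO, Postgres)
    kubectl -n argo get pods

    # forward the Argo server port, then open https://localhost:2746 in the browser
    kubectl -n argo port-forward deployment/argo-server 2746:2746

    # deploy the hello world workflow from the current directory
    kubectl -n argo create -f workflow-hello-world.yaml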
14. Core Concepts - Core Concept: Let's talk about the core concept of a workflow definition. The definition of a workflow is done with YAML, a human-readable language for data serialization. Here we have our recently deployed hello world example, so let's just go through it. At the top we have a kind of header: we have to set the API version, and we have to say what kind of Kubernetes resource we want to deploy, in our case a Workflow. Then we have the metadata; here we generate a name with generateName, in our case hello-world-. And then we have the most important part, the spec, the specification of the workflow. There we always have two sections. One section is templates, where we define all the different kinds of templates we want to use, and we have the entrypoint, which determines what template we use first, so the entrypoint points into the templates. A template is defined by using a hyphen followed by its name, in our case whalesay, and here we then use the container template, where we just define what image we use, what command, and what arguments (args). That's the basic core concept of the workflow definition.

15. Core Concepts - Template Definitions: Before we move on to the practical part, I want to introduce the different types of template definitions. There is the container template; this is perhaps the most common template type. It will schedule a container, and the spec of the template is the same as the Kubernetes container spec, so you can define a container here the same way you do anywhere else in Kubernetes. The script template is a convenience wrapper around a container; the spec is the same as a container, but it adds the source field, which allows you to define a script in place. The script will be saved into a file and executed for you. The resource template performs operations on cluster resources directly; it can be used to get, create, apply, delete, replace, or patch resources on your cluster. The suspend template will suspend execution, either for a duration or until it is resumed manually. In the next few lessons, we will take a closer look at the different types of templates, so see you there.

16. Core Concepts - Container Template: Hello. In this lecture, we will create a workflow using a container template. I have already opened a blank workflow definition, so let's start filling it in. Here we can see that we already have the header defined, with apiVersion and kind. What we have to fill in is the metadata: we use generateName, so let's just write the name; I want to call it workflow-container-template-. This means that we define a base name, and because of the generateName attribute, five alphanumeric characters are appended, since each workflow has to have a unique name. Now let's come to the spec; this is actually the main part. There we have the entrypoint and we have to define the container template. First let's define its name; I just call it container-template, and we can also fill this in as the entrypoint, so our container template will be the entry point. Now let's define the container. We use the image python:3.8-slim; you could use any other image. And I just want to echo "The container template was executed successfully"; a simple echo command is sufficient for us for the sake of executing the container template.
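Written out, the container template workflow described above would look roughly like this (names and message as narrated; treat it as a sketch rather than the exact course file):

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: workflow-container-template-   # base name, a random suffix is appended
    spec:
      entrypoint: container-template
      templates:
        - name: container-template
          container:
            image: python:3.8-slim
            command: [echo]
            args: ["The container template was executed successfully"]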
Let's move on and create the workflow. We close the file, copy the path where we keep the workflow definition, open a command line window, and cd into our directory. With ls we see that our YAML is there, and now we run kubectl -n argo create -f workflow-container-template.yaml, and we can see it is created. Now let's go to our Argo Server UI, and here we can see that the workflow already executed successfully. Let's take a closer look at the logs: "The container template was executed successfully". Well done.

17. Core Concepts - Script Template: Hello. In this lecture, we want to create a workflow using a script template. I have already opened the blank workflow definition, so let's fill it in. Here we write the name we want to generate; we just call it workflow-script-template-. As the entry point, we want to use the script template, and this we also write here. Now let's come to the actual script template. Here we write script, then we define the image, again python:3.8-slim, and then we define the command; this time I want to use python. Then we have the source to be defined: first we type the vertical bar, the pipe operator, and then a print statement that outputs "The script template was executed successfully". Now let's move on and create the workflow. We close the file, copy the path, open the command line, and cd to our path. There we have our script template YAML, and we run kubectl -n argo create -f workflow-script-template.yaml, and it is created. Let's take a look at our Argo UI: here we can see that the workflow is just executing and has already executed successfully. Let's look at the logs, and here we can see, as expected, "The script template was executed successfully".

18. Core Concepts - Resource Template: Hello. In this lecture, we want to create a workflow that creates another workflow using the resource template. Let's begin. Here we again have to define our name, at least the base name, so let's call it workflow-resource-template-. Our entry point we just name resource-template; this is how we call our template. Now let's begin defining the resource template, which starts with the resource keyword. Then we have to set the action; the action will be create, because we want to create a workflow. Now we have to define the manifest: here we use the vertical bar, and then we start defining the workflow that should be created, in the same way we do it here. We start with the API version, and we use the same argoproj.io/v1alpha1; the kind is Workflow. In the metadata, in this case, we just use name instead of generateName; the difference is that whatever name we choose, no alphanumeric characters will be appended. The name is workflow-test, and that's it. Now we come to the spec section: as the entry point we choose test-template, then we define the templates. First we give the name we already defined in the entry point, so this is our test-template, and here we just use the script template with our image python:3.8-slim, the command python, and a source with a print statement. Here we just want to print out "Workflow workflow-test created with resource template". That's it; this should be fine now, so let's create this workflow.
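A sketch of the resource template workflow built in this lesson (the indentation of the nested manifest matters, since it is passed as a string):

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: workflow-resource-template-
    spec:
      entrypoint: resource-template
      templates:
        - name: resource-template
          resource:
            action: create              # create the nested workflow defined below
            manifest: |
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                name: workflow-test     # plain name: no random suffix is appended
              spec:
                entrypoint: test-template
                templates:
                  - name: test-template
                    script:
                      image: python:3.8-slim
                      command: [python]
                      source: |
                        print("Workflow workflow-test created with resource template")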
So let's go to the directory where we keep this resource template workflow. I copy the path, open the command line, and cd into my directory. There we have it, and we run kubectl -n argo create -f workflow-resource-template.yaml, and it is created. Let's go to our Argo UI, and here we immediately see two created workflows. The first one is our initial workflow-resource-template, and here we can see that workflow-test was created. Let's jump into the workflow-resource-template; we can see it is just one task. Let's go back and take a closer look at the logs of workflow-test. Here we can see our log "Workflow workflow-test created with resource template". This is what we wanted: a workflow named workflow-test was created by our initial workflow using the resource template.

19. Core Concepts - Template Invocators: Before we continue with the suspend template, I would like to introduce the template invocators. These are templates with which you can call other templates. The steps template allows you to define your tasks in a series of steps. The structure of the template is a list of lists: outer lists will run sequentially and inner lists will run in parallel. A DAG template allows you to define your tasks as a graph of dependencies. In a DAG, you list all your tasks and set which other tasks must complete before a particular task can begin; tasks without any dependencies will be run immediately. Now let's get our hands dirty again. See you in the next lecture.

20. Core Concepts - Steps Template serial: Hello. In this lecture, we want to create a workflow using the steps template. With the help of the steps template, we are going to execute several steps in a serial manner. Let's start. At first, we want to name the workflow, so here in the metadata section; in this case I use name instead of generateName. When we use name, we have to pay attention that this name is the unique identifier of the workflow: whenever you create this workflow, you cannot create another workflow with the same name. So before you want to create another workflow with the same name, for example after you change this template, you have to be sure that the other workflow is deleted. Let's just name it workflow-steps-template-serial. Then we have the entry point; we also call it steps-template-serial. Here we can see that we are going to create two templates under the templates section: the first template is our steps template that we use as the entry point, steps-template-serial, and the second one is the actual task template. Let's first create the task template. Here I just use the script template that we already know, using the image python:3.8-slim, the command python, and a source script printing "task executed". This is our task. Now we want to create three serial steps in our steps template executing this task. Here we have the name of our steps template, and then we have to define our steps. For each step, we use two hyphens and then a name; I call it just step1, and then we define the template we want to call, and this is our task-template that we defined here. The next step we define in the same way: we call it step2, template task-template. And step number 3 is exactly the same: template task-template. That's it.
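Sketched from the narration, the serial steps workflow looks roughly like this; the double hyphen in front of each step name is what makes the steps run one after another:

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      name: workflow-steps-template-serial    # fixed name, so delete the workflow before re-creating it
    spec:
      entrypoint: steps-template-serial
      templates:
        - name: steps-template-serial
          steps:                              # a list of lists: outer items run serially
            - - name: step1
                template: task-template
            - - name: step2
                template: task-template
            - - name: step3
                template: task-template
        - name: task-template
          script:
            image: python:3.8-slim
            command: [python]
            source: |
              print("task executed")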
Here we can see that in this way, using two hyphens, we defined three steps, and they should be executed in a serial way. So let's see this. I close the file, copy the path, open the command line, and cd into the directory. Here we have our step template; well, I called the file step-template instead of steps-template, but that is just a missing "s". And kubectl -n argo create -f workflow-step-template.yaml, and here we go. Let's check our Argo UI, and here we can see our workflow executing. Step number 1 is already executed; we can check the logs, and there we have our output "task executed". Step 2 is executed, "task executed", and step 3, "task executed". And here we can see in a nice, graphical way that each step was executed after the other.

21. Core Concepts - Steps Template parallel: Hello. Now we want to take the steps template from the last lecture and add another step, and execute step two and step three in a parallel way. So let's first add another step, step4, using just the same task template. And we want step two and step three executed in parallel; this is really easy, we just have to delete the first hyphen in front of step3 and fix the indentation. Now step1 and step2 each have two hyphens, which means they are executed after each other, step3 is executed in parallel to step2, and after step two and step three have finished, step4 is executed. Let's create this workflow. I go to my directory, copy the path, open the command line, and cd into my directory. Here we can see our YAML, and kubectl -n argo create... it already exists. Why, actually? I made one mistake: let's go back to our workflow definition. Since we used the same workflow definition as in the last lecture, it still has the same name, and a workflow with this name already exists, so I cannot create it a second time. Either I use generateName or I rename it. So I rename it and call it parallel; here we can also call the template parallel, although that doesn't matter, and then it should work. Let's try it a second time, and now we can see it is created. Let's check the UI: here we can see the workflow executing. Step one is executing, and here we can see that step two and step three are executing in parallel. Let's wait for step 4. We can check the logs: "task executed" for each step, and step 4 also finished with "task executed". And that's it.

22. Core Concepts - Suspend Template: Hello. In this lecture, we want to find out how a suspend template works. To do so, we take the workflow definition from the last lecture, where we defined four steps: step one and step two executed serially, step three in parallel to step two, and step four right after step two and step three. Now we want to incorporate a delay of 10 seconds after step two and step three. To do so, we add one more template with the name delay, and for this we use the suspend template with suspend and a duration of ten seconds. This we just have to incorporate here: we add another step with the name delay. By the way, maybe I name the template not delay but delay-template, in order to reduce any confusion, because the step is also named delay. Here we have to define the template, and that is the delay-template. That's it. Let's not forget to rename the workflow itself, so let's call it workflow-suspend-steps-template, and here we can rename the steps template as well.
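Only the changed parts are shown in this sketch of the parallel and suspend variants from lessons 21 and 22; the header and the task template stay as before:

    # under spec:, replacing the previous steps template and adding the delay
    templates:
      - name: steps-template-parallel
        steps:
          - - name: step1
              template: task-template
          - - name: step2                # step2 and step3 share one inner list,
              template: task-template    # so they run in parallel
            - name: step3
              template: task-template
          - - name: delay
              template: delay-template
          - - name: step4
              template: task-template
      - name: delay-template
        suspend:
          duration: "10s"                # suspend execution for ten seconds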
But this actually doesn't matter. So that's it; let's close it. In our directory we copy the path, open the command line, cd into the directory, and run kubectl -n argo create -f workflow-suspend.yaml. Let's check our UI, and we can see our workflow executing. Step 1 is executed, now step 2 and step 3; it just takes a moment, and after this there should come the delay. We can see the duration counting three, four, five, six, up to ten, and right after the delay, step 4 is executed. And that's it. By clicking on the delay, we can see more information about it.

23. Core Concepts - DAG Template: Hello. In this tutorial, we want to use the DAG template to rewrite the workflow definition we used two lectures before. If you remember, it was the workflow-steps-template-parallel that I have opened here, where we defined the steps using the steps template, and we had four steps. These steps I now want to rewrite using the DAG template. At first, let's rename the workflow; I call it dag-template, and this we can also use here as the entry point and here as the first template we define. We still have two templates: one was the steps template before and will now be the DAG template, and one is the task template. The task template we don't change at all; it stays as it is. Now let's rewrite those steps with the DAG. A DAG consists of tasks: what steps are in the steps template, tasks are in the DAG template. So here we just have to rename steps into dag, and one thing we should not forget is to define our tasks. Then we can just start rewriting the steps. Each task starts with a name, as always; you could also name it step, but I rename it and call it task1, and the template is the same. Then we come to step 2: I rename it and call it task2, the template is the same task-template, and here we want to define some dependencies. Using DAGs, we have to define when a task should be executed: if we don't define any dependencies, all tasks are executed immediately once the DAG template starts. But we want task one to be executed first, and only after successful execution of task 1 do we want to start task 2; therefore we have to define the dependencies, and here it is task1. After this, we rewrite step three, rename it to task3, the template is task-template, and the dependency is task1 as well. Because before, step two and step three were executed in parallel, and now, since we defined task1 as the dependency for both task two and task three, they will also be executed in parallel. Then we have step four: we rename it to task4, the template is the same task-template, and the dependencies are task2 and task3. That's it. We close the file, copy the path of our directory, open the command line, and cd into the directory. There we have our workflow-dag-template.yaml, and kubectl -n argo create -f; it is created. Let's take a look at the Argo UI. Here we have our dag-template workflow, and we can see task1 already executed, and now task two and task three in parallel. If we check the logs, "task executed", the same for task three and task two, and for task four as well.
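The rewritten definition with a dag section, sketched from the narration:

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      name: workflow-dag-template
    spec:
      entrypoint: dag-template
      templates:
        - name: dag-template
          dag:
            tasks:
              - name: task1
                template: task-template
              - name: task2
                template: task-template
                dependencies: [task1]
              - name: task3
                template: task-template
                dependencies: [task1]            # same dependency as task2, so both run in parallel
              - name: task4
                template: task-template
                dependencies: [task2, task3]
        - name: task-template
          script:
            image: python:3.8-slim
            command: [python]
            source: |
              print("task executed")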
So right now, what we can see here looks exactly the same as with the steps template, but a DAG is a bit different: it is a directed acyclic graph, and you have to define the dependencies, not just the steps, so there is a bit more flexibility, let's say. But in the end, the graphical result of whatever you define will be the same.

24. Core Concepts - Exercise1 Introduction: Hello and welcome to the first exercise of this course. To solve this exercise, you have to apply everything you learned in this chapter. In this exercise, there are four different kinds of tasks, defined here. Task A: print to standard out with a script template "Task A executed successfully with script template". Task B: print to standard out with a container template "Task B executed successfully with container template". Task C: print to standard out with a resource template in a new workflow, and you have to print "Task C executed successfully with resource template". And there is task D: a time delay of five seconds with a suspend template. Here you can see what the workflows should look like. Workflow number one consists of task A, after that task B, after that again task B, after that again task B, after that task D, and then again task A. And there you have workflow number two: this workflow should start after task A in workflow one has successfully executed, and it contains task C. And task D in workflow one should only execute after task C in workflow two has executed. There are basically several ways to solve this exercise, so I wish you good luck and have fun.

25. Core Concepts - Exercise1 Solution: Hello and welcome to the solution of the first exercise. I have opened a blank file in Notepad++ and I am going to start immediately with the solution. The first thing we need is the API version, and there we choose argoproj.io/v1alpha1. The kind of the resource is Workflow, and in the metadata we choose a name; I call it workflow-exercise1. Then we have the spec. In the spec section we have the entrypoint, which I just call dag-template, and we have the templates. I call the entry point dag-template because my solution will be a DAG, a directed acyclic graph. Here we can go with the first template in the templates section, and this is our entry point, the dag-template, a DAG containing tasks. But before I continue writing all the tasks we need, I first want to write all the other templates we need for the different kinds of tasks. There is task A, printing to standard out using the script template, so let's write this task. We first need a name; I call it task-a-template, and we use the script template with the image python:3.8-slim, the command python, and then the source: we go with the pipe operator and print "Task A executed successfully with script template". The next task we need prints to standard out as well, but using a container template, so we call it task-b-template. Then container, image python:3.8-slim, and as the command I use echo and print out "Task B executed successfully with container template". The next one is task C; we call it task-c-template, and for task C we have to use the resource template, because we want to create a new workflow that prints to standard out. So there we use resource, and then we have to define the action, which is create.
Then we define the manifest with the pipe operator, and here we go with apiVersion argoproj.io/v1alpha1, kind Workflow, and in the metadata the name, for which I use workflow-resource-template. In the spec there is the entrypoint; I call the entry point resource-template, and in the templates section there is a template named resource-template that is our entry point. There I just use a script with the image python:3.8-slim, as always, the command python, and in the source a simple print statement, as always: we just print out "Task C executed successfully with resource template". And finally, what we need is a delay using the suspend template. We go with a suspend template that I call delay-template, and then suspend with a duration of five seconds. Now we have basically defined all necessary task templates, and we can write the DAG template, so the tasks in the DAG template. There we go with task1, which uses task-a-template. After this comes task2; the template we want to use is task-b-template, and we have to define dependencies, because we want it to be executed only after task1. Then we go with task3: the template is our task-c-template, and the dependency is task1 as well. This one is just creating a new workflow, so it is just executing task C. After task 2, there is again task B, so we call it task4, template task-b-template, dependencies task2. Then we can just copy and paste this one for task5, again task B, with task4 as the dependency. After this, there is task6; the template we want to use is the delay-template, and the dependencies are task3 and task5, so here comes the delay of five seconds only once task three and task 5 have finished. And then the final task is task7: again task A is to be executed, so we use task-a-template, and the dependency is task6. Now we can basically create this workflow. Let's close it, go to our directory to find it, there it is, workflow-exercise1, copy the path, open the command line or terminal, and cd into the directory. There we have it, and kubectl -n argo create -f workflow-exercise1.yaml, and there is obviously a problem. Let's find out what it is: as written here, there is a problem in line 45. Let's open the workflow definition again and go to line 45, and here we can see that there is actually a problem in line 44: we forgot a colon. We write it, save it, and try again with kubectl, and now it was created. Let's check our Argo UI, and here we can see our workflow-exercise1 is running, the first task is already executed, and there we have task two and task three. We can check that another workflow should actually be created, so let's go here, and we can see workflow-resource-template with ten seconds of duration. Let's go back to our actual main workflow, and here we can see task 7 is already running, so we actually have everything. Task 1 should be task A: "Task A executed successfully with script template", and this is correct. Three times we have task B with "Task B executed successfully with container template", here the same, and here the same. Then we have, in parallel, task 3, which is creating the other workflow; here we can see the log, and we can check the workflow-resource-template.
What is printed out is "Task C executed successfully with resource template". Task 6 should be a delay with a duration of five seconds, and for task seven we again have "Task A executed successfully with script template". So this was our solution of exercise one.

26. Workflow functionalities - Output Logs to MinIO: Hello and welcome to the lecture about archiving logs to the object storage MinIO. I have already started the minikube cluster and opened the Argo Server UI, and you can see there are still workflows from the last chapter. If we take a look at the workflow-resource-template workflow, there we have one task, and under output artifacts we can see that the main logs are archived, in this case to the MinIO object storage. The MinIO object storage was installed during the installation of Argo Workflows; it comes by default, and a pod was created for it. If we take a look with kubectl -n argo get pods, we can see that there is actually a MinIO pod, and this we can use to archive our logs. Let's first take a look at the MinIO UI. To be able to access it, we have to port-forward the MinIO port for the UI server, so we run kubectl -n argo port-forward for MinIO and choose the port 9000; here I just have a typo, it is port-forward and not post-forward, and here we go. Now we can go to our browser and type localhost:9000, and at first we have to enter the access key and secret key. By default it comes with the access key admin, and the secret key is just password. And here we are: we have our MinIO UI in the browser, and we can see that there is a bucket called my-bucket. This is the only bucket right now, and we can see it is totally empty; nothing is saved. Why is it like this, when we could see that the main logs should actually be archived to our MinIO object storage? It is because, after executing this workflow, actually both workflows from the last chapter, I stopped the minikube cluster and restarted it afterwards. This means that in the meanwhile the MinIO pod was terminated and started again, so all logs that were archived before are not there anymore, and we have a totally blank MinIO object storage, we can say. What will happen if we execute this again? We can go to our workflow and just easily resubmit it. Here we can see that a new workflow is created with basically the same name, but with five alphanumeric characters appended in order to have a unique name, a unique identifier. We take a look here and then into the output artifacts, and you can see our main logs. Now we can check our MinIO; we just refresh, and here we can see there is a new folder, or rather a new part of the key, since object storages are key-value storages. And take a look: in this directory, under the name of our workflow, the logs are saved. We can also download it and open it, and here we can see "Task C executed successfully with resource template". So what if we don't want the logs to be archived, and where can we change those settings? Let's open another command line window and run kubectl -n argo get configmap.
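For orientation, the relevant part of the workflow-controller-configmap looks roughly like this; the secret name and keys shown here are the quick-start defaults and may differ in your installation, so check your own config map:

    data:
      artifactRepository: |
        archiveLogs: true              # set to false to stop archiving logs
        s3:
          bucket: my-bucket
          endpoint: minio:9000
          insecure: true
          accessKeySecret:             # Kubernetes secret holding the access key
            name: my-minio-cred
            key: accesskey
          secretKeySecret:             # Kubernetes secret holding the secret key
            name: my-minio-cred
            key: secretkey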
Here we can see that we have one config map, workflow-controller-configmap, and whatever settings we can set for the Argo workflow controller, or for Argo at all, are in there. We can take a look at our config map with kubectl -n argo describe cm workflow-controller-configmap, where cm is short for config map, followed by the name of the config map. And here we have our config map with all the settings that are made. What is important for us right now: under the data section there is the artifact repository section, and there we have the flag archiveLogs. Right now it is set to true; if we set it to false, no logs will be archived anymore. And there is the s3 subsection: there the bucket is specified, so which bucket to save to, my-bucket, which was created as well, the endpoint, and the access key secret and the secret key secret. So for the access key and secret key, Kubernetes secrets were created, and here we just define which secrets we want to use. This all came by default, but we can change it. What we are going to change right now is to set archiveLogs to false. To do so, we run kubectl -n argo edit configmap workflow-controller-configmap. Now the workflow-controller-configmap is open, we go to the artifact repository section, to archiveLogs, set it to false, save it, and close it. Now we can see "configmap/workflow-controller-configmap edited". We go to our Argo UI and resubmit the workflow again, so another workflow is created. Let's go to it, wait until it is finished, and go to output artifacts, and you can see "no data to display", because we changed the archiveLogs flag to false, so no logs are archived to any object storage. The logs are still there, for sure: when we go to the logs, "Task C executed successfully with resource template", and for sure on our MinIO object storage there is no new folder. So let's change it back: we edit the config map again, set it to true, save it, and close it. It takes a moment, so let's just wait. Now we can resubmit this workflow again; another workflow is created, let's just wait, and under output artifacts we can see the main logs again. We go to the MinIO browser, and you can see, under the new workflow name, that another folder was created, and here we have our main logs. And that's it: it is just dead easy to archive logs to an object storage with Argo Workflows. Thank you for your attention and see you soon.

27. Workflow functionalities - Installation ArgoCLI: Hello. Besides the command line tool kubectl that we already used, there is another command line tool especially for Argo: the Argo CLI. I have opened the GitHub repository argoproj/argo under releases, and here we can see that there are detailed instructions for installing the Argo CLI on Mac and Linux; just follow these steps, it is pretty easy and fast. For all who work with Windows, I will show how to install it on Windows. Here we go down and find the argo-windows-amd64.gz archive, and we just download it. This I can close, and we go to Downloads; you can see I already downloaded it before, so there is another one now. I take it and unzip it, so I just extract it here.
Then I take this and go to C:\Program Files and create a new folder that I call ArgoCLI. Here I move my extracted folder, go inside the directory, and there we have this file, which we rename to argo.exe. Now we copy this path and go to our environment variables, "Edit environment variables for your account". I find the Path variable, click Edit and then New, and paste the path to our argo.exe. OK, OK. Now we can open a command line and just type argo version, and we should see the version output. It is just this easy to install the Argo CLI.

28. Workflow functionalities - Input Parameter: Hello. In this lecture, we are going to see how we can use input parameters in our workflows. As a starting point, I used the DAG template we defined in the last chapter. Just to remind you: here we can see that we have a DAG template containing four tasks that depend on each other, and we have a script template that just executes a print statement. What we want to achieve now is that each task prints out a different statement. This means we have two options: the first option is to create a separate template for each task that executes another print statement, or we define input parameters in our task template, and in this way each task inside our DAG template can call the same task template, but with another input parameter. The first step is to add another section to our task template, on the same level as the name or the script, and this section is inputs, with a subsection parameters. Here we want to have a parameter that we just call text. Inside our source script, we add one line where we define a variable, for example p, and here we want to use our defined input parameter text. To do this, we have to write it in a specific way: we use double quotes and two curly brackets, and inside we write inputs.parameters.text, so we specify that we take it from the inputs section, parameters, and the parameter text; that is the right syntax, just to not make a mistake. The next step is to create another subsection of the spec section, on the same level as the entrypoint, and here we create the section arguments, then parameters. Here we define what our parameters and their default values will be, so what we want to print out. I just call the first parameter message1; this is what task one should print out, and as the value we say "Task 1 is executed". This we do for each task: I create a parameter message2 with the value "Task 2 is executed", then message3 with the value "Task 3 finished", and message4 with the value "That's it with task 4". So now we have four different print statements, or four different parameters. The next step is that we also have to declare them in our DAG template, because right now we have defined our arguments with these parameters in the spec section, and we have our task template where we also defined the input parameter. Now we have to declare, inside our DAG template, not inside the tasks at first but at the level of the DAG template, that we also define inputs and then parameters: parameter one with the name message1.
So here we say: OK, the DAG template wants to use inputs with the parameter message1, and we just do the same for all the parameters: message2, message3 and message4. Finally, we have to tell each task inside the DAG that when it calls the task template, it should pass arguments. So we add arguments with parameters, open square brackets (which we can close again right away), then curly brackets, and say we want the parameter with the name text. Here we tell Argo: this parameter of our task template has the name text, and we want to give it a value, using the same syntax as before, double quotes and two curly brackets. For task 1 the value is "{{inputs.parameters.message1}}", so the parameter named message1 from our inputs.parameters section. The same we do for each task: for task 2 we use message2, for task 3 message3, and for task 4 message4.

Now we can save the workflow definition. We go to our directory, copy the path, open the command line and cd into the directory, and there we have our workflow-input-parameter-dag file. Before, we were using kubectl; now that we installed the Argo CLI, we will use it from here on: argo -n argo submit workflow-input-parameter-dag.yaml. So this is our argo command; as before we say we are going to use the namespace argo, and then there is the submit command to submit the workflow. Let's see what happens. Here we already see one advantage of the Argo CLI: we get more information whenever we submit a workflow. We see the name, namespace, service account, status and creation time, and we can see the parameters we defined in our workflow definition.

Now let's go to our Argo Server UI, and we can see our workflow and check the logs. Task 1 prints "task executed", task 2 prints "task executed", task 3 as well — so that's not what I expected; I did something wrong. Let's go back to our workflow definition and check. Right: in our script template we created the variable p using the parameter text, but we were still printing the hard-coded string. So here, of course, I want to print p, the variable. One more thing I notice is that the workflow is still named workflow-dag-template, so I rename it to workflow-input-parameter-dag, just so we don't get confused in case you still have your old DAG workflow on your Minikube cluster.

We save it and go back. We can submit it right away, because it has another name now, so let's do it. We can see the new name and our parameters. Let's check the Argo UI again: here is our new workflow, task 1 is executed, let's wait for tasks 2 and 3. We can already check the logs: now there is what we expect, "Task 1 is executed". The logs of task 2 say "Task 2 is executed", task 3 says "Task 3 finished", and the same for task 4. So now each task prints out what we defined as its parameter.

Now let's go back to the command line and see what else we can do with the Argo CLI. We can run argo -n argo list.
With the list command we can see all the workflows we have; right now there are two, exactly the same as in the Argo UI. Let's just try to submit this workflow once again: we can see that this workflow already exists, so we cannot do it. If we want to submit exactly the same workflow with the same name again, we first have to delete it. This we do with argo -n argo delete and then the workflow name, workflow-input-parameter-dag. Now it is deleted, and if we run argo -n argo list, we can see there is only one workflow left.

Now we have the possibility to either use the parameters already defined in our workflow definition, or to override them. We can do this: argo -n argo submit workflow-input-parameter-dag.yaml, then the argument -p, and we override, for example, message1 with "parameter used from terminal". Let's submit it: we can already see that message1 is what we just specified, "parameter used from terminal". Let's check the Argo UI; it should be task 1 that prints it, and right, the logs say "parameter used from terminal".

And there is yet another possibility: we can define a parameter file. I just create a new file and write message1: "parameter one from parameter file" and message3: "parameter three from parameter file", and save it in our directory as parameters.yaml. Now we can submit the same workflow with the parameter file, but before we do, we have to delete the workflow again: argo -n argo delete and the name. Then we submit it again, this time with the --parameter-file argument and parameters.yaml. Here we can immediately see that message1 is "parameter one from parameter file" and message3 is "parameter three from parameter file". Let's check the Argo UI: task 1 prints "parameter one from parameter file" and task 3 prints "parameter three from parameter file". Task 2 prints "Task 2 is executed", so that is the default parameter from our workflow definition. That's it about input parameters.

29. Workflow functionalities - Script Results:

Hello. In this lecture we are going to see how we can use the output of a script template in one task as the input of another task. I opened the workflow definition from the last lecture, where we used input parameters. What we want to do now: task 3 should call another template, not the task template; we have to define a new template that prints something to standard out using a script template, and in task 4 we just want to use the output of this script template as the input.

So let's first define the new template. I call it task-output, and it is a script template using the image node:9.1-alpine. This time I decided to use another image than the usual python:3.8-slim; for sure we could also use python:3.8-slim, but let's just use this one. We want to call the command node, and under source, after the pipe operator, we define a variable with var: we call the variable out, and it is a string.
Its value is simply the string "print result". The next command is console.log(out). Just to clarify: in Node.js we write a semicolon after each command (in Python we don't need it, here we do), and printing to standard out is done with console.log.

Now we want to call this template inside task 3 of our DAG. The dependencies stay the same, we don't need any arguments, and the template we want to call is task-output. In task 4 we keep the dependencies and the template as they are; for the parameter value we can use the fact that whenever a script template runs, its standard output is available to other tasks. So how do we use it? Where we previously referenced inputs.parameters.message4, we now write tasks.task-3.outputs.result — it's just that easy. Task 4 should now print out what was already printed in task 3, because it takes task 3's output as its input. The parameters we no longer need (the ones that were only there for tasks 3 and 4) we can simply remove, and we rename the workflow to workflow-script-result, and here we go.

Now we go to our directory: I copy the path, open the command line and cd into it. There we have our workflow-script-result file, and I run argo -n argo submit workflow-script-result.yaml. We can see the workflow workflow-script-result is created. Let's look at the UI: there is the workflow, task 1 is running. Let's check the logs: "Task 1 is executed". We wait for tasks 2 and 3. In task 3 we should see "print result" in the logs, and indeed there it is. And the same for task 4, because the output of task 3 is used here as the input, and the input is printed out. Let's check: "print result" as well. So we saw how to use what one task prints to standard out as an input in another task. We could define multiple tasks, and all of them, or just some of them, could take as input what is printed here in task 3.

30. Workflow functionalities - Output Parameter:

Hello. In the last lecture we used the standard output of a script template as an input in another task. In this lecture we want to explicitly define an output parameter in order to use it as an input in another task. I opened the workflow definition from the last lecture, workflow-script-result, and first I rename it to workflow-output-parameter. What we want to do: task 3, as you remember, calls the template task-output, where we just print something to the console with console.log. Now we want to explicitly define an output parameter that can be used in task 4. Before, we just used the standard output via outputs.result; now we define a parameter explicitly. So let's add another section on the same level as script in our task-output template, and we call it outputs. Basically it is the same as the inputs section: we have the outputs section with parameters and a name; I call it task-param, for task parameter, with the value "task output parameter". And now what we have to change is to tell task 4 that it should take its input not from the standard output, but from our outputs section.
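As a sketch, the task-output template with its explicit output parameter now looks roughly like this (reconstructed from the narration; the literal value worked in the lecture, so I keep it as-is here):

    - name: task-output
      script:
        image: node:9.1-alpine
        command: [node]
        source: |
          var out = "print result";
          console.log(out);
      # explicitly defined output parameter, in addition to the stdout result
      outputs:
        parameters:
          - name: task-param
            value: "task output parameter"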
So in task 4 we reference tasks.task-3.outputs, then the section parameters, and then the name of our parameter, task-param — and that's it already. Let's submit the workflow: we close the file, go to our folder, copy the path, open the command line and cd into the directory. Now I use the Argo CLI again: argo -n argo submit workflow-output-parameter.yaml, and here we go. Let's take a look at the Argo UI: here is our workflow, it is executing, task 1 is already finished. Let's look at task 3: it still prints "print result" to standard out, as in the last lecture, but additionally we defined an output parameter, and this output parameter is used as the input parameter in task 4 and should be printed there. Let's look at the logs, and there is what we expect: "task output parameter". So everything works: we can define output parameters explicitly in tasks and use them as input parameters in other tasks.

31. Workflow functionalities - Output Parameter File:

Hello. In this lecture I want to show you how we can use the content of a file that is written inside the container of one task as an input parameter in another task. Before, we learned how to exchange parameters by defining them as outputs in one task and as inputs in other tasks; now let's see how to create a file inside a task and use it as input elsewhere. I opened the workflow definition from the last lecture, and the first thing I do is rename the workflow to workflow-output-parameter-file, because we are going to create a file inside task 3. As you remember, task 3 calls the task-output template; before, we printed something to the console and output a parameter, and now we want to save something into a file inside the container created by this task.

So let's do this. First I rename the variable to par, and I just want to write a string that should be saved inside our file: "whatever parameters written to the file". This can be anything, basically — key-value pairs, JSON, whatever. Then, in JavaScript, we want to import the file system module fs: const fs = require('fs'). The console.log line we can delete, and then we call fs.writeFile. Here we have to specify the path where we want to save the file; I just take /tmp/output_params.txt, and I write the value of our variable to that file.

So we wrote the file and saved it inside our container, and now we want to use its content as an output. The parameter name we can keep as it is; just one thing: we have to use valueFrom instead of value, and say we want the value from a path, using the same path as for our file. And that's basically it. Now it can be used as an input parameter in every task after task 3. In task 4 we can keep everything as it is, because we didn't change the parameter name: it is still tasks.task-3.outputs.parameters and then our parameter name.

Now we save it, close it, and go to our directory; I copy the path, open the command line and cd into my folder. There we have our YAML definition, and I run argo -n argo submit workflow-output-parameter-file.yaml. It is created, and we go to our Argo UI. Here we can see the workflow.
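While the workflow runs, here is roughly what the changed template looks like now (a sketch; the file path is my reconstruction of what the lecture types, and I use the synchronous writeFileSync — the lecture uses the asynchronous fs.writeFile without a callback, which is what produces the deprecation warning we will see in the logs):

    - name: task-output
      script:
        image: node:9.1-alpine
        command: [node]
        source: |
          var par = "whatever parameters written to the file";
          const fs = require('fs');
          // write the value into a file inside the container
          fs.writeFileSync("/tmp/output_params.txt", par);
      outputs:
        parameters:
          - name: task-param
            # take the parameter value from the file instead of a literal value
            valueFrom:
              path: /tmp/output_params.txt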
It is executing; task 1 is finished. Let's wait for the other tasks, 2 and 3. Let's take a look at the logs of task 3: before, we printed something to the console; now we don't print anything to the console anymore, so there is just a warning (if this warning weren't there, there would be nothing at all). The only thing we do inside task 3 is write a file with our parameters, or whatever content we want to exchange between tasks of the workflow. And in task 4 we use the content of the file from task 3 as an input, and it should be written to standard out. Let's check the logs, and there it is: "whatever parameters written to the file". So this is a nice way to exchange several parameters, or whatever content, between tasks.

32. Workflow functionalities - Artifacts:

Hello. In this lesson we are going to learn how to save files as artifacts on the MinIO object storage and how to use these artifacts as inputs in a workflow definition. I opened the workflow definition from the last lesson, where task 3 calls the template task-output; there we were just writing a file and outputting the content of the file as a parameter, which we used as input in task 4 and printed to standard out. Now we want to take this file that we write, output it as an artifact and save it to the MinIO object storage, and in task 4 we want to use this artifact as input and print out its content.

At first, let's rename our workflow; I call it workflow-artifact. Then we go to task 3: here I want to call the template task-output-artifact, so I copy this name, go to the template and rename it as well. Then we go to the outputs section of this template. Here we have parameters; this we change to artifacts. The name we also change, to artifact-out, and then we only need the path — the path stays the same, it is just the path of our saved file. And that's basically it for this task: it writes a file inside its container and saves this file as an artifact to our MinIO object storage.

Now to task 4: we want to take this artifact as input and finally print its content to standard out. So I am going to create a new template, and I call it task-input-artifact. In the task's arguments section we don't want to use parameters anymore but artifacts. The name can stay text, but instead of a value we have to use from: tasks.task-3.outputs, but this time artifacts, and then our artifact name, artifact-out.

Now we can create the new template with the name task-input-artifact. We have to define its inputs: here we want artifacts with the name text, just as we defined it where we call the template, and then we have to set the path where the artifact should be placed; I just use /tmp/text. Then we use a script template with the image python:3.8-slim, command python, source with the pipe operator, and then my Python script: with open, and as the argument I pass the same path I used above, opening the file in read mode; I read it with f.read() into lines, and finally I print the lines. Right now we don't really have multiple lines, we only have one line here, but that's it.
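Putting the artifact plumbing together, the parts described above look roughly like this (a sketch; paths and names follow the narration as closely as I can reconstruct them — note that the input artifact path has to match the path the script opens):

    # producing side: save the written file as an artifact
    - name: task-output-artifact
      script:
        image: node:9.1-alpine
        command: [node]
        source: |
          var par = "whatever parameters written to the file";
          const fs = require('fs');
          fs.writeFileSync("/tmp/output_params.txt", par);
      outputs:
        artifacts:
          - name: artifact-out
            path: /tmp/output_params.txt

    # consuming side: take an artifact as input and print its content
    - name: task-input-artifact
      inputs:
        artifacts:
          - name: text
            path: /tmp/text
      script:
        image: python:3.8-slim
        command: [python]
        source: |
          with open("/tmp/text") as f:
              lines = f.read()
          print(lines)

    # and in the DAG, task 4 passes the artifact along:
    #   arguments:
    #     artifacts:
    #       - name: text
    #         from: "{{tasks.task-3.outputs.artifacts.artifact-out}}"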
This should already work, if I didn't make any mistake. So let's go to our directory using the terminal: I open the command line and cd into my directory. There we have our workflow artifact definition, and I run argo -n argo submit workflow-artifact.yaml, and here we go. Let's go to the UI: here we can see workflow-artifact. Task 1 we already know; it completes. Then we wait for task 3: in task 3 we expect no logs except the warning, as before, but what we do expect is that under output artifacts, where before we only had the main logs, we now also have artifact-out from workflow-artifact's task 3.

Oh — there is a mistake, there's an error. Let's see what happened: failed. Let's check the logs: yes, here I definitely made a mistake. For the input artifact I used the path /tmp, but it should be /tmp/text, the same path the script opens. So we go back, change the path to /tmp/text, and save it. Let's submit again; now it takes just a bit longer. workflow-artifact is created, and here we see task 1 again. We wait for task 3 and task 4, check the output artifacts — as expected — and task 4 finished successfully. In its logs we see, as expected, "whatever parameters written to the file"; this is the content of our file, of our artifact. Let's also check our MinIO object storage: we refresh, and there we see a workflow-artifact folder with some subfolders. Task 3 is the one ending in 37, and inside it we can see our main log and our artifact-out. So that's it about artifacts.

33. Workflow functionalities - Secrets as environment variables:

Hello. In this lesson we are going to learn how to use Kubernetes secrets as environment variables inside our workflows. To do this, we first have to create a secret, so I already opened the command line terminal. First we create a text file where we save our secret. Let's assume we want to create a password that we want to use inside our pods, inside our workflows, as an environment variable. So I run: echo password123 > test_password.txt; password123 is just the password I use. Now, on Windows, when I open the file I can see that a newline was added at the end; this I have to remove manually, otherwise it won't work.

Now we want to create the secret. We do it with kubectl -n argo create secret generic, and then the name of the secret; I just call it test-secret. Then I specify the --from-file argument, where we tell under which key the file content should be stored, so a key-value pair: I want to create a key inside my secret with the name test_password, and then I give the name of the file — or better, the path of the file; since we are already inside the directory where the file is, the filename is enough. And we can see: test-secret was created. Let's check with kubectl -n argo get secrets: here we see all the secrets in the namespace argo. This is our recently created test-secret, 16 seconds old, and here we can see the other secrets that were created during the installation of Argo Workflows.
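As an aside, instead of creating the secret imperatively you could also apply it as a manifest (a sketch, assuming the key name test_password; with stringData the value does not have to be base64-encoded):

    apiVersion: v1
    kind: Secret
    metadata:
      name: test-secret
      namespace: argo
    type: Opaque
    stringData:
      test_password: password123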
If we want more information about our test secret, we can run kubectl -n argo describe secret test-secret. We see that it has the name test-secret, the namespace argo, and under data we see the key-value pair — well, without the value actually, we only see the size of the value, but we see the keys. Right now we have only one key, with the name test_password.

Now we can create our workflow. I opened the workflow definition from the last lesson: we have a DAG with our four tasks. The first two tasks we keep almost as they are; the last task we don't need anymore, so I delete task 4, and I also delete the template task-output-artifact, where we used the node:9.1-alpine image, along with the artifact input template. Now let's first rename our workflow: I call it workflow-secret-env, env for environment. Let's also change the dependencies: task 1 and task 2 will be executed in parallel without dependencies, and task 3 depends on task 1. In task 3 we call the template, which I also rename: I call it task-secret-env, and this will be the template that we call.

Let's go to that template. The inputs section we don't need anymore; we only use a script. The image is OK, the command is OK, the source is almost OK — no, it's not OK, we want to write something else as the script. Since we want to use the secret as an environment variable, in Python we import the built-in os module, and then print(os.environ["test_password"]), so we print the content of the environment variable test_password. Now we need to define this environment variable, and we do it with an env section. There we say the name should be test_password — this is the name we use in our script — and then where the value comes from: valueFrom, secretKeyRef, and then the name of the secret, which was test-secret, and the key we want to use, which was test_password. And that's it.

Now we save it, close it, and run argo -n argo submit workflow-secret-env.yaml. Let's go to the UI: our workflow is executing, task 1 and task 2 finished, and we wait for task 3. It finished successfully; let's look at its logs, and there we see the content of our key — what we wrote into our file and keep in our secret: password123. So in this way we can easily use secrets inside our pods, inside our workflows, as environment variables.

34. Workflow functionalities - Secrets as mounted volumes:

Hello. In the last lesson we used the Kubernetes secret as an environment variable inside our pod, and in this lesson we are going to use the same Kubernetes secret, but mounted as a volume inside the pod. To do so, I am going to change the workflow definition of the last lesson according to our needs. I opened the definition, and first I change the workflow name: I call it workflow-secret-volume. Then, in the spec section, as a subsection, we have to define the volumes we want to use: I name the volume test-secret-vol, with secret and the secretName test-secret — this is our secret name; it could be any Kubernetes secret we want to use.
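Before we wire up the volume, here is roughly what the environment-variable variant from the previous lesson looked like in YAML (a sketch based on the narration; the lecture keeps the environment variable name identical to the secret key):

    - name: task-secret-env
      script:
        image: python:3.8-slim
        command: [python]
        env:
          # expose the secret key as an environment variable inside the pod
          - name: test_password
            valueFrom:
              secretKeyRef:
                name: test-secret
                key: test_password
        source: |
          import os
          print(os.environ["test_password"])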
So this is the first step: we declare that we want to use volumes, backed by a secret with its secret name. Then we go to task 3 and rename the template it calls: let's just call it task-secret-vol as well, here and in the template itself. Now we change the template task-secret-vol, and this time I will not use the script template: I thought we could use the container template for once, since we have been using the script template the whole time. Basically we can do the same thing with the container template as with the script template; the script template is just a bit more convenient, I would say — but let's see.

For the container we use, as always, the same image, python:3.8-slim. The command we want is python -c — that is the first difference — and then we use args, where we write our command, or rather our commands. Before I write the commands, we add the volumeMounts: we want to mount the volume we defined above, so the name has to be test-secret-vol, the name we gave the volume backed by the secret test-secret, and we have to define a mount path; here I just choose /secrets, so now we know where our secret will be.

Now we can write our command. I write: with open, and I want to open the file that is in /secrets — and then we actually have to use the key of the secret: above we defined the mount path, and we set which secret should be used, and inside this directory the file name is the secret key, so test_password. We open it in read mode. Everything we type here, all the commands, are basically on one line, unfortunately — as I said, using the script template is a bit more convenient. We read the file, lines = f.read(), and we should not forget the semicolon to separate the commands, and then we print the lines. Now it should actually work.

Let's save it and go to our directory, open the command line and cd there. There we have our workflow-secret-volume file. Just before we submit the workflow, let's look at the secrets again with kubectl -n argo get secrets — there is our test-secret — and kubectl -n argo describe secret test-secret — there is our key test_password. So this should be fine, as we referenced it. Now we run argo -n argo submit workflow-secret-volume.yaml and take a look at the Argo UI. There is our workflow, running; task 1 and task 2 finished, and we wait for task 3. Let's look at the logs, and there we have our password123, as expected. So in task 3 we took the Kubernetes secret, mounted it as a volume, read it from the mounted volume in our script using the container template, and just printed it to standard out.

35. Workflow functionalities - Loops:

Hello. In Argo Workflows we also have the possibility to build a loop over a list of elements, and this is what we are going to implement in this lesson. I opened the workflow definition from the last lesson, and let's change it according to our needs. First I rename the workflow to workflow-loop, and what we don't need anymore is the volumes section, so let's delete it. OK, so here we still have task 1 and task 2.
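For reference, the mounted-volume variant from the previous lesson boils down to roughly this (a sketch; the mount path /secrets and the key name are taken from the narration):

    spec:
      # the secret is made available as a volume at the workflow level
      volumes:
        - name: test-secret-vol
          secret:
            secretName: test-secret

    # ...and mounted inside the container template:
    - name: task-secret-vol
      container:
        image: python:3.8-slim
        command: [python, -c]
        args:
          # with python -c the whole script has to fit on one line
          - 'with open("/secrets/test_password") as f: lines = f.read(); print(lines)'
        volumeMounts:
          - name: test-secret-vol
            mountPath: /secrets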
In task 3 I want to define a loop. The template we want to call will be the same task template as before, but we will loop over a list of elements. This means we don't need the task-secret-vol template anymore, so I just remove it. Now we have to add the arguments section to task 3, because we want to call the task template, which needs the input parameter text. So: arguments, parameters. There are actually two ways to write a parameter: either inline, as we did before, or the other way, where we put name: text on one line and the value below it. The second form has the advantage, especially when you want to define several parameters, that you can just list all the parameters you need.

So what value are we going to use? We want to loop over this task template using a list of items, or elements. To do so, we add withItems and list element1, element2, element3. And to use this list, we just write double quotes, two curly brackets open and close, and inside just item: "{{item}}". With this combination we can implement a loop easily.

Now we save it, go to our directory — I copy the path, open the command line, cd into my directory — and run argo -n argo submit workflow-loop.yaml. Our workflow is running, so let's see what happens. Once task 1 finished, there are three tasks running in parallel: the loop is simply executed in parallel over the items. Let's check: here is task 3 with element1, and what do the logs say? "element1" — that is the expected output. And then element2 and element3. So it's just that easy to implement a loop in our workflow.

36. Workflow functionalities - Loops with Sets:

Hello. In the last lesson we were looping over a template using different items, and now we want to extend this: we want to loop not just over plain items, but over a list of sets — or, in a Pythonic way, a list of dictionaries. Let's imagine we have different tables and different methods to extract them: say we have a Python extractor, a PySpark extractor and a Dask extractor, and we provide a list that says which extractor to use for which table. Of course we are not actually going to implement different extractors here; we are just going to print out which extractor we would use, simply to save the time of writing those big templates and the code. But anyway, we are going to loop and state which extractor to use.

So first I rename this workflow to workflow-loop-sets. Then we go to task 3 and change the items: for each item, for each element, we are going to use a set, or a dictionary. Inside we have the key extractor — for example the Python extractor — and the key table, where we just say table1. The same we do for the other elements: for table2 we want to use the PySpark extractor, maybe because that table is much bigger than table1, and for table3 we want to use the Dask extractor. Now we have two keys in each element, and both of them should be used as arguments — but our task template only takes one argument. So first we have to create another template: I just copy and paste the existing one and call it, for example, task-loop-set.
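The items list we just wrote down looks roughly like this in the YAML (a sketch; the extractor and table names mirror the narration):

    withItems:
      - { extractor: python-extractor, table: table1 }
      - { extractor: pyspark-extractor, table: table2 }
      - { extractor: dask-extractor, table: table3 }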
As input parameters we then use not text anymore, but extractor and table. For the script template we use the same image and the same command; only, instead of actually applying different extractors, we just print out what we would use, so that we can see it really receives different values. So let's print "applying", then — with commas as separators — our input parameter, written as "{{inputs.parameters.extractor}}", then "to the table", and again an input parameter, "{{inputs.parameters.table}}". So, for example, if we use the Python extractor for table1, this prints "applying python-extractor to the table table1".

What is left? We have to change the template called in task 3: we want to call the template task-loop-set. And the arguments we have to change as well: as parameters we want extractor — the same name as in the template — with the value "{{item.extractor}}", it's just that easy, and a second parameter table with the value "{{item.table}}", since each item has a table and an extractor. The names here are the names of the input parameters inside our template.

Let's save it and go to our directory: I copy the path, open the command line, cd into my directory. There we have our workflow-loop-sets YAML, and we run argo -n argo submit workflow-loop-sets.yaml. Let's see what happens: here is our workflow in the Argo UI, and, as we expected, there are our three tasks looping over the list of sets. Let's look at what is in the logs: for table1, "applying python-extractor to the table table1"; then "applying pyspark-extractor to the table table2"; and "applying dask-extractor to the table table3".

37. Workflow functionalities - Loops with Sets as Input Parameter:

Hello. Now we want to use the same workflow as in the last lesson, where we were looping over a list of sets, or dictionaries — but this time we don't want this list hard-coded as values in the arguments inside our template; we want to pass it as an input argument of the workflow. I opened the workflow from the last lesson, and first we rename it again: we call it workflow-loop-sets-input-param. Then, inside our DAG template in task 3, we have to change how we loop. The arguments stay the same, but instead of withItems, where we defined our list explicitly, we use withParam. Here we can just say: let's assume we have a parameter — we still have to define it, we will do it right afterwards — and in our inputs parameters we are going to add one parameter, ingest-list. Right now we have the parameters message1 and message2, and we add ingest-list. The explicit list itself we can remove here, but we are going to reuse it in the arguments section of the workflow spec: let's cut it and paste it there. There we have to add the parameter as a workflow parameter as well, basically as its default value: we add the name ingest-list, then value, and here we can use the pipe operator; then we open a square bracket (and we can already put the closing bracket).
Inside we add our elements — our dictionaries, or sets, however you call them — one after the other, all of them comma separated, with the last item before the closing bracket. And that's it: now we are able to use ingest-list as a parameter, and what we wrote here are its default values. Task 3 calls the template task-loop-set and simply loops over this list, over the list from our parameter ingest-list.

Now let's save it, close it, copy the path, open the command line, and cd to the directory. We run argo -n argo submit workflow-loop-sets-input-param.yaml, and here we can see our parameters, including ingest-list. Let's take a look at the Argo UI — and here we can see that there was a problem: the withParam value could not be parsed as a JSON list. So what is the problem? Most probably it has difficulties parsing the list as JSON, as it says. So let's go to our list and try double quotes around each key and each value. We save it, go back to the terminal, submit it again, and see whether it is successful now. Now it is running, so obviously it has no problem parsing the list anymore. Let's take a look at the logs: "applying python-extractor to the table table1", "applying pyspark-extractor to the table table2". So yes, now it was successful.

What we have to remember is that we cannot write the list exactly the same way as we did inside the template arguments: if we want to use it as an input parameter, as an argument, it has to be proper JSON. And now we are able to override the parameter from the command line — or we could even supply this list of extractors from another task inside our workflow. If we had, for example, a database holding the right extractor for every table of our big process, there could be one task that reads this database and outputs the list as a parameter, as we did in the lessons about input and output parameters, and here we would just read it as an input parameter and loop over it easily.

38. Workflow functionalities - Dynamic Loops:

Hello. Now we want to create our loop in a kind of dynamic way. In the last lesson we defined our list of extractors as an argument, as an input parameter of the workflow; now we want to create this list inside the workflow, inside a task, and then loop over it as we did before. So let's change the workflow definition of the last lesson accordingly. First I change the name of the workflow again; I just call it workflow-loop-dynamic. In our DAG template parameters we don't need the parameter message2 anymore, and the parameter ingest-list we actually don't need anymore either — but let's keep its content around for now, because we want to reuse it and I don't want to type it again. From the parameters inside our DAG template we can remove it. Task 1 we keep as it is, and in task 2 we don't need arguments anymore: here we want to call a template named task-generate-list.
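To recap the previous lesson, moving the list into a workflow argument looks roughly like this (a sketch; note the double quotes that make the default value valid JSON):

    arguments:
      parameters:
        - name: ingest-list
          # the default value is a JSON list, so keys and values need double quotes
          value: |
            [
              {"extractor": "python-extractor", "table": "table1"},
              {"extractor": "pyspark-extractor", "table": "table2"},
              {"extractor": "dask-extractor", "table": "table3"}
            ]

    # and in the DAG task, withItems is replaced by:
    #   withParam: "{{inputs.parameters.ingest-list}}"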
This template, task-generate-list, we still have to define, so let's go down. The tasks template we still need, and here we want to generate the list, so we can basically just copy and paste that one. We adjust the name, and the inputs we don't need. Let's just print our list in this script — so how can we do it? First we create the list: a list of three tuples. The first tuple has python-extractor as its first element and table1 as its second; the second tuple is pyspark-extractor and table2; and the third tuple is dask-extractor and table3. The old print statement we don't need anymore, and at the top we have to import two libraries: json and sys.

Now what I want to do is create the same kind of JSON as before out of this list, and this I can do with json.dump. I use a list comprehension, looping for i in the list, and for each element I build a dictionary of the same shape as before: the key extractor with i[0], and the key table, as we had it, with the second element, i[1]. So if we think about iterating over this list: for the first tuple, meaning python-extractor and table1, this writes extractor: python-extractor and table: table1. One thing is left: we want to print this to standard out, which we can do by passing sys.stdout to json.dump. That's it for the template task-generate-list.

Now let's look at task 3: we want to take exactly what task 2 prints to standard out and use it here. How can we do this? With withParam: before, we referenced the input parameter, which we can now delete, and instead we use the output of task 2: tasks.task-2.outputs.result. And that's it.

Let's see whether everything works. I go to my folder, copy the path, open the command line, cd into my directory — there we have our YAML — and run argo -n argo submit workflow-loop-dynamic.yaml. Now let's take a look: workflow-loop-dynamic, here are task 1 and task 2 — and here something went wrong, it could not be parsed. So let's look at the YAML: what was the reason? Everything basically looks fine, I think. Let's look at the output of task 2: this looks good so far — square bracket, curly brackets, double quotes, colons; this looks good to me. But what we can see is that the dependency of task 3 is actually task 1, not task 2, so task 3 was already asking for the output, for its inputs, before task 2 had finished. Let's change this: we definitely want the dependency to be task 2, not task 1, so we delete that one and submit again. Now let's see what happens: task 1 and task 2 are executing, and now we can see it looping over the list after task 2 finished successfully. So now it succeeded.
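The dynamic version we just got working looks roughly like this (a sketch reconstructed from the narration; the variable name extractor_list is mine):

    - name: task-generate-list
      script:
        image: python:3.8-slim
        command: [python]
        source: |
          import json
          import sys
          # in a real workflow this list could come from a database query
          extractor_list = [("python-extractor", "table1"),
                            ("pyspark-extractor", "table2"),
                            ("dask-extractor", "table3")]
          json.dump([{"extractor": i[0], "table": i[1]} for i in extractor_list], sys.stdout)

    # task 3 then loops over whatever task 2 printed to standard out:
    #   - name: task-3
    #     dependencies: [task-2]
    #     template: task-loop-set
    #     withParam: "{{tasks.task-2.outputs.result}}"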
And in the logs we can see "applying python-extractor to the table table1", "applying pyspark-extractor to the table table2", and so on. So that's it: a kind of dynamic looping, creating the list inside task 2 and then using it. As I already mentioned, you can imagine that instead of hard-coding the list in the script of task 2, we connect to a database or a server, query the list from there, and then use it in task 3.

39. Workflow functionalities - Conditionals:

Hello. In this lesson we are going to look at how we can use conditionals in our workflows. Until now we used dependencies to make the execution of a task dependent on other tasks; now we want to define additional conditions to make our execution, our dependencies, more flexible. I opened the workflow definition from the last lesson about dynamic loops, and I will change it according to our needs. First let's rename the workflow: I call it workflow-conditional, for example. Here are our parameters; let's remove what we don't need. What we want now is task 1 as a kind of decision task, so in task 1 I want to call a template called task-decision; I rename it accordingly, and we can basically leave the rest as it is: we have our task-decision and the input parameter we are going to use. We want two possible messages: message-a with the value a and message-b with the value b, and those are now our parameters, message-a and message-b. We just simulate taking a decision here — actually we determine what the decision will be, because we tell the decision task which message to use. So we just say: for now, use message-a, and then it will print out a. The rest we can remove.

Now we want to define two different templates. Let's say we have a template executing task A: I copy and paste the existing one, remove the inputs, call it task-a, and it just prints "task A was executed". The same thing we do for task B: we create a template task-b that prints "task B was executed". Based on our task-decision, either task A or task B should be executed, so in the DAG we define a task a and a task b: task a calls the template task-a — this template here — and task b, for short, calls the template task-b. They are actually kind of placeholders: we can imagine that task A does one thing and task B does another thing; for us the only difference is that task A prints "task A was executed" and task B prints "task B was executed".

Now we want to define the dependencies. First: task a should only be executed once task 1 finished, and the same for task b. But actually only one of the two should run, and there is a condition depending on whether a or b was printed: if our task-decision prints a, we should execute task a, and if it prints b, we should execute task b. How can we do this? We can define additional conditions with when — there is a subsection when.
There we define our condition: we use tasks.task-1.outputs.result — what is printed to standard out — and if this is a, we want to trigger task a. The same we do for task b: if there is a b, then we execute task b. Now let's also define a task 2, in addition to the first one, and there we just say that message-b should be used. And we add a task a2, also calling the template task-a, with the dependency task-2 and the condition that the output of task 2 is a; and a task b2, with the dependency task-2 and the condition that the output of task 2 equals b. So: task 1 prints a, which means that because of the conditionals we defined, task a should be executed and task b should be skipped; and for task 2 we chose message-b, so it prints b, which means task a2 should be skipped and task b2 should be executed.

Let's save this, copy the path, open the command line, cd to the directory, and run argo -n argo submit workflow-conditional.yaml. Let's take a look at the Argo UI: here is workflow-conditional, and we can see task 1 and task 2 in parallel. As we expected: task 1 output an a, so task a should be called, and we can see "task A was executed"; task 2 output a b, so task b2 with "task B was executed" is called, as expected. And here we can see the other tasks, task b and task a2: they were skipped, because their conditions were not fulfilled.

40. Workflow functionalities - Depends:

Hello. In this lesson we are going to explore the depends logic of Argo Workflows. Until now we used the keyword dependencies to define our dependencies inside our workflows. With the keyword depends we have another possibility, and the advantage of the depends logic is that we can even make the execution of one task dependent on whether another task executed successfully, failed, or was skipped. I opened the workflow definition from our last lesson about conditionals. First, let's rename it — by the way, this other file I can close — to workflow-depends. The arguments section with our parameters, and the template with our input parameters, we keep as they are, but we delete everything from task 2 onwards, task 2 included.

As we remember, we defined our dependencies with the subsection dependencies; now we have the possibility to use the depends logic instead. So we replace dependencies with depends, and then we can say task-1 dot whatever condition should be fulfilled — which status of task 1 has to be reached for task a to be executed. For task a, for example, we want task 1 to have succeeded, and the same for the depends of task b: task b should also only be executed if task 1 executed successfully.

Now let's define some other tasks. We define a template task-c that prints "task C was executed", and also a template task-d that prints "task D was executed". And in our DAG template we add them; let's just write it like this: a task named c, calling the template task-c, with the depends condition that task c should only be executed if task a succeeded.
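For comparison, in the previous lesson the same kind of branching was expressed with when conditions, roughly like this (a sketch; I am assuming the decision template simply prints a or b):

    - name: task-a
      dependencies: [task-1]
      template: task-a
      # only run if the decision task printed "a"
      when: "{{tasks.task-1.outputs.result}} == a"
    - name: task-b
      dependencies: [task-1]
      template: task-b
      when: "{{tasks.task-1.outputs.result}} == b"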
Then we have a task d, calling the template task-d, which should only be executed if task a was skipped. And now let's add two more tasks: one is task d2, also calling the template task-d, but with the dependency that task b succeeded; and the last one, task c2, calling the template task-c, with depends task b skipped.

Let's save this, close it, copy the path, open the command line, cd inside our directory — there we have the depends YAML — and run argo -n argo submit workflow-depends.yaml. Let's take a look at the Argo UI: here we can see our workflow. After task 1, task a runs and task b is skipped, because task 1 printed an a, so only task a should be executed. And we said that task c2 should only be executed if task b was skipped, and task c only if task a succeeded — and that is exactly what we see. So in this way we can build a more advanced dependency logic.

41. Workflow functionalities - Depends Theory:

Hello. Now I want to give a quick overview of the depends logic in Argo Workflows. A task can have different task results after its execution, and using the depends logic we can make use of these results. A task can have the result Succeeded, Failed, Errored, Skipped, or Daemoned. The depends logic provides three Boolean operators: and, or, and not (the negator). When we write depends with only the task name, we get the same result as depends with "task.Succeeded || task.Skipped || task.Daemoned". When we use loops, we additionally have the possibility to use task.AnySucceeded or task.AllFailed: AnySucceeded means that if any of the tasks inside the loop succeeded, the dependent task should be executed, and AllFailed means that all tasks inside the loop failed. And we have full compatibility with the dependencies logic: as we can see here, dependencies with tasks A, B and C is the same as depends "A && B && C". However, we have to remember that dependencies and depends cannot be used inside the same task group. That's it already about the depends logic in Argo Workflows, so see you soon.

42. Workflow functionalities - RetryStrategy:

Hello. In this lesson we are going to explore how we can apply a retry strategy in case a task fails and we want to retry this task automatically. I opened the workflow definition from the last lesson about the depends logic, and first I rename the workflow — and not only rename it: I will not use name but generateName, because we want to submit the workflow several times. I call it retry-strategy- with a dash at the end. Then I remove what I don't need: the parameter message-b, and all the tasks b, c and d, which we can remove here as well. What we keep is the task-decision; and we have our task a, which will be executed once task 1 succeeded and printed a — that is the when condition under which task a executes.

Now we take a look at our task-a template, and we deliberately break it: I remove the double quotes around the printed string, and like this the task will fail. Let's assume that inside task a we want, for example, to connect to a database or a server, and the connection just failed, so we want it to be retried — we want the task to be triggered again automatically, maybe twice, three times, four times. I am just simulating this with a syntax error.
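Before we continue, a quick recap of the depends syntax from the previous two lessons (a sketch):

    - name: task-c
      template: task-c
      depends: "task-a.Succeeded"
    - name: task-d
      template: task-d
      depends: "task-a.Skipped"
    # writing only the task name, depends: "task-a", is equivalent to
    # depends: "task-a.Succeeded || task-a.Skipped || task-a.Daemoned",
    # and dependencies: [A, B, C] is equivalent to depends: "A && B && C"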
So this will make task a fail, and now we have the possibility to apply a retry strategy with the subsection retryStrategy. First, we can specify the maximum number of retries with limit: to start with, I say that once this task failed, I want at most two retries. Then we set retryPolicy to Always, so it should always be retried. Then we can define the backoff. There we first set the duration: how long to wait after the first failure before the first retry, with seconds as the default unit. So with duration 1, the first retry starts one second after task a fails (with 10 it would be ten seconds). Then we specify the factor: the multiplier applied to the wait time before each further retry. So the first retry comes after one second; once the first retry fails as well, the next retry comes after two times one second, so two seconds; and if we set the limit to three retries, the wait after the second failed retry would be four seconds — each time the previous wait is multiplied by this factor, starting from the duration we defined. And then we have maxDuration, which I set to one minute: this is the maximum total time this task may spend on retries. If we defined, say, ten retries, then once we reach this maximum duration of one minute there won't be any more retries.

For the first submit let's start with a limit of two retries, a maxDuration of one minute, a duration of one and a factor of two. We save it, close it, copy the path to our directory, open the command line, cd into the directory, and submit the workflow with argo -n argo submit workflow-retry-strategy.yaml. We can see the workflow was created with this name plus five random alphanumeric characters. Let's go to the UI: task 1 executed, and here is task a — the first attempt failed, we can see the duration counting, and there is the first retry. This failed as well, and now comes the last retry; we specified a maximum of two retries. The total duration of this task was about 30 seconds. In between, it first waited one second, then two seconds; if it continued, it would wait four seconds for the next retry, and so on.

Now let's allow three retries and submit again. Here is the new workflow; let's just wait. Now task a executes for the first time and fails; we can check the logs and see the syntax error, because we removed the double quotes. Then comes the first retry, then a second, and then the third — there it is, and this should be the last retry. We can see it counting: 48 seconds in total.

Now let's put the limit to 10, so at most ten retries. This means the task could actually be retried ten times, but since we defined a maximum duration of one minute, the retries won't take longer than one minute. Let's submit this new definition and see how many retries we get. We wait a bit: here is the first attempt of task a, then the first retry, the second retry, and — 36 seconds, it's counting — the third retry.
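While we wait for the backoff, the retryStrategy block we are testing looks roughly like this (a sketch; the numbers are the ones used in the lecture, and the broken print statement stands in for the removed quotes):

    - name: task-a
      retryStrategy:
        limit: 10            # maximum number of retries
        retryPolicy: Always  # retry regardless of why the task failed
        backoff:
          duration: "1"      # wait 1 second before the first retry
          factor: "2"        # double the wait before every further retry
          maxDuration: "1m"  # stop retrying once a minute has passed
      script:
        image: python:3.8-slim
        command: [python]
        source: |
          # deliberately broken so that the task fails on every attempt
          print(broken)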
It now takes a bit of time, because of the backoff, until the fourth retry starts, and most probably there won't be a fifth retry, because the maximum duration is almost exceeded. Yes: after 58 seconds the workflow stopped, because the fourth retry of task a failed as well and there was no time left for a fifth retry — unless we change the maxDuration. So that's it about the retry strategy.

43. Workflow functionalities - Recursion:

Hello. In this lesson we are going to explore how we can define a workflow with recursion, and to do so we are going to simulate rolling a dice. This time we start almost from scratch with the workflow definition. In the metadata we use generateName and call it workflow-recursion-. Then we have to set the entrypoint — let's wait with that until we have some templates. In the templates, I first want a DAG template containing tasks, and our entrypoint will be this dag-template. Our DAG template consists of the tasks roll-dice, six and not-six: we are going to simulate rolling a dice until we get a six. So first we have the task roll-dice; then we execute a task in case we got a six; and if we did not get a six, we should re-roll the dice until we get a six — that is the recursion.

Now let's define the templates. We have the template task-roll-dice: a script template with the image python:3.8-slim, using the command python, and the source after the pipe operator. I import the package random and define a variable number = random.randint(1, 6), which creates a random integer between 1 and 6, and we print this number. Now we can set this template on the task roll-dice. Next we need a task six, in case we get a six: we define the template task-six, which we can almost copy, and it is going to print "hooray a six". In our task six we use the template task-six; we only want it to be executed once roll-dice succeeded, so we can use depends: roll-dice.Succeeded; and we only want it to run if we actually got a six, so we add a condition with when, and in double quotes, with curly brackets, tasks.roll-dice.outputs.result — we check what was printed there — must equal 6. So task six will only be executed once roll-dice succeeded and we got a six.

Now we define the task not-six: it should also only be executed once roll-dice succeeded, and the when condition we can copy, except that the output of roll-dice must not be equal to six. And what do we want to do then? We just call our DAG template again. So once we start the workflow at the entrypoint, roll-dice is executed; if we got, say, a three, then the task not-six is executed, and it calls the dag-template, so we go back to the beginning and roll the dice again; and once we get a six, the task six executes and prints "hooray a six".

Now we save it, close it, take our path, open the command line, cd to our directory, and run argo -n argo submit workflow-recursion.yaml — failed to parse the workflow: error converting YAML to JSON in line 16. So what is going wrong? Probably it is a missing space. Let's fix it and try again — yes, that's it. Let's look at the UI, and here we can see the dice rolling.
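While the dice keep rolling, the recursive DAG we just wrote looks roughly like this (a sketch reconstructed from the narration):

    templates:
      - name: dag-template
        dag:
          tasks:
            - name: roll-dice
              template: task-roll-dice
            - name: six
              template: task-six
              depends: "roll-dice.Succeeded"
              when: "{{tasks.roll-dice.outputs.result}} == 6"
            - name: not-six
              # recursion: call the DAG template itself until a six shows up
              template: dag-template
              depends: "roll-dice.Succeeded"
              when: "{{tasks.roll-dice.outputs.result}} != 6"
      - name: task-roll-dice
        script:
          image: python:3.8-slim
          command: [python]
          source: |
            import random
            print(random.randint(1, 6))
      - name: task-six
        script:
          image: python:3.8-slim
          command: [python]
          source: |
            print("hooray a six")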
Let's take a look at the UI: the dice is rolling. No six — rolling again. Let's see how lucky we are today: no six the second time, no six the third time, so we roll again. We can even see which number came up: a three, then a one, then a five, and another five. Today we are really not lucky — no six at all. This is the recursion running until we get a six. A four, and yet another four... this is getting boring, so I'll just pause the recording until I get the six. Finally, after about 4.5 minutes, there it is — definitely not my lucky day. In the workflow you can see how long the chain of roll-dice nodes became, and in the logs of the final task we can check it: "Hooray, a six". So that's how to do recursion — a really nice example of it.

44. Workflow functionalities - Exercise2 Task description: Welcome to the second exercise in this course. Most of the workflow functionalities you have learned in this chapter must be used. Here is a graphical representation of the workflow you'll be creating. The goal of the workflow is to process a CSV file of emails in such a way that an email is sent whenever a certain keyword, such as "bomb", is encountered. The emails.csv file can be found in the course materials, so you can either make it available at a URL in your own cloud, or use my source URL to access emails.csv. A word to detect should be used as an argument for the workflow, so that you can specify which word is to be detected. Furthermore, the source URL and the notification email should be stored securely and used as Kubernetes secrets.

The first task of the workflow, get-source, should read the emails.csv file from S3 and output it as an artifact on MinIO; the source URL, which is stored under the key source_url in a Kubernetes secret, is to be accessed as an environment variable. The read-emails task should then take the emails.csv file from MinIO and output its content in list format, with each entry as a JSON element, to standard out. The task loop then takes this output and loops over the emails so that all emails are processed in parallel by the following tasks. With a small number of emails this of course doesn't make much sense, but with millions of emails broken down into chunks it would make perfect sense; for the sake of simplicity we do it with just a few emails. Within the loop, each email should then be checked by the detect task to determine whether it contains the word from the word-detect argument; if so, "detected" should be written to standard out, and if not, simply "ok". Finally, a task email-detected should be executed, but only if the word to detect is actually contained in the email. For the sake of simplicity we skip the part of really sending an email and simply print to standard out that we have sent one. The email address should be read from a volume mounted from the Kubernetes secret key notification_email, and finally the corresponding email plus the sender should also be stored as an artifact on MinIO. If this description of the task is enough for you, you can end the video here and try to implement it immediately.
However, if you would like some hints on how I will implement this task, please stay tuned. For the get-source task I use the ubuntu:20.04 image with a few shell commands: I update the package manager with apt-get, install curl, create a folder source_file, change into it and finally load the emails.csv file into this folder with curl. The environment variable SOURCE_URL is created from the Kubernetes secret, and the downloaded emails.csv is output as an artifact. For the read-emails task I use python:3.8-slim, where I first import the json and sys packages, then read the emails.csv file that was stored under /tmp/text, and finally output the content as a list with json.dump. The task loop simply creates a loop, whereby the template that is called is itself a DAG template. For the detect task I use the image python:3.8-slim and check with Python whether the word to detect is contained in the email content, printing either "detected" or "ok" accordingly; as you can see, both word-detect and text (the content of the email) are input parameters of this task. Finally, we have email-detected, where I again use the image python:3.8-slim, first read the email address from the mounted volume of the secret key notification_email, output the content with print and write it into a newly created txt file, which is ultimately saved to MinIO as an artifact. I think this should be enough to solve the task successfully, so I wish you good luck.

45. Workflow functionalities - Exercise2 Solution: Hello and welcome to the solution of the second course exercise. Here I opened the emails.csv file that we want to process. We have two columns, text and sender. The text column just contains the emails we want to process, and in the sender column I simply wrote a, b, c, d and so on — in reality there would of course be real email addresses. This file I now want to put on AWS S3, so I open the AWS management console of my account and go to S3. First I create a bucket for my source files; I choose the AWS region Frankfurt, eu-central-1, but you can choose whatever you want. I don't want to block all public access — I want my source to be publicly accessible — so I have to acknowledge that and then create the bucket. Now I find the new bucket and upload the CSV file; once it's uploaded I select it and, under Actions, choose "Make public". When I open the file there is an object URL that we can use to curl the file from inside a pod. So either you use your own AWS account (or whatever cloud you want) and upload the emails.csv file there, or you just use this object URL.

Now I want to create a Kubernetes secret using this object URL. I go to my directory, copy the path, open a command line window and cd into the path. Then I echo the object URL into a file source_url.txt. When I open this file there is actually a trailing newline, which we have to remove manually and save again — this is really important, otherwise the URL is not the URL we want to use.
We also want another key in the secret, the notification email. I just choose email@gmail.com and echo it into notification_email.txt; I open this file as well, remove the trailing newline that was added, save and close it. Now we create the secret: kubectl -n argo create secret generic exercise2-secret — exercise2-secret is just the name I chose — with --from-file to create the key source_url from the content of source_url.txt, and another key notification_email from notification_email.txt. Let's check that it was created successfully with kubectl -n argo get secrets: there is our exercise2-secret. We can also run kubectl -n argo describe secret exercise2-secret and see the name, the namespace and the two keys. These keys of the secret are what we want to use later inside our workflow.

Now let's create the workflow definition. I open a minimal workflow skeleton: the kind is Workflow, the name I set to workflow-exercise2, and in the spec we have an entrypoint (which I'll fill in in a moment) and the templates. The first template is a DAG, which I call dag-template — this is also our entrypoint. In its dag tasks the first task is get-source, and the template it uses I call get-source-template. So let's define that template. It is a script template with the image ubuntu:20.04 and a shell command, with the source after the pipe operator: first I update the package manager with apt-get update and install curl with -y, because I want to curl emails.csv using the object URL. After that I make a new directory source_file, cd into it, curl my source URL — via an environment variable — into emails.csv, and finally run an ls just to check. One remark: doing apt-get update and installing curl inside the workflow works, but it makes each run take a bit longer; we could also build our own image with Ubuntu as base image and use it here without these commands. I just wanted to show that this is possible too, even if it's not so pretty.

Now, the environment variable SOURCE_URL: we have to tell Argo to create it from our secret. This we can do with an env section: the name is SOURCE_URL — exactly the name we use inside the script — and valueFrom is a secretKeyRef with the name exercise2-secret and the key source_url. With this it should be possible to use the URL, curl it, and get our emails.csv. We also want an outputs artifacts section: the name I just call emails, and the path is the source_file directory we created plus emails.csv. This way emails.csv is output as the artifact emails to MinIO. I save the file.
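Putting those pieces together, the get-source template might look roughly like this — a sketch assuming the names used above (exercise2-secret, source_url, the source_file folder); adjust paths and names to your own setup:

  - name: get-source-template
    script:
      image: ubuntu:20.04
      command: [bash]
      env:
      - name: SOURCE_URL
        valueFrom:
          secretKeyRef:
            name: exercise2-secret
            key: source_url
      source: |
        apt-get update
        apt-get install -y curl
        # download the CSV next to the container's working directory
        mkdir source_file
        cd source_file
        curl "$SOURCE_URL" -o emails.csv
        ls
    outputs:
      artifacts:
      - name: emails
        path: /source_file/emails.csv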
Now I can actually do a dry run first. There is the possibility to do a dry run without executing the real workflow; it checks the syntax, so it's a good way to validate your workflow definition for mistakes. We do it with argo -n argo submit workflow-exercise2.yaml — the command we already know — plus the options --dry-run -o yaml. It outputs our YAML, which means everything should be okay. So now let's run the workflow for real and look at the UI. Obviously something is wrong: no logs are coming, even after reloading. Let's check with kubectl -n argo describe workflow workflow-exercise2 — here we can see the error: the workflow node for get-source has an invalid pod, because the valueFrom secretKeyRef name is an invalid value; I accidentally wrote more than just exercise2-secret there and didn't notice. It should only be exercise2-secret. Let's fix and save it, delete the failed workflow, and submit again. Now let's wait until get-source is executed. It's running; in the logs we can see it updating Ubuntu, downloading curl and then downloading our emails.csv — this looks good. Once it's finished we see the output artifacts: here is our emails file, and we can check on MinIO as well, where we find workflow-exercise2 with the emails artifact.

Now let's add the next task, read-emails. In our DAG I add read-emails, and it depends on get-source.Succeeded. I add an arguments section with artifacts, because I want to use the artifact we output above: the name I just choose as text (it can be whatever you like), and the from value — in curly brackets — is tasks.get-source.outputs.artifacts.emails, exactly the name we chose for the output artifact. The template I am going to define is read-emails-template, so I can just copy that name. Now we create this template. It needs an inputs section with artifacts: the name is text, the same name we used in the arguments of the read-emails task, and here I determine the path where the artifact should be placed inside the container — I just choose /tmp/text. Now I can define the script: I use the image python:3.8-slim with the python command and the source after the pipe operator. First I import json, then sys. With open I open our artifact at /tmp/text — because that's where the emails.csv input artifact is placed — in read mode as f, and I read it with f.readlines(). For sure there are other ways to read the emails.csv, using pandas or a CSV reader, but I just use open, and for me this is fine. So I read it line by line, and next I want to clean the lines a bit and also split the columns.
For that I use a list comprehension: for x in lines, I strip each line (removing whitespace at the beginning and end) and then split it at the semicolon, because that is our separator. Finally I output the result with json.dump to sys.stdout, using a list comprehension again: I iterate over lines, but only from the second line onwards, because the first line contains the header and I don't want it as data. For each line i I build a dictionary in curly brackets: the first key is the header of the first column, lines[0][0] — which is text — and the value is i[0], the content of the email, which I strip of the double quotes that surround each email, because I don't want them. Then comes the second column: the key is lines[0][1], the header of the second column, and the value is i[1]. This should be good.

Now let's try to submit this workflow. First a dry run — looks good. We have to delete the old workflow first, because we didn't rename it, and then we can submit it again. Let's watch the logs: it's updating, downloading our emails.csv, and now we should see read-emails. And here we got an error: image pull back-off, back-off pulling image "pyton" — just a typo, unfortunately. I delete the workflow again, fix it to python, save and submit once more. Again we have to wait; emails.csv is downloaded, and now read-emails is pending. Let's check the logs — there was a problem, probably with my minikube cluster, but now it succeeded. In the logs we can see exactly what we expect: our output in JSON format, with the text key, the email content, and then the sender — here a d, there a b. So this is what we expected. We can delete this workflow again and move on.

Now let's define the loop task as the third task of our dag-template. I name it loop, and it depends on read-emails.Succeeded. The template I want to create is loop-template, and in the arguments we use parameters: first text, whose value we skip for a moment, and sender, whose value we also skip. We want to loop over the lines of our emails, and this we can do with withParam, where we use the output of read-emails: tasks.read-emails.outputs.result in curly brackets. Now for the parameter values we can use double curly brackets with item.text for text and item.sender for sender — we can address them this way because that is exactly how we output the JSON.
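Spelled out, the read-emails template and the loop task might look roughly like this — a sketch; the /tmp/text path and the parameter names follow the narration above:

  - name: read-emails-template
    inputs:
      artifacts:
      - name: text
        path: /tmp/text
    script:
      image: python:3.8-slim
      command: [python]
      source: |
        import json
        import sys
        with open("/tmp/text", "r") as f:
            lines = f.readlines()
        # strip whitespace and split the two columns at the semicolon
        lines = [x.strip().split(";") for x in lines]
        # skip the header row and emit a JSON list of {"text": ..., "sender": ...}
        json.dump([{lines[0][0]: i[0].strip('"'), lines[0][1]: i[1]}
                   for i in lines[1:]], sys.stdout)

  # inside dag-template's tasks:
      - name: loop
        template: loop-template
        depends: read-emails.Succeeded
        arguments:
          parameters:
          - name: text
            value: "{{item.text}}"
          - name: sender
            value: "{{item.sender}}"
        withParam: "{{tasks.read-emails.outputs.result}}"

withParam expects a JSON list on standard out, which is exactly what json.dump produces, so each email becomes one parallel instance of loop-template.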
So now let's create the loop-template. It has an inputs section with the parameters text and sender, and it uses a DAG as well, because for each line of our emails.csv we want to trigger two tasks. In its dag tasks the first task is detect, with a template detect that we will define afterwards, and arguments with parameters: text with the value from inputs.parameters.text, and another parameter sender. Now we define the detect template. It uses inputs with parameters as well — I can just copy and paste them — and a script template with the image python:3.8-slim (this time spelled correctly), the command python and the source after the pipe operator. I want to print "detected" if "bomb" is in the text, so at first I hard-code the word we want to detect: if "bomb" is in inputs.parameters.text, then print "detected", else just print "ok". Let's save it, do a dry run — looks good — and submit it.

While we wait, let me explain what we will do next: once this workflow succeeds and behaves as expected, we don't want to hard-code the word "bomb", but pass it as an argument of the workflow, so that whatever word we pass in is the one being detected. So let's see: read-emails is done, the loop is running, some tasks already succeeded, and now it finished. In the logs we can see "ok" here and "detected" there — this looks good. Let's delete the workflow for the next run and continue.

To make the word an argument, first, in our detect template, we add a parameter word-detect, and then instead of "bomb" we use this word-detect input parameter. As the next step we have to include it in the loop-template as well, both as an input parameter and in the arguments of the detect task (the value is the same, we can copy and paste it). Then, since loop-template is called by our loop task, we have to add it there too: first as an input of the dag-template and then in the arguments of the loop task — always the same inputs.parameters.word-detect pattern. Finally we add an arguments section at the same level as the entrypoint, with the parameter word-detect and a default value; I choose "bomb". Now we should be able to submit the workflow with a different argument, because we introduced this arguments section and passed it all the way down to the detect template. Let's do a dry run first — looks good — and submit. It should execute exactly as before. Let's wait: emails.csv is read, the loop is detecting, and it succeeded. The logs are exactly the same: "detected" here, "ok" there. Now we can delete this workflow again.
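For reference, a minimal sketch of how the word ends up as a workflow argument and finally reaches the detect template; the intermediate hand-over through dag-template and loop-template follows the same inputs/arguments pattern described above:

spec:
  entrypoint: dag-template
  arguments:
    parameters:
    - name: word-detect
      value: bomb        # default, can be overridden at submit time, e.g. argo submit ... -p word-detect=attack
  templates:
  # ...
  - name: detect
    inputs:
      parameters:
      - name: word-detect
      - name: text
      - name: sender
    script:
      image: python:3.8-slim
      command: [python]
      source: |
        # the parameters are substituted into the script before it runs
        if "{{inputs.parameters.word-detect}}" in "{{inputs.parameters.text}}":
            print("detected")
        else:
            print("ok")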
Now let's come to the last step: the last task we have to add to our loop-template, with the name email-detected and the template email-detected-template, depending on detect.Succeeded. We only want it to be executed if the detect task printed "detected"; if not, we don't want to execute it at all. This we can do with when and a condition: in our case, tasks.detect.outputs.result should be equal to "detected". Then we add the arguments with parameters — not word-detect this time, only text and sender, as before.

Now we can define the email-detected-template. It has inputs with the parameters text and sender, then comes the script template, again with the image python:3.8-slim, the command python and the source after the pipe operator. First we need to know where to send the email — we will not really send one, we just print that we did, but we still need the email address. We said we want to use the key notification_email of the Kubernetes secret and mount it as a volume. So first we add that secret as a volume: on the same level as entrypoint and arguments we add a volumes section, with a name (I just choose the same name as the secret, exercise2-secret) and a secret with secretName exercise2-secret. Then we add volumeMounts on the same level as the source inside the email-detected-template: the name is exercise2-secret — the same name we gave the volume above — and the mountPath is just /secrets.

Now we can open this path inside the script: with open on /secrets/notification_email — we only need the notification email here as a mounted volume — in read mode as f, and I store the content in a variable notification_email. This we print out: "send notification email to" plus the notification_email variable we just created, then the sender from our input parameter, and the text from inputs.parameters.text — all to standard out. We also want to create a file with the content of the email and output it as an artifact to MinIO, so I open a file at /tmp/email_detected.txt in write mode, write the same sender and text lines — just adapting the quotes so each write fits in a single line — and finally close the file. It's not the most beautiful way of programming, but it's okay for now; it's just a script, and we are here to talk about workflows. To export the file we add the outputs section: outputs artifacts with the name email-detected (I just chose this name) and the path of the file. And that's it — hopefully it works. We remove the last workflow and do a dry run.
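Before chasing the error, here is a rough sketch of the pieces just described — the secret volume at spec level, the conditional task inside loop-template, and the email-detected-template itself; the /secrets mount path, the /tmp/email_detected.txt file and the key name notification_email are the assumptions made above:

spec:
  # ...
  volumes:
  - name: exercise2-secret
    secret:
      secretName: exercise2-secret
  templates:
  # inside loop-template's tasks:
      - name: email-detected
        template: email-detected-template
        depends: detect.Succeeded
        when: "{{tasks.detect.outputs.result}} == detected"
        arguments:
          parameters:
          - name: text
            value: "{{inputs.parameters.text}}"
          - name: sender
            value: "{{inputs.parameters.sender}}"

  - name: email-detected-template
    inputs:
      parameters:
      - name: text
      - name: sender
    script:
      image: python:3.8-slim
      command: [python]
      volumeMounts:
      - name: exercise2-secret
        mountPath: /secrets
      source: |
        # the secret key is available as a file in the mounted volume
        with open("/secrets/notification_email", "r") as f:
            notification_email = f.read()
        print("send notification email to " + notification_email)
        print("sender: {{inputs.parameters.sender}}")
        print("text: {{inputs.parameters.text}}")
        out = open("/tmp/email_detected.txt", "w")
        out.write("sender: {{inputs.parameters.sender}}\n")
        out.write("text: {{inputs.parameters.text}}\n")
        out.close()
    outputs:
      artifacts:
      - name: email-detected
        path: /tmp/email_detected.txt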
And there is a problem. What does it say? templates.loop-template.tasks.email-detected: failed to resolve tasks.detect.output.result. So let's look at the email-detected task in the loop-template: it should be outputs, not output. I save it and run the dry run again — still a problem, another typo in the same reference: outputs. Hopefully now it works. Yes, now it looks good. Let's submit and wait. get-source is updating and installing, then downloading emails.csv; read-emails reads the CSV; here come our loops, the detect tasks run, and some emails are sent. Where the logs show "ok", the email-detected task is skipped; only where the word "bomb" was detected is the email sent. In the logs of email-detected we can see "send notification email to email@gmail.com" — the content of our Kubernetes secret — then the sender and the text, and the file was also output to MinIO as email-detected. And that's basically everything about this task. It took quite a while, but we exercised several of the concepts and functionalities we learned in this chapter. Thank you for your attention and see you soon.

46. More Concepts - Resource overview: Hello. So far we have only dealt with the Kubernetes resource Workflow. However, there are other types of resources we can use in Argo Workflows, and here is a brief overview. We already know the Workflow: this is, so to speak, the basic building block of Argo Workflows. Workflows are executable sequences of process steps that are executed immediately when they are created; each workflow consists of one or usually several templates and has a unique identifier. Then there are WorkflowTemplates: these are definitions of workflows, and each submit generates a new workflow with a corresponding unique identifier that executes exactly what is defined in the workflow template. Workflow templates can also be called by other resources. The third resource type are CronWorkflows — quite simply workflows with a set schedule. And ClusterWorkflowTemplates are cluster-scoped workflow templates that can be accessed from all cluster namespaces, in contrast to workflow templates, which can only be accessed within the specified namespace. I think that's enough as an overview — let's get our hands dirty again.

47. More Concepts - Workflow Template: Hello. Now let's take a look at how to deal with workflow templates. Here I opened the workflow we defined using a DAG template; it consists of four dependent tasks, and it's easy to change it into a workflow template: we just have to change the kind of the resource from Workflow to WorkflowTemplate. I also rename it to workflow-template-dag, and that's it — now we have a workflow template. As always, we copy the path of our directory, open a command line window and cd there; here is our workflow template YAML. Now I create the workflow template with the Argo CLI: argo -n argo, then template, then the command create, and then the YAML file.
Before, when we dealt with plain workflows, we just used submit and the other commands directly; now that we deal with workflow templates, we have to put template in front of the command, so: argo -n argo template create workflow-template-dag.yaml. Here we can see the namespace argo and the name of the template that was created. We can also list the templates with argo -n argo template list — there is our workflow-template-dag — and check the workflows with argo -n argo list: no workflows found. Now we can submit this workflow template, and it creates a new workflow according to the definition: argo -n argo submit --from workflowtemplate/workflow-template-dag. We still submit a workflow — that's why we don't use template here — but with the --from argument we specify that it should be created from a workflow template. If we list our workflows now (argo -n argo list, not template list as I typed first), we see a workflow based on workflow-template-dag, with a unique suffix of five alphanumeric characters appended to the name. It's running; in the Argo UI we can see it just finished, and it is the DAG as we know it. On the left side of the UI we have Workflow Templates: there we see our template, we can look at it, even edit it, and also submit it. So we can submit it through the Argo CLI or directly through the Argo server UI; submitting it there creates yet another new workflow. Why use workflow templates? Because we can keep several templates and submit whichever we want, whenever we want, and it will always create a new instance — a new workflow with a unique identifier.

Now let's change the template; say we just edit a message, "task executed changed" — whatever we write here doesn't matter. We want to update this workflow template through the Argo CLI, and for that we first have to delete the existing one: argo -n argo template delete workflow-template-dag. If we do template list, there is no template anymore. Then argo -n argo template create with our YAML again, and argo -n argo template list shows the workflow template once more. If you don't want to delete and re-create it, you can update it through kubectl instead. Let's change the file again, save it, and run kubectl -n argo apply -f workflow-template-dag.yaml. There is a warning, but it updated the template anyway. In the Argo UI, under Workflow Templates, we can scroll down and see the updated version. That's one way to check it; we can also use the Argo CLI with argo -n argo template get workflow-template-dag -o yaml and see the workflow definition there.
So there are always several ways to work with workflow templates: the Argo CLI, kubectl, or the Argo server UI via API calls.

48. More Concepts - Cron Workflow: Hello. Let's take the workflow template we defined in the last lesson and change it into a cron workflow. The first thing to do is change the kind of the resource from WorkflowTemplate to CronWorkflow, and I rename it to cron-workflow-dag. Then we come to the spec section, where the first thing we have to define is the schedule, which is, as the name says, a cron schedule. A cron expression has five positions: the first is the minute, the second the hour, the third the day of the month, the fourth the month, and the fifth the day of the week. Let's imagine we want to schedule it every day at 2 AM: then we write a 0, a 2, and keep the three stars — a star means "every". So every day at 2 AM this cron workflow will be submitted or scheduled.

Another field we can set is concurrencyPolicy, which can be Allow, Forbid or Replace. It defines how the scheduler should deal with concurrent workflows: imagine the schedule is every minute but the whole workflow takes 1.5 minutes — how should it handle the case where the last scheduled workflow is still running when the next one is due? For our case it doesn't really matter, because the workflow runs once a day, but we don't want concurrent workflows anyway, since we have one source — one database or one file — that should not be touched by several workflows simultaneously. Another field is startingDeadlineSeconds. Again imagine a schedule of every minute: at 2:01 the scheduler should submit a new workflow, but five seconds before that it crashes, and it only comes back about five seconds after the scheduled time — say 65 seconds after the last successful submission. If we set startingDeadlineSeconds to something larger than that gap, for example 70, then whenever there is a problem with the scheduler or the cron workflow controller, the missed workflow will still be submitted once the controller comes back, as long as that happens within the deadline. For our case it's not that important because we run only once a day, and we could even set the deadline to a day or more, but I just set it to 70 to have it set.

Now we have to define the actual workflow, and for that there is a new section called workflowSpec. Everything we defined before has to be indented under it — we can select it all with Alt+Shift and add two spaces. That's it; I save the file.
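A minimal sketch of the resulting CronWorkflow spec — the DAG itself is unchanged and simply moves under workflowSpec:

apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: cron-workflow-dag
spec:
  schedule: "0 2 * * *"        # every day at 2 AM
  concurrencyPolicy: Forbid    # don't allow concurrent instances
  startingDeadlineSeconds: 70
  workflowSpec:
    entrypoint: dag-template
    templates:
    - name: dag-template
      # ... the DAG and the other templates from the workflow template,
      #     indented two extra spaces under workflowSpec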
Now let's go to the directory: I copy the path, open the command line and cd there; here is our cron-workflow-dag.yaml. We create the cron workflow with the Argo CLI: argo -n argo cron create cron-workflow-dag.yaml. We can see the definition, the schedule and so on. As with templates, we can list our cron workflows with argo -n argo cron list. Now let's also submit this cron workflow manually, because the scheduler will only submit a new workflow at 2 AM: argo -n argo submit --from cronwf/cron-workflow-dag — that's our name. There is our new workflow; in the Argo UI we can see cron-workflow-dag running, and under Cron Workflows on the left we can see all our cron workflows. We can also submit from the UI, which again creates a new instance of the cron workflow. In the cron workflow view we additionally have the option to suspend it: if we don't want it to run according to the schedule anymore, we can suspend it, and it will not execute anything at 2 AM; we can resume it a month later or whenever we want, and it will run again. In the details you can see the suspended field: it was false, after suspending it's true, and after resuming it's false again. We can also suspend and resume through the Argo CLI: argo -n argo cron suspend cron-workflow-dag (the field is updated to true) and argo -n argo cron resume cron-workflow-dag to resume it, and it's false again. And if we want to update the cron workflow — whatever we change, I just add a space and save — it's the same as with workflow templates: we can update it through the Argo CLI by deleting and re-creating, or just use kubectl, which is what I do now: kubectl -n argo apply -f cron-workflow-dag.yaml. There is the warning again, and in the UI we can see the updated version with the new space. That's it about cron workflows.

49. More Concepts - Cluster Workflow Template: Hello. Now let's create a cluster workflow template. I opened the workflow template we defined; we have to change the kind of the resource to ClusterWorkflowTemplate, and the name I change to cluster-workflow-template-dag. That's it — we have a cluster workflow template definition. Let's create it: I go to my directory, copy the path, open the command line and cd there. Here we have the cluster workflow template definition we just wrote, and I also still have the workflow template, because I want to show you the difference. So let's do argo cluster-template create cluster-workflow-template-dag.yaml, and let's also create the workflow template with argo -n argo template create. Now we can list with argo -n argo cluster-template list: here is our cluster workflow template, listed while pointing at the namespace argo.
Now let's skip the namespace argo and just do argo cluster-template list, so we are listing cluster templates while pointing at the default namespace — and our cluster workflow template is still there. In comparison, let's look at the templates: argo -n argo template list shows our workflow template inside the namespace argo, but a plain argo template list finds no template. This is the difference: a cluster workflow template is accessible from all namespaces, while with a workflow template you only have access in the namespace you specified — when we created the workflow template we said it should live in the namespace argo. Anyway, let's go to the Argo UI: there is a Cluster Workflow Templates section with our cluster workflow template, and it behaves the same as workflow templates — we can just submit it and it does exactly the same; the only difference is the namespace scope. That's it about cluster workflow templates.

50. More Concepts - Reference to Workflow Templates: Hello. In this lesson I want to show you how to reference workflow templates. I opened the workflow-template-dag we defined before; this template I want to create in the cluster, and then I want to create a cron workflow that references it. First let's define the cron workflow: the kind of the resource is CronWorkflow, the name I just call cron-workflow, and then the most important part, the spec. First the schedule: I want it to run every minute. The concurrencyPolicy I set to Forbid, because I only want one workflow at a time. Then startingDeadlineSeconds I set to 75, which means that even if the scheduler crashes or has a problem, the missed workflow will still be scheduled as long as the scheduler comes back within 75 seconds. And in the workflowSpec, because we want to reference our workflow template, the only thing we have to define is workflowTemplateRef with the name of the workflow template we want to reference, workflow-template-dag. That's it: whenever this cron workflow is submitted, it takes the definition from workflow-template-dag, combines it in, and runs it as one workflow.

Now let's deploy everything. I go to my folder, open the command line and cd there. First we create the workflow template with argo -n argo template create workflow-template-dag.yaml — before we can reference a workflow template, it of course has to exist in the cluster. Then argo -n argo cron create cron-workflow.yaml. There is a problem: startingDeadlineSeconds is an unknown field, so I guess there's just a typo — yes, "deadline" was misspelled. Fixed, and now it is created. In the Argo UI we can see the cron workflow and the workflow template. Since the schedule is every minute, let's just wait until a new instance is submitted — we could also submit it manually, but this time let's wait until it is really scheduled.
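While we wait, here is a minimal sketch of the referencing CronWorkflow as just described:

apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: cron-workflow
spec:
  schedule: "* * * * *"          # every minute
  concurrencyPolicy: Forbid      # only one workflow at a time
  startingDeadlineSeconds: 75
  workflowSpec:
    workflowTemplateRef:
      name: workflow-template-dag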
Yes — here you can see a new instance called cron-workflow, with several characters appended as a unique identifier, and it simply executes the definition of the workflow template, as if it had been defined directly inside our cron workflow. So this is the way to reference workflow templates.

51. More Concepts - Creating a master workflow: Hello. In this lesson I want to show you how to create a master workflow. This master workflow should create and submit two workflows using two different workflow templates: the workflow-template-dag we already defined, and the workflow-loop we defined in the lesson about loops, which I simply change into a workflow template and rename to workflow-template-loop. We will create these workflow templates a bit later and then use them in our master workflow. The master workflow will be a cron workflow, so let's start defining it: the kind is CronWorkflow, the name I call cron-workflow-master, and in the spec I define the schedule — it should be scheduled every hour, so a 0 followed by four stars. The concurrencyPolicy I set to Replace, although it shouldn't matter because there is an hour between scheduled runs and our workflow will not take an hour to execute, and startingDeadlineSeconds I just set to 0. Then comes the workflowSpec, where we define the entrypoint and the templates: one dag-master and one trigger-workflow template, with dag-master as the entrypoint.

Let's define the trigger-workflow template first. It needs an inputs section with a parameter named workflow-template, because we want to pass in the name of the workflow template to be instantiated — workflow-template-loop or workflow-template-dag. As the template type we use a resource template, because we want to create workflows: the action is create, and the manifest follows the pipe operator. There we put the apiVersion — we already know this from the lesson about the resource template — the kind Workflow, and in the metadata a generateName built from our input parameter: inputs.parameters.workflow-template plus a dash, so whatever template name we pass in as argument is used to generate the workflow name. The spec is, as we know from the last lesson, a workflowTemplateRef referencing exactly that template name. Two more important things: whenever we create this new workflow, the master workflow should know about its status — whether it succeeded, failed or errored — because in our DAG we want to specify dependencies. We do that with successCondition: status.phase == Succeeded and failureCondition: status.phase in (Failed, Error); this way we know when it succeeded, failed or errored.

Now let's define the dag-master. It has dag tasks, and the first task is task-dag. The template it uses is of course trigger-workflow, not the workflow template itself — that will be passed as an argument.
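For reference, the trigger-workflow resource template described above might look roughly like this:

  - name: trigger-workflow
    inputs:
      parameters:
      - name: workflow-template
    resource:
      action: create
      # let the master know how the created workflow ended
      successCondition: status.phase == Succeeded
      failureCondition: status.phase in (Failed, Error)
      manifest: |
        apiVersion: argoproj.io/v1alpha1
        kind: Workflow
        metadata:
          generateName: {{inputs.parameters.workflow-template}}-
        spec:
          workflowTemplateRef:
            name: {{inputs.parameters.workflow-template}}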
So task-dag calls the trigger-workflow template, and in its arguments parameters we set our input parameter workflow-template, with the value being the workflow template name, workflow-template-dag, which we can just copy and paste. The next task is basically the same: task-loop, which depends on task-dag.Succeeded. We can use the depends logic here because we defined the success condition above (and since we also defined the failure condition, we could use other conditions too — we won't right now, but it's good to know for the future). By the way, I had messed up the indentation, so I added the missing spaces everywhere. The template is again trigger-workflow, with the arguments parameter workflow-template and the value workflow-template-loop. And that's it. Now we can go to our directory, open the command line and cd there; we have all our YAML files. First I create the workflow templates with argo -n argo template create, once for the DAG and once for the loop, and finally argo -n argo cron create cron-workflow-master.yaml — I forgot the create statement at first. We can check it in the UI: there is our cron workflow and our two workflow templates. Now let's trigger the master workflow manually with submit. Here is our cron-workflow-master: task-dag created a new workflow using workflow-template-dag, which executes the DAG as we know it, and once it finishes it signals the master that task-dag succeeded. Then task-loop starts and creates a new workflow using workflow-template-loop; it runs the loop, and once it's finished it tells the master, which was just waiting for the signal, and finally the master finishes as well. So here we created a master workflow that created new workflows from our workflow templates. This is a nice pattern: create a set of workflow templates, whatever you need, and then orchestrate — coordinate — them from a master according to a schedule. Note the difference to the last lesson, where we referenced a workflow template: here we really created new, separate workflows, whereas in the last lesson the template was simply pulled into one workflow.

52. More Concepts - AWS S3 as artifact repo: Hello. In this lesson I will show you how to use AWS S3 as artifact repository. I open my AWS management console and create a bucket on S3: I go to S3, hit Create bucket and call it argo-course-bucket. Choose whatever region you like and just remember it — I choose Frankfurt, eu-central-1 — and create the bucket. The next step is to create secrets for our AWS credentials. I go to my folder, copy the path and cd to it in a command line; here we also have the workflow-artifact-s3.yaml that I will talk about later.
First we create the secrets for the AWS credentials. I do it with echo: I write the access key ID — here I just type three x's because I don't want to show my real credentials; you would type yours — into aws_access_key_id.txt, and the secret access key into aws_secret_access_key.txt. Here are the two newly created text files; your real credentials should be in there, so I delete the placeholders and put in my real ones. Now we create the secret in the namespace argo: kubectl -n argo create secret generic aws-credentials-test, with --from-file to create one key for the access key ID from aws_access_key_id.txt and one key for the secret access key from aws_secret_access_key.txt. Let's check with kubectl -n argo get secrets — there is aws-credentials-test — and kubectl -n argo describe secret aws-credentials-test shows the name, the namespace and our two keys (the values are not shown). These are the two keys we will reference: the access key ID and the secret access key.

Now to the workflow definition. I took the workflow-artifact we defined in the lesson about artifacts and renamed it workflow-artifact-s3; basically everything stays as it is. The only thing we change is in task-3, which calls the template task-output-artifact: it writes a file to /tmp/output-params.txt and saves this file as an output artifact. If we don't specify which artifact repository to use, Argo uses the default one configured in the workflow-controller-configmap; but now we want it saved to S3, so we only have to add an s3 section to the artifact. There we define the endpoint, s3.amazonaws.com, the bucket name argo-course-bucket, and the key under which it should be saved — I just call it output-params, and since Argo Workflows uploads artifacts as tar.gz files, I give it a .tgz ending so we don't get confused. Then we define the accessKeySecret to be used — the name of the secret is aws-credentials-test and the key is the one holding the access key ID — and the secretKeySecret with the same secret name and the secret access key as key, and finally the region, in my case eu-central-1. So that's the whole S3 configuration: we need credentials, and we provide them via accessKeySecret and secretKeySecret, pointing at our aws-credentials-test secret and its two keys.
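Put together, the changed output artifact of task-3 might look roughly like this; note that the secret key names (accessKeyId and secretAccessKey here) are assumptions — use whatever key names you chose when creating aws-credentials-test:

    outputs:
      artifacts:
      - name: output-params
        path: /tmp/output-params.txt
        s3:
          endpoint: s3.amazonaws.com
          bucket: argo-course-bucket
          region: eu-central-1
          key: output-params.tgz          # Argo uploads the artifact as a gzipped tarball
          accessKeySecret:
            name: aws-credentials-test
            key: accessKeyId              # assumed key name in the secret
          secretKeySecret:
            name: aws-credentials-test
            key: secretAccessKey          # assumed key name in the secret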
Now let's submit the workflow: argo -n argo submit workflow-artifact-s3.yaml. It's running, and in task-3 we see the output artifacts — the main logs and the artifact-out, which is our output-params.txt file. Let's check whether it was saved in our S3 bucket: yes, there is our output-params file as a tar.gz. I download it, extract it (inside there is a tar file, which I extract again), and there is our output-params text file with the expected content. So that's it about saving artifacts to S3.

53. More Concepts - AWS S3 as default repo: Hello. Now let's use the S3 bucket we created in the last lesson as the default repository for logging and artifacts. The only thing we have to do is edit the workflow-controller-configmap: kubectl -n argo edit configmap workflow-controller-configmap. In the data section, under artifactRepository, archiveLogs is set to true and under s3 the repository to be used is defined. We just change it to our bucket: the bucket is argo-course-bucket, the endpoint is s3.amazonaws.com, for the accessKeySecret we use aws-credentials-test with the access key ID as key, and the same secret for the secretKeySecret with the secret access key as key. That's it — save and close. Then I go to my folder and take the workflow-artifact-template.yaml we defined in the lesson about artifacts, completely unchanged. In the last lesson we added the S3 connection in the artifacts section, but that is not needed anymore, because S3 is now the default repository, so we can just submit the workflow: argo -n argo submit workflow-artifact-template.yaml. In the UI the workflow executes task 1, then task 2 and task 3, and under output artifacts there are the main logs and the artifact-out, both of which should now be saved to our S3 bucket. Let's check the argo-course-bucket: under workflow-artifact we can see the logs and the artifacts saved in the bucket, organized by pod name — in the UI we can check the pod name of task 1 by its last numbers, 1909, and find the corresponding main log, one of which also contains the artifact-out. So it's just this easy to use Amazon S3 as the default repository for logging and artifacts.

54. More Concepts - Archiving workflows: Hello. In this lesson I want to show you how archiving of workflows works. I just take the workflow-artifact-template.yaml we defined in the lesson about artifacts and submit it with argo -n argo submit workflow-artifact-template.yaml. In the UI the workflow is running, and on the left there is Archived Workflows — right now there is no archived workflow, so let's wait until this one is finished; then it should show up there. It finished, and indeed it is now archived. Let's resubmit it: that creates another workflow, and this one will be added to the archived workflows as well.
So this is an option to archive all workflows that run; the workflows are archived in a SQL database. The second one succeeded and appears in the archived workflows too. Let's take a look at where exactly this is saved. When we installed Argo, a PostgreSQL database was created as well, as a deployment: kubectl -n argo get pods shows a postgres pod — that is the PostgreSQL database in which, if archiving is set to true, all our workflows are saved. Let's do kubectl -n argo get service: there is the postgres service and the port it uses. To reach it we do a port-forward with that port: kubectl -n argo port-forward deployment/postgres with a local port and 5432 — on my machine port 5432 is not free, so I forward to 30401; you can use whatever free port you want. Now I open pgAdmin, the program we use to access our PostgreSQL database, and create a server. I name it argo-archive; the connection host is localhost, the port is the one we forwarded, the username is postgres and the password is simply "password" — both were set up when we installed Argo. I save it, and we have access to our archive database: the database postgres, the schemas, and the tables, among them argo_archived_workflows. Let's look at its data with View Data: there are our two workflows. Let's resubmit once more, so afterwards we should see a third one here and in the UI. Basically everything we need to know about a workflow is in this table — even the complete workflow spec in JSON format — plus the name, the UID, the phase (the status, here Succeeded), the namespace and so on. The Argo UI simply queries this PostgreSQL table and shows what it contains. Our third workflow succeeded, and after reloading the Archived Workflows page in the UI it is there as well.

Where can we actually configure all of this? Let's open another command line and do kubectl -n argo edit configmap workflow-controller-configmap. In the persistence section there is the option archive: if we set it to true, workflows are archived; if we set it to false, they are not. Below that we see where the workflows are archived: in our case a PostgreSQL database, and everything was set up during installation — the port, the database, the table name, and argo-postgres-config, the secrets to use. In case we want to change something, this is where we change it.
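For orientation, the persistence section of the workflow-controller-configmap looks roughly like this in a quick-start installation — the exact values depend on what your installation created, so treat this as a sketch rather than something to copy:

data:
  persistence: |
    archive: true                  # set to false to stop archiving workflows
    postgresql:
      host: postgres
      port: 5432
      database: postgres
      tableName: argo_workflows
      userNameSecret:
        name: argo-postgres-config
        key: username
      passwordSecret:
        name: argo-postgres-config
        key: password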
So until now we always deployed, or submitted, it with argo -n argo submit workflow-artifact-template.yaml. And here we can immediately see that it is submitted in the namespace argo, because here we have -n argo; I specified the namespace argo. So what happens if I do argo submit workflow-artifact-template.yaml, without specifying the namespace? Here we can see that it submits the workflow inside the namespace default. Why is this actually important? Let's go to our Argo UI. Here we can see the one workflow we just submitted, and let's just delete this one. Now we actually don't have any workflows here anymore. Let's take a look with argo list - there is still our workflow-artifact. So if we do argo -n argo list, inside our namespace argo there is no workflow. But inside our default namespace - if we don't specify the namespace and just do argo list - there is still our workflow-artifact. But here it has a problem, probably with permissions and so on. Our Argo server, everything, is deployed in the namespace argo, so our workflows, our workflow templates, and our cron workflows we also have to deploy, submit, and create inside the namespace argo. So let's just delete this workflow-artifact. One possibility we have, when we submit a workflow through the Argo CLI, is to specify the namespace as we did all the time, with argo -n argo. But there is another option: when we open this one, we can define it here in our YAML, in our metadata. We can just write namespace: argo, save it, and see what happens now. Now we are able to just do argo submit, and here we can see that, because it is in our spec, in our definition, it is deployed in the namespace argo. That's it about the namespace. It's not much to know, but it is important.

56. More Concepts - Service Account: Hello. In this lesson, I want to talk about the role service accounts play in Argo Workflows. If Argo wants to access features such as artifacts, outputs, or secrets, it has to communicate with the Kubernetes resources via the Kubernetes API. Argo uses a service account to authenticate itself when communicating with the Kubernetes API, and which permissions this service account has ultimately depends on which roles are bound to it. Let's take our workflow-artifact-template and submit it. What we can see here: because we did not define any service account to be used, it uses the default service account. But there might be a problem - a lack of permissions, so not sufficient permissions to do what you, or what the workflow, needs to do. To avoid this, we can use service accounts and explicitly tell Argo which service account is to be used. With kubectl we can check which service accounts we have. So let's do kubectl -n argo get serviceaccounts - we check which service accounts are available inside the argo namespace. Here we have the argo, argo-server, default, and github.com service accounts. By default we have the default service account; the others were created during the installation of Argo. So what we can use is the argo service account. Now we want to know which permissions we have with it. At first we can do one thing: again, kubectl -n argo get rolebindings. Here we can see argo-binding and its role, argo-role. And now let's take a closer look with kubectl.
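As a small sketch of the option from the namespace lesson above, the workflow manifest can pin its namespace in the metadata, so that a plain argo submit lands in the argo namespace. The generate-name shown here is only assumed:

```yaml
# Sketch: pinning the namespace in the workflow manifest itself,
# so a plain "argo submit" lands in the argo namespace.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-artifact-template-   # assumed generate-name
  namespace: argo
# spec: unchanged from the artifacts lesson
```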
We do kubectl -n argo describe rolebinding argo-binding. And here we can see that under Subjects there is the service account argo, and we can see which Role it uses. So this role binding binds the role argo-role to the service account argo. Now we know which role it uses. Let's see: kubectl -n argo get roles. Here we can see all roles that are currently available inside our argo namespace, and we want to know a bit more about argo-role. So we can do kubectl -n argo describe role argo-role, and here we see all the permissions this role grants. And finally, our service account argo can execute all operations that these permissions allow. And that's basically it: now we know which permissions we have with the service account argo. If we need more permissions, we can add them to this role, we can bind additional roles to the service account, or we can even create our own service account, our own role, and our own role binding according to our needs. Once we have the service account we want to use, we have two ways to use it with our workflow. The first: we can just pass it as an argument when we submit the workflow, so we can do argo --serviceaccount argo submit. But before submitting again, we have to delete the old workflow: argo -n argo list shows our workflow-artifact, so argo -n argo delete that workflow. And now we can do argo --serviceaccount argo submit workflow-artifact-template.yaml, and here we can see that the service account this workflow uses is argo. The second possibility: let's first delete our workflow again with argo -n argo delete. Then we open our spec, and here, inside the spec section, on the same level as entrypoint, arguments, and templates, we can use serviceAccountName and write the name - it is argo. Save it. Now we don't even have to use the argument with the Argo CLI; we can just do argo submit workflow-artifact-template.yaml, and it still uses the service account argo. So that's it about service accounts and how we can set them via the Argo CLI or in the workflow definition.

57. More Concepts - Exercise3 Task description: Welcome to the third task in this course. In order to solve this task, you are going to use the concepts and functionalities you've learned in this chapter. The task is to create a master cron workflow which should execute three tasks and which is triggered daily at 10 AM. The first two tasks, detect-bomb and detect-attack, should each call a workflow template with the name workflow-template-exercise2 to create a separate workflow. The workflow template should be derived from the workflow from task 2. 'bomb' and 'attack' are used as arguments for the two workflows. As soon as these two workflows have been successfully executed, a third task, detect-click, should be executed, which should detect the word 'click' in the emails CSV file from task 2. However, a separate workflow should not be created here; instead, use a reference to the DAG template in workflow-template-exercise2. We learned how workflow templates can be referenced with workflowTemplateRef; templates can just as easily be referenced within other workflow templates with templateRef. I am sure that you will resolve this even though we haven't explicitly discussed it.
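To make the second option from the service account lesson above concrete, a minimal sketch of the spec-level setting might look like this; the generate-name is again only assumed:

```yaml
# Sketch: setting the service account in the workflow spec instead of
# passing --serviceaccount argo to the Argo CLI.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-artifact-template-   # assumed generate-name
  namespace: argo
spec:
  serviceAccountName: argo   # same level as entrypoint, arguments, templates
  # entrypoint, arguments, templates: unchanged from the artifacts lesson
```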
Of course, you can use the official Argo Workflows documentation to help. So I wish you a lot of fun and, of course, good luck with this task.

58. More Concepts - Exercise3 Task solution: Hello and welcome to the solution of the third exercise. In order to solve this exercise, we are going to take the workflow definition of exercise 2 from the last chapter and change it into a workflow template according to our needs. So I open this workflow definition, and as kind I write WorkflowTemplate. Then I change the name in metadata to workflow-template-exercise2. We also want to define the namespace here explicitly, to make sure that this will be inside the argo namespace, and we want to define the service account name that we use - as serviceAccountName I use argo as well. This should be enough to use it as a workflow template.

Now let's come to the cron workflow master and start defining it. As kind we choose CronWorkflow. As the name I just choose cron-workflow-master. Here I want to set the namespace as well. In our spec, at first we have to define our cron settings. We want to trigger it every day at 10 AM, so as the schedule we choose "0 10 * * *" - this means the workflow will be triggered every day at 10 AM. Then we have the concurrencyPolicy; I just choose Replace, although it almost doesn't matter what I choose here, because we run this workflow only once per day. Let's continue with startingDeadlineSeconds; I just set it to 0. And now comes our workflowSpec. Here I define the entrypoint - I choose dag-master; this one I have to define later - and the serviceAccountName we have to define here as well.

Then let's define the templates, starting with our first template, dag-master. There we have the DAG tasks: the first task is detect-bomb, then we have the task detect-attack, and the third task is detect-click. So basically we only have three tasks. For the first task, detect-bomb, we are going to create a template, trigger-workflow. This one we can use for detect-attack as well; these two tasks are going to create separate workflows, in contrast to the third one. Therefore we are going to create a new template that creates a workflow resource according to our workflow template definition here. So we choose the template trigger-workflow, and as arguments we use parameters. Let's take a look at which parameter we need - here we can see that it expects one parameter. So here as an argument we can pass one parameter: this is word-detect, and here we call it word-detect as well, and as the value we choose bomb. One more thing we want: we define a parameter workflow-template with the value of our workflow template name, so I just copy it from here, workflow-template-exercise2. The same arguments I can use for our detect-attack task. This looks okay; only the word-detect value is attack instead of bomb. So we have already defined our first two tasks - basically, almost; what is left is to define the trigger-workflow template.

Let's come to our third task. This task should not create a separate new workflow, but should reference this workflow-template-exercise2 here.
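Here is a minimal sketch of the cron workflow skeleton described so far. The names (cron-workflow-master, dag-master, trigger-workflow, word-detect, workflow-template-exercise2) follow the transcript; the detect-click task and the trigger-workflow template are filled in after the next steps:

```yaml
# Sketch of the cron workflow skeleton described so far.
# Names follow the transcript; remaining templates are added below.
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: cron-workflow-master
  namespace: argo
spec:
  schedule: "0 10 * * *"          # every day at 10 AM
  concurrencyPolicy: Replace
  startingDeadlineSeconds: 0
  workflowSpec:
    entrypoint: dag-master
    serviceAccountName: argo
    templates:
      - name: dag-master
        dag:
          tasks:
            - name: detect-bomb
              template: trigger-workflow
              arguments:
                parameters:
                  - name: word-detect
                    value: bomb
                  - name: workflow-template
                    value: workflow-template-exercise2
            - name: detect-attack
              template: trigger-workflow
              arguments:
                parameters:
                  - name: word-detect
                    value: attack
                  - name: workflow-template
                    value: workflow-template-exercise2
```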
So at first we define our depends logic: detect-bomb.Succeeded && detect-attack.Succeeded. This task will only be executed once our other two tasks have succeeded. As arguments we use parameters, and here we only use word-detect, and the value is click - we want to check our emails CSV for the word click. And now, how can we actually use the template inside our cron workflow master? There we have the option: instead of template, we choose templateRef. At first we have to give the name of the workflow template - this is this one here - and then we have to say which template inside the workflow template we want to use. Let's take a look: our workflow-template-exercise2 starts with the DAG template; this is the entrypoint we defined there. This means we use that DAG template to be executed here inside our cron workflow master. One thing we shouldn't forget: the first thing is to define the argument, which we already did; the second thing is that we need our exercise2 secret, and we have to attach it as a volume as well, inside our workflow master. So here, in our workflowSpec, on the same level as templates, entrypoint, and so on, we can just copy the volumes from our workflow template - so this is without indent, here two spaces, and here two spaces for secretName. I still have my exercise2 secret on my cluster, on my Minikube cluster - I hope you do too - so now we can use it. And with our task detect-click we are basically done.

Now let's come to our template trigger-workflow, for the first two tasks, detect-bomb and detect-attack. Here we have to define our inputs with parameters: name word-detect - I just copy and paste it - and workflow-template, the workflow template we want to use for the newly created workflows. Then we use the resource template, and we define action: create and the manifest with the pipe operator. And now our workflow spec, our workflow definition, we have to write here. I just choose the same apiVersion, then the kind is Workflow. In metadata I choose generateName, and here I want to use at first our workflow-template input parameter - this input parameter I want to use for generating the name - then a minus, and after this I want to include word-detect from our input section as well. This means, in case we are detecting bomb, the workflow has bomb inside its name, and the same for attack. And then a minus. Then we define the namespace argo, and in the spec we define an arguments section with our parameters: name word-detect - this is the only one we need for our template here, as we saw - and the value we want to use is just our word-detect input parameter. Now we have to define which workflow, which workflow template, we want to use. Here we can use workflowTemplateRef with the name of the workflow template we pass in through the parameters inside our tasks in the dag-master template. Therefore we just use this one here; we get it through our input parameter. And that's it - almost. One thing we shouldn't forget is that we have to know when our workflow succeeded, finished, failed, or errored, in order to make our depends logic work. This we can do using successCondition with status.phase == Succeeded, and our failureCondition with status.phase in (Failed, Error). So now we can save everything and basically deploy it. So let's go to our folder.
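Continuing the skeleton above, a sketch of the remaining pieces might look roughly like this: the detect-click task referencing the DAG template inside workflow-template-exercise2, and the trigger-workflow resource template that creates a separate workflow. The template name dag-template stands in for whatever the exercise-2 template's entrypoint template is called (an assumption), and the exercise2 secret volume that also has to be copied into the workflowSpec is omitted here:

```yaml
# Sketch, continuing the templates of cron-workflow-master from the skeleton above.
# "dag-template" is an assumed name for the entrypoint template of
# workflow-template-exercise2; adjust to your own definition.
            # ...still inside dag-master's tasks list:
            - name: detect-click
              depends: detect-bomb.Succeeded && detect-attack.Succeeded
              arguments:
                parameters:
                  - name: word-detect
                    value: click
              templateRef:
                name: workflow-template-exercise2
                template: dag-template
      # second template of cron-workflow-master:
      - name: trigger-workflow
        inputs:
          parameters:
            - name: word-detect
            - name: workflow-template
        resource:
          action: create
          successCondition: status.phase == Succeeded
          failureCondition: status.phase in (Failed, Error)
          manifest: |
            apiVersion: argoproj.io/v1alpha1
            kind: Workflow
            metadata:
              generateName: "{{inputs.parameters.workflow-template}}-{{inputs.parameters.word-detect}}-"
              namespace: argo
            spec:
              arguments:
                parameters:
                  - name: word-detect
                    value: "{{inputs.parameters.word-detect}}"
              workflowTemplateRef:
                name: "{{inputs.parameters.workflow-template}}"
```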
Copy the path, open a command line window, and cd to it. Here we see our two YAML files. Now, at first, using argo template create workflow-template-exercise2.yaml, we create the workflow template - this looks good, namespace argo - and then argo cron create cron-workflow-master.yaml, the same, inside the namespace argo. Now let's take a look at our Argo UI, and here we have our workflow template and our cron workflow. So let's submit it. And here we can see our two tasks running in parallel. Let's take a look at our timeline, and we see that two extra workflows were created: here we have our cron-workflow-master, then workflow-template-exercise2-bomb, and here the attack one. And now it is just executing what we already know from exercise 2. This takes a while; let's take a look at the logs. So it downloaded the emails CSV file - this will take a moment - and now we're reading the emails. And here it is basically waiting until the other workflows have executed; then it gets the signal that it can continue, so here we have to wait. Now we are already in our loop; let's just wait until it's finished. It's detecting - so attack is detected in one email of our seven emails. And as we can see, an email is detected, and soon it should finish. Now it finished. Let's check here: our two workflows both finished, and soon we can see that the tasks have finished as well. And now our logic begins inside our cron workflow, the same logic from exercise 2. It's detecting click now: getting the source - so far everything as expected; at least here our emails CSV is downloaded, and now it's reading the emails. After that, the loop should start, detecting click inside the emails. Let's see how many clicks we have inside our emails. This looks good so far. While it finishes, let me just summarize: we created a cron workflow with, at first, two tasks. Each of these tasks creates a new workflow, referencing a workflow template and executing the logic, the definition, of this workflow template. And then we created a third task, detect-click, that references the template inside our workflow template and executes the logic inside our cron workflow master. And here we can see there is one detection of click. And that's it. Thank you for your attention and see you soon.

59. Summary: Congratulations, you have successfully worked through all the lessons of the course. Now let's summarize what you learned. After taking the first steps, you learned the core concepts of Argo Workflows and created a workflow for each core concept. In the workflow functionalities chapter, you got to know various functionalities such as input and output parameters, parameter files, artifacts, Kubernetes secrets as environment variables and mounted volumes, loops with sets and input parameters, dynamic loops, conditionals, dependencies, recursion, and the retry strategy. For each functionality you created a workflow to gain practical experience. The fifth chapter dealt with further concepts and functionalities, such as workflow templates, cluster workflow templates, cron workflows, referencing templates, creating master workflows, using namespaces and service accounts, using AWS S3 as logging and artifact repository, and archiving workflows. And finally, you ended each chapter with a written exercise in order to apply and consolidate the knowledge you learned. I hope that you had a lot of fun in my course and, above all, that you learned a lot.
If you enjoyed the course, please give it a good rating. With this, you are helping to keep the course alive. Now I say goodbye and wish you all the best.