Transcripts
1. Introduction: Hi there, welcome to this course on running Docker containers on the AWS Cloud with tools such as ECS, EKS, and Fargate. I am the instructor for this class. First of all, you will learn how to build a Docker image from a custom Dockerfile and then how to push that image into ECR, the Elastic Container Registry. It is just like Docker Hub: a place where you can store your Docker images on the AWS Cloud. Then you will learn about AWS Fargate, a serverless solution for running your containerized application without managing the infrastructure, and after that ECS with EC2, to fully manage your containerized application as well as the infrastructure. Along the way, you will also learn how to create and run tasks and services in an ECS cluster. At last, you will learn how to manage Kubernetes on AWS, covering topics like creating and running pods, replica sets, deployments, services, and other Kubernetes objects on an EKS cluster. In short, if you are curious to learn more about containers on AWS, then enroll in this course right now.
2. Building docker image locally: Hey friends, welcome back. Before getting started with ECR, the Elastic Container Registry, we are going to create a Docker image. You can find main.py, where I have created a simple Flask application that shows a special message: keep learning and keep moving ahead. I have used only one library here, Flask, which is used for building the web application. Now our task is to create a Docker image. Before creating it, let me show you that we don't have any Docker image on my system right now, and no containers are running either. One thing to keep in mind: when you are building a Docker image, the Docker Engine must be running. Now let me show you the Dockerfile and how it looks, and here I am going to use the docker build command to build the image. While the build process runs, let's dive into the Dockerfile. You start from a Python base image, then I create a directory and move into it, then we simply install the required library, which is Flask. After that I expose the port that my application will respond on, and at last we simply run the main file for this application.
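Putting those steps together, here is a minimal sketch of what such a Dockerfile and build command can look like. The file name main.py, the image name my-flask-app, and port 5001 come from this course; the exact Python base image tag is my assumption.

```
# Write the Dockerfile described in the lesson (base image tag is an assumption)
cat > Dockerfile <<'EOF'
# Python base image (downloaded from Docker Hub on the first build)
FROM python:3.9-slim
# Create a working directory and move into it
WORKDIR /app
# Copy the application code (main.py) into the image
COPY . .
# Install the only required library
RUN pip install flask
# The port on which the application will respond
EXPOSE 5001
# Run the main file when the container starts
CMD ["python", "main.py"]
EOF

# Build the image from the Dockerfile in the current directory and name it
docker build -t my-flask-app .
```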
As you can see on my screen, I don't have any cached layers before building this Docker image, so the first build consumes some time because it is downloading the Python base image from Docker Hub. By the way, the Elastic Container Registry is just an alternative to Docker Hub where we are going to store this Docker image, and then we can pull and run it from anywhere; the best part is that it is platform independent and lightweight as well. Now the build has finished and you can see everything is set up. Let me clear the screen and check whether our Docker image is ready or not: type docker images and you will find our image is there. Still, we don't have any container running, so now I am going to run this Docker image, and the container created from it will contain our application. You write docker run -it, where -it means interactive, then you map the port of your container to the port on your local machine through which you will send requests, and then give the name of your Docker image.
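As a hedged example, the run command and the Flask binding it relies on look roughly like this; the image name and port are the ones used in this lesson.

```
# Inside main.py the app must bind to 0.0.0.0, not 127.0.0.1,
# otherwise it is unreachable from outside the container:
#   app.run(host="0.0.0.0", port=5001)

# Run the image interactively, mapping host port 5001 to container port 5001
docker run -it -p 5001:5001 my-flask-app

# Then open http://localhost:5001 in the browser
```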
So it has started. But we got an error here: the page is not loading. Let me check why it is not working. If you run this application directly on your local machine, you don't need to worry about the host IP. But when you run your application from a Docker image, which later becomes a Docker container, it needs to bind to the right IP, and the IP you need to use here is 0.0.0.0. That's all it takes. Now let us open the application at this URL, and yes, it is working fine. This is the URL; you can also write localhost to access your application. So this is how you can simply create a Docker image locally and then run that image in the form of a Docker container. I hope you now understand how to do these Docker things. In the next part we are going to jump into the ECR section. So for now, keep learning, keep exploring, and stay motivated.
3. Creating Public ECR Repository: Hey friends, welcome back. In my previous lesson I showed you how you can create a Docker image locally, and in this lesson you are going to learn how to push that Docker image into ECR. From here you can create the repository. For an ECR repository you get two options, private and public, and then an option to give the repository a name; here I am just going to call it my-flask-app. You can also upload a logo for your repository, add a description, and choose the platforms on which your image is meant to work, so I am going to select the operating systems, Linux and Windows, and the architectures as well. You can also fill in the About and Usage sections with information for your users, but I am going to leave those for now. So our repository is created, and now we need to push the Docker image into it, because there is no image in there yet.
Now, let me give you some points about what exactly ECR is. It is a fully managed Docker container registry, just like Docker Hub, where you can store and manage your Docker container images and even deploy from there. It can be easily integrated with ECS, the Elastic Container Service, as well as EKS, the Elastic Kubernetes Service, which simplifies the whole development-to-production workflow. So now I am going to use the set of push commands shown on the repository page to push the Docker image which I created in my previous lesson up to this ECR repository. First of all, the very first step is to log in, and I have already copied that particular command; you can see that we have successfully logged in. Next we need to build the Docker image, which I already have, but I will run the build once again; it just repeats the same thing we did earlier. The next step is to tag your Docker image: we give a new tag to the image, and you can see that we now have a new image entry with this tag. The tag corresponds to our ECR repository; you can see the name is exactly the same as the repository URI. Finally, we need to push our Docker image to ECR, so we copy that particular command. As you can see, there are no images in our repository yet, but after running that command it will push this Docker image to ECR.
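For reference, the push commands for a public repository follow roughly this pattern; this is a hedged sketch in which <alias> stands for your public registry alias and the repository name my-flask-app is the one created above.

```
# 1. Authenticate Docker to the public registry (public ECR authentication uses us-east-1)
aws ecr-public get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin public.ecr.aws

# 2. Tag the local image with the repository URI shown on the ECR console
docker tag my-flask-app:latest public.ecr.aws/<alias>/my-flask-app:latest

# 3. Push the tagged image to the public repository
docker push public.ecr.aws/<alias>/my-flask-app:latest
```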
While the push is doing its job, let us discuss some points comparing Docker Hub and ECR. Both of them offer public as well as private repositories. ECR integrates with IAM for controlling who can push and pull your images, whereas Docker Hub doesn't give you that kind of control. Second, ECR comes with a 99.9% SLA, which Docker Hub doesn't match. Then, you can create immutable image tags in ECR, but there is no support for immutable images in Docker Hub. Image scanning is built into ECR: while you are pushing your images up there, it can scan them as well. Scanning exists in Docker Hub too, but for that you need a paid plan. As we know, public repositories on Docker Hub are free of cost, and you also get one private repository for free, but to create more private repositories you have to pay, and image scanning is charged as well. So a lot of things that are missing in Docker Hub are included in Amazon ECR. Well, these are some of the key differences between ECR and Docker Hub. And now you can see that our Docker image has been uploaded, pushed to this ECR repository. Here you get some of the image details: the image tag, the image URI, the repository name, a manifest, and so on. It also shows the size of our image, which is 344 MB; you can choose the Python base image accordingly to reduce the size, but I just used the default one.
Now, from here you can open the public registry, and since this is a public repository, you can view it in the Amazon ECR Public Gallery. Let us check whether our repository is there or not. There are a lot of repositories already present in this public gallery, and you can find that ours is there, and this is our image. So now anyone who has the link can pull this particular image through its URL on public.ecr.aws, followed by the registry alias, the name of our Docker image, and the tag, which you can click right here, and you get some information about it there. Next, I am going to change the display name which is shown for the registry. You can put your own name in the display name section or, if you want to stay anonymous and not show your name, you can simply write any random name or nickname. By default it has used my AWS account, which is the root account, so there is a default alias with a generated value, and the display name is whatever we put there. If I refresh, you can see the display name has changed while the registry alias and ID stay the same, and the URL stays public.ecr.aws followed by that alias. So if you make any changes here, they will also be reflected in the public gallery. I hope you now understand ECR. Now, keep learning, keep exploring, and stay motivated.
4. Creating Private ECR Repository: So let's get down to business. In the previous lesson I showed you how you can create a public repository; now in this part we are going to create a private repository. As you can see, we have got a number up there in the URL; basically it is the ID of the registry, used to uniquely identify this private repository. So we have successfully created the private repository, and there is no Docker image in it right now. Now we need to follow the first step, which is to retrieve an authentication token and then authenticate our Docker client to the registry. I have copied that piece of the command, and now we have successfully logged in to our private registry as well. The next step would be to build a Docker image again, but that is not required because we already have the image. Now we use the docker tag command to give a tag to this image. If you run docker images, you will see the tag for the public repository and the new tag for the private repository. Now that this is done, we need to push this Docker image to the private repository in ECR. With Amazon ECR we can have as many private repositories as we want; there is no limitation there. But in case you are using Docker Hub, you only get one private repository for free, and if you want to create more private repositories up there, you need to pay some charges, whereas in ECR you can create as many private repositories as you like.
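The private-registry commands differ from the public ones only in the endpoint; a hedged sketch with <account-id> and <region> as placeholders:

```
# 1. Retrieve an authentication token and log Docker in to the private registry
aws ecr get-login-password --region <region> \
  | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com

# 2. Tag the existing image with the private repository URI
docker tag my-flask-app:latest <account-id>.dkr.ecr.<region>.amazonaws.com/my-flask-app:latest

# 3. Push it
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/my-flask-app:latest
```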
Now, as you can see, we have successfully pushed our Docker image into this repository, so we have the same Docker image in the private as well as the public repository. Here you can see different options and different details for this particular image. Let us explore some of the other options that are there: I am going to scan this particular image, and it generates a report about this Docker image. Then we can explore some other points like lifecycle policies: if you want to remove your unused images, you can define after how much time they should be removed, so you can set different lifecycle policies up there. Here we have got some vulnerabilities, quite a big number of them actually. Don't worry; this is just a sample Docker image which I created, and it doesn't mean the image is unusable. This is the benefit of using Amazon ECR: you can do image scanning, and here you get some concise information, like there is only one critical vulnerability, then 15 high-severity ones, and the rest are lower-severity findings that are not of much concern, so don't be scared by the big number there. So that's all about how we can set up private and public repositories using Amazon ECR. If you have any kind of doubt, you can ask me in the Q&A section. So for now, keep learning, keep exploring, and stay motivated.
5. Creating cluster for AWS Fargate: Hello friends, welcome back. In this lesson you will learn about ECS, the Elastic Container Service. It is basically managed container orchestration hosted on AWS, through which you can manage your container lifecycle: it includes provisioning your containers, deploying them, scaling up, scaling down, networking, load balancing, and much more that you can do with this container orchestration tool. It is just like Docker Swarm, Kubernetes, or even OpenShift, which are also container orchestrators. Orchestration is required because once you have multiple containers it becomes quite complex to handle them, so it automates the process and removes the complexity. That's all about container orchestration and ECS. So let's create our first cluster here. You will get three cluster templates: one is Networking only, then EC2 Linux + Networking, then EC2 Windows + Networking. Let me discuss each of them. The very first template is Networking only. It is used to create an empty cluster, which is typically used for container instances that are hosted on Fargate or on external instances, as with ECS Anywhere, which includes on-premises servers as well as other clouds. Then we have EC2 Linux + Networking and EC2 Windows + Networking. They are both similar in that they both use EC2 instances; one is for running Linux containers and the other one for Windows containers, and the rest of the things are the same. Now here I am going to select the Networking only template, because the purpose here is to use AWS Fargate, which is the serverless option. Now you need to configure your cluster: give any name for your cluster, and you can also have a VPC created as part of this template, so I am going to create a VPC for it, and if you want to add some tags, you can even add some tags to it. So now we will have our ECS cluster.
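If you prefer the CLI, the bare "Networking only" cluster corresponds roughly to the single call below. The cluster name is hypothetical, and note that the console template additionally creates the VPC and subnets through CloudFormation, which this command does not do.

```
# Equivalent of the "Networking only" console template: just an empty cluster
aws ecs create-cluster --cluster-name my-fargate-cluster
```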
Here we are going to create Fargate services. AWS Fargate is serverless compute for containers: with the help of Fargate you only need to build a container image, then define the memory and compute resources required to run it, and then it will run and manage your application. The best part is that you only pay for the compute resources you request, for the time they are actually in use. It is not like running on EC2, where you need to pay for the whole EC2 instance; here the picture is different, and you pay only for the requested compute resources. That is the best part of using AWS Fargate. Here we are using AWS Fargate, and later on I am going to show you how you can create a cluster with EC2 instances as well. As you can see, our ECS cluster has been created; a few things are still being set up and it is still completing right now. So we have created the ECS cluster for AWS Fargate with the help of a CloudFormation stack: we have a cluster with a VPC and subnets in it to manage the serverless compute containers. In the next part I am going to discuss tasks and services as well, and then we are going to deploy our application. Here we also get an option for updating the cluster details, and you are also able to delete your cluster from this option. That's all for this lesson, as we have created the cluster here. So for now, keep learning, keep exploring, and stay motivated.
6. Creating task for AWS Fargate: Hey friends, welcome back. In the previous lesson I showed you how you can create the ECS cluster using the Networking only template for AWS Fargate. Let me revisit some of the points that are very important here. AWS Fargate is a technology that you can use with Amazon ECS to run your containers, and with it you don't need to manage EC2 instances. With AWS Fargate you don't need to provision, configure, or scale a cluster of EC2 instances to run your application, so it removes the problems of choosing the server type, deciding when to scale your cluster, how to optimize the cluster, and various other things. It simply charges you for whatever compute you request, and it is very easy to set up. Here I am going to show you how you can create the task, so let's get into it. This is the task definition page, and here you can define the task. Basically, we are going to specify the container information for our application: how many containers are required for this task, how many resources they are going to use, how they are linked to each other if you have multiple containers, on which port we are going to respond, and various other options. So let's click on Create new task definition. Here you will get three compatibility options: Fargate, EC2, and External, which is for on-premises. Now you need to configure the task and container definitions. You can either create a new task role or, if one already exists, just select it; then you give the task definition a name. The network mode is already selected, which is awsvpc, and you need to choose the execution role here, which is the same as before. Then you set the task size, the memory and the CPU. Now, the main thing is to add the container. I have already pushed one of my Docker images to Amazon ECR, which, as we saw, is just like Docker Hub, a place where you push your Docker images. So you give a name for your container and then you specify the image URI; here I am using the URI of the repository we pushed to earlier. You also need to provide the port number, with which you can do the port mapping for your application. This is my application, and here I have used port number 5001, so you need to provide that here.
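The same task definition can be registered from the CLI. Here is a minimal sketch, assuming the family name my-flask-task, small CPU/memory values, and a standard ecsTaskExecutionRole; all of these are placeholders except the container port, which is the 5001 used in this course.

```
# Write a minimal Fargate task definition and register it
cat > taskdef.json <<'EOF'
{
  "family": "my-flask-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<account-id>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-flask-container",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/my-flask-app:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 5001, "protocol": "tcp" }]
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://taskdef.json
```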
You can also add a health check, for example how often your container should respond and the timeouts, and then storage and logging, and resource limits so that a container does not exceed a given amount of resources. You can even add some Docker labels for your own understanding. You can also set up service integrations and various other things here, like volumes, if you want to give some extra storage to this container; my application doesn't require anything like that, so I have not selected a volume, but you could choose EBS or EFS there. Finally, this is the JSON form of my task definition. From the Actions menu I can now run my task, and there is an option for Update as well; these are some of the options you can use if you want to make changes later. That's all; in my next lesson I am going to show you how this task is going to work.
7. Running Tasks for first time: Hello friends, welcome back. Before running our task, let us revisit some of the important concepts I already discussed about Amazon ECS tasks. A task definition is required to run our Docker container on Amazon ECS. In it you set the Docker image, how much CPU and memory will be reserved for our application, what launch type we are going to use, like Fargate, EC2, or on-premises, and then you can set up networking settings, logging, volumes, and environment variables, as well as IAM roles and lots of other things, which we configure in that task definition. So here in this lesson I am going to show you how you can run the task. You need to select the launch type again, Fargate, and then select the number of tasks; here I am just going to run only one task. Then, for the network settings, you can keep all the default options. Now that everything is done, let us hit this Run Task button.
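The CLI equivalent of that button is roughly the call below; the cluster and task-definition names are the hypothetical ones used earlier, and the subnet and security-group IDs are placeholders you would read from your own VPC.

```
# Run one copy of the task on Fargate with a public IP so it is reachable
aws ecs run-task \
  --cluster my-fargate-cluster \
  --launch-type FARGATE \
  --task-definition my-flask-task \
  --count 1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxx],assignPublicIp=ENABLED}'
```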
And yes, as you can see, under the Tasks tab our task is now provisioning and the desired state is the running state. It will take a little time, and once it is ready you can easily access your application — the application delivered by the Docker image from which the Docker container is created, which is how you actually interact with your container. In the next part I am going to show you how you can access your application once it is in the running state. That's all; keep learning, keep exploring, and stay motivated.
8. Accessing the application and creating more similar tasks: Hello friends, welcome back. As you can see on my screen, our task is in the running state, and these are some of the details of our task: under Network you have the private IP, the public IP, and the ENI as well, and here you can find the logs of your application. This simply means that our application is running, that our container is in the running state. So now our task is to access our application, and to do so we need to adjust the network configuration. Before that, let me show you some more things about this task: as you can see, there is a bunch of information about our container. We didn't configure most of these settings, but still our application is in the running state. Now, this is our public IP, and I am going to use that public IP to access our application, together with port number 5001, which our application uses. As you can see, the site is not reachable, which simply means that something is still missing, and now we are going to figure it out, solve this problem, and then access our application. Here you will find the network interface ID; inside this network interface you will get information about your security group, and inside the security group you will find the inbound rules. The inbound rules define what kind of traffic will be accepted by your application, your container. So now I am going to allow port 5001 here inside the inbound rules, with the source set to Anywhere, and then save the rules.
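The same inbound rule can be added from the CLI; a small sketch, with a placeholder security-group ID:

```
# Allow inbound TCP traffic on port 5001 from anywhere (0.0.0.0/0)
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --protocol tcp \
  --port 5001 \
  --cidr 0.0.0.0/0
```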
Now that we have made these changes, you can see inside the inbound rules that our custom rule has been added to the security group that is attached to our task. So now, if I open that address again, it should work for us. Let me check one more time that the configuration we made is still there — yes, it is. Our configuration is complete, and as you can see, we are now able to access our application. So this is what you need to do: you have to add the inbound rule to be able to access your application. Now I am also going to show you something else. Suppose you are going to get high traffic; how is your application going to respond to that? Till now we have only one task running on AWS Fargate, but instead of running one task you can run more similar tasks, so that the traffic gets balanced between them. Let me show you how that works. You use the same settings, the default settings which we used earlier, and our new task is now in the provisioning state. Once it is ready, we need to make the same change again: go inside its network interface, then into the security group, and then change the inbound rules for this particular task as well. Let me do the same thing again: this is the inbound rule, we don't have the rule for our port yet, so choose Custom TCP, put in the port number through which you access your application, source Anywhere, and save. So we have successfully made the change for our newly created task. Now let us check whether our application works with both tasks or not, and again verify that the inbound rule is present. We have got another IP, another URL, so let us open it in a new tab, and you need to add the port number as well, which is 5001. As you can see, we are able to access our application on two different IPs, so we could spread the traffic across them. This is all about ECS with Fargate and how you can run your container image to deploy your application. So keep learning, keep exploring, and stay motivated.
9. Creating cluster for AWS ECS with EC2: All right, by now you are already familiar with the container-related keywords like containers themselves, container orchestration, how an ECS cluster works, and what AWS Fargate is; these are the things we have already discussed. In this lesson we are going to create an ECS cluster with the help of the EC2 Linux + Networking template. Let me review the concepts: we have three templates, one is Networking only, another one is EC2 Linux + Networking, and the third one is EC2 Windows + Networking. The first template creates an empty cluster, where we don't provision any dedicated infrastructure and are totally focused on the serverless side. Now we are creating an ECS cluster where we have EC2 instances as well, to have complete control over the infrastructure on which we run our tasks and services; the cluster runs on our own Amazon EC2 instances. So if you want to run a service or task on serverless infrastructure, go for AWS Fargate, but if you want more control over your infrastructure, then you use this template, where everything is set up with EC2 Linux + Networking. The only difference between EC2 Linux and EC2 Windows is that EC2 Linux will run Linux containers whereas EC2 Windows will run Windows containers; that is the only major thing you need to decide based on what kind of container you require. In both cases EC2 instances are created. Now, this is my CloudFormation stack. If you are already familiar with Azure, then you must know about resource groups; a CloudFormation stack is similar, a place where you can manage multiple resources together, so you can delete all of the resources just by deleting the CloudFormation stack. So now we have created the ECS cluster with the help of that CloudFormation stack. This is our EC2 cluster, and the status, as you can see, is active, and there are no services and no tasks running in it yet. Inside the ECS Instances tab you will find the container instance; let's check out some of its details. Here you get the name of your cluster, the EC2 instance ID, the operating system, which is Linux, the availability zone, the public and private IPs, and lots of other options. If you know EC2, this will be quite easy for you, because you are going to make some changes on the EC2 instance as well, so that we can easily access the application running inside the container. So let me show you this EC2 container instance. Here it is — this is our EC2 instance, and it is also in the running state, as you can see. It is a t2.micro, and the status checks are not fully completed yet; it is still initializing. Once our instance is ready, we will be able to connect to it. Here you also get the public and private IPs to communicate with the EC2 instance, and these are some more details of it. Now you need to go inside the security group, and here we need to set up some inbound rules so that we can access the application. As you can see, only one inbound rule is set up, so now I am going to add one more inbound rule: All TCP, and then save the rules. So there is effectively no restriction right now; it can accept any TCP traffic. In the next lesson I am going to show you how you can create a task and the services. So for now, keep learning, keep exploring, and stay motivated.
10. Defining Task: Hey friends, welcome back. In the previous lesson I showed you how you can create a cluster, the ECS cluster with EC2. Now in this part we are going to define the task. So give a name for your task here; then we need the EC2 launch type compatibility. After that come the usual task role and network mode, and here I am going to select the default one, and you just select the IAM role and the task size. For the task size you can put a number according to your needs; something small should be sufficient for this demo, so I am going to give it a modest value. Then, from here, you can set up volumes and various other options if you want to add something. You also need to define the container, and here you set the Docker image from which the container will run; this is my container, the Docker image which I have stored in ECR. It also asks for the particular port you want to expose on your container, through which you can access the application running inside it. This is my application, and the port number I used is 5001, so here I am going to use that number, 5001. You can give another number for the host side as well, like 80 or 8080 or something else. So these are the port mappings which I am doing here, for the host port and the container port.
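To make the host-port/container-port idea concrete, this is roughly what that part of the container definition looks like in the task definition JSON. The values are the ones used in this lesson, and the snippet is only an illustrative fragment, not a full task definition.

```
# Fragment of the containerDefinitions entry for the EC2 launch type (bridge networking):
# the host port must be unique across all containers running on the same instance.
cat > portmappings-fragment.json <<'EOF'
{
  "portMappings": [
    { "hostPort": 5001, "containerPort": 5001, "protocol": "tcp" }
  ]
}
EOF
```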
You can also add some more storage and security settings, Docker labels, and resource limits here, but I don't think anything else is required for my application. Now we have got an error: "Host port must be unique across all containers." So I am going to remove the duplicate mapping it is complaining about; keeping just the 5001 mapping should be fine here. Now I have added the container; you can also add a health check if you want. The mapping I ended up with is TCP on port 5001. That is what I have done here, and now our task definition is created. In the next part I am going to show you how you can run your task and how you can access your application. I have left the other options out because I don't require them now, but you can use them if you already know a bit about Docker and Docker-related things. That's all; keep learning.
11. Running Task: So now I am going to run our task. Here you can see the task definitions which I have already declared; this is the definition we configured in the previous lesson. Under Actions we have some options: you can run your task, or you can create a service from that task definition, and so on. ECS services, which are for running your containers for a longer period of time, or for keeping some containers running all the time, are something I am going to discuss in the next lesson. Now we have run the ECS task; as you can see, it is in the pending state, and once it changes into the running state, which is the desired state here, we will be able to access the running container and the application running inside it. By the way, when you create a service, it eventually creates the tasks for you, and they are self-healing: whatever happens, if any error comes up and a task fails, the service will run a new task in place of the failed one. So our task is now running, which simply indicates that our application should also be running. Here you get some of the options; let me open each of them in new tabs. You can see the public IP and the private IP; as before, we will use the public IP to access our application, and you can see that we are able to access it. This works because we have already set up the inbound rules, so we don't get any errors here. Let me open the other one quickly: here you have the public and private IPs, the state, the running count, which is one here, and the resources, the memory and the ports it is utilizing — you can see all of those things there. And that's all; this is how you can create the task and access your application. In case you want to update your task, you can do that, and in case you want to run the same task again, you also have that option. So it is very easy to run your containerized application up there with the AWS container orchestration options like ECS with Fargate and ECS with EC2; we have got lots of options there. That's all; if you have any kind of doubt, you can ask me in the Q&A section. So for now, keep learning, keep exploring, and keep moving ahead.
12. Creating and Running Services: All right, in this part I am going to talk about ECS services. Before jumping into it, I am going to discuss some of the important things related to ECS services and ECS tasks. You might be confused between these two terms, tasks and services, because they seem to do the same thing, but the difference between them is that an ECS task is used for short-term work, for short-term goals, whereas an ECS service is what you use when you want to run something for a longer period of time. In the previous lesson I already showed you how to create an ECS task and how to run a container with the help of a task. We defined the task with the help of a task definition, where we configured settings related to the containers, like which Docker image to use and on which port to expose it, and then the CPU and memory utilization; you also learned about environment variables, volumes, and various other options we saw while defining the task. Let me summarize it in simple words: running a task is like launching a container that will stop after some time, because it is for the short term. But when you are talking about ECS services, the service guarantees that some number of tasks will be running all the time, which gives you high availability and self-healing. For example, suppose one of your containers stops because of some error, and you want that particular container to be running all the time; then you need an ECS service. If the container stops due to any error, it will be run again — actually, a new task is created in its place. That is the major difference between an ECS task and an ECS service. So now I am going to create an ECS service. All of the configuration we set up earlier remains the same here, so let's get into it. This is the dashboard, and here you need to configure your service. First of all, you give the name of your service, then the launch type, which we have already selected, then the number of tasks; I am going to put one here, meaning that at any point in time that one task, that one container, must be running at all times. You can also put load balancing settings here; I am just going to set it to None. Then there is an option for auto scaling as well; you can see it is also optional, and I am going to choose not to use auto scaling here, because if I enable these features it is going to cost money, and I am using the free tier, which doesn't really let me use them right now. Load balancing would spread the load between different containers, and auto scaling means that when the load goes up, that particular service will be scaled and the number of containers raised. So these are the things which you can configure while creating the service.
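From the CLI, the equivalent service creation looks roughly like this; the cluster, service, and task-definition names are the hypothetical ones used in these lessons.

```
# Create a service that keeps one copy of the task running at all times
aws ecs create-service \
  --cluster my-ec2-cluster \
  --service-name my-flask-service \
  --task-definition my-flask-task \
  --desired-count 1 \
  --launch-type EC2
```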
Now you can see that our service has been created, and it is in a running state, as you can see here; the service created one running task. And here you can see the two ports I have labelled, and you can see that it is serving our application. So I used the task definition which I defined earlier, created the service from it, and that service has created the task — only one task. Now, this is our EC2 instance; you can see the instance is running right now, and there is a lot you can see here, like the public IP, the private IP, and so on, then the different options that are there: you can see the registered and currently available CPU, the memory, the ports, and various other things. From here you can also update the ECS agent, and you can deregister this container instance as well. So this is my cluster, this is my container instance, and this is the running service. From here you can also update the configuration of your service; I don't want to make any update here, but you can see that there is a placement strategy there too, which determines how a new task will be created once any container fails due to some error. This is just like a ReplicaSet in Kubernetes or any other container orchestrator, which maintains the number of replicas; in the same way this service is doing that job here. And now I am going to delete this particular service; from here you just need to type "delete me" and it is deleted, and along with it the task which was created with the help of the service will also be deleted. That is what is going to happen. So this is how you can create ECS services and how they will run your container, your application. That's all for now. Keep learning, keep exploring, and keep moving.
13. Installing eksctl: All right, before getting started with Amazon EKS, we are going to discuss the command-line utility eksctl. This is a command-line utility used for creating and managing Kubernetes clusters on Amazon EKS, through which you will be able to create your cluster very quickly and very easily, and the best part is that it will also create a node group along with the cluster. For that, you must have the eksctl tool installed on your system. If you already have Chocolatey, then you can simply run the install command; otherwise you need to install that package manager first. This is for Windows users, as I am on Windows, and I have already installed Chocolatey. Now, let me run the command, and as you can see, there is a prompt saying that eksctl is already installed on my system. In case you want to update eksctl instead, you copy the choco upgrade command with the -y flag, which auto-confirms the prompts, followed by the eksctl package name. Okay, we got an error here; that just occurred because the shell doesn't have administrator rights on Windows — administrator rights simply mean you need to run the command with admin power. So I have opened a command prompt as administrator to install eksctl; actually, it is already installed, so I am just upgrading it. Now let me check the version of eksctl. Before that, you need to add the environment variable for this command-line tool: open "Edit the system environment variables" and add a new entry to the Path, pointing to the directory where the eksctl binary resides; once you add it, eksctl is easily accessible from anywhere in the command prompt.
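In short, the commands used in this lesson look like this; run them in an administrator (elevated) shell.

```
# Install eksctl through Chocolatey, auto-confirming prompts
choco install eksctl -y

# If it is already installed, upgrade it instead
choco upgrade eksctl -y

# Verify that the binary is on the PATH and check its version
eksctl version
```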
Now I am going to run eksctl version to check whether we have installed it successfully or not, and as you can see on my screen, we get version 0.70, which simply means that we have successfully installed the eksctl tool on our system. That's all; in the next video we will create the Kubernetes cluster. So for now, keep learning, keep exploring, and stay motivated.
14. Creating Cloudformation stack: Hello friends, welcome back. In this part we are going to create the CloudFormation stack, and here I am going to deploy one of my templates which I have already created. This is the template, cf-template.yaml, and I am going to share this template file with you. It contains a bunch of information about how the CloudFormation stack will create the different resources and wire them together. Before going through the whole template, let me kick off the process, because it is going to take a lot of time; it's better to start it now and then jump into the template part. As you can see, let me look for CloudFormation: here we have the CloudFormation service, which is able to create managed resources with the help of a template. It is just like a resource group on Azure, where you can manage multiple things, multiple resources, in one place. It sits in the Management and Governance section, so you can either go through that section or open CloudFormation using the search bar. Here you can see we already have one stack, for an Alexa skill. You also get a Create Stack option, through which you can create a stack the GUI way with the help of the AWS console: if you have the template in S3 you can paste the S3 URL, or you can upload the file. Leaving that aside, we are going to set up this CloudFormation stack through the command line, and I have started the process.
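A hedged sketch of the CLI calls used here; the stack name is a placeholder, and cf-template.yaml is the template file discussed in this lesson.

```
# Create the stack from the local template file
aws cloudformation create-stack \
  --stack-name eks-vpc-stack \
  --template-body file://cf-template.yaml

# Block until the stack has finished creating
aws cloudformation wait stack-create-complete --stack-name eks-vpc-stack
```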
It says it is waiting for the change set to be created and then waiting for the stack create or update to complete, so the process has started. Well, as I said earlier, it is going to take a lot of time. Let me check whether it has created anything yet: it has created the stack, and the status is create in progress. You can see that 27 events are happening; some of them are completed and some of them are still in progress. So let us have a look at the template. You can see that here we are actually creating the VPC, then we are creating the subnets and adding their metadata; in the subnet part we have created two public subnets and two private subnets, then you set up the route tables for them, then you attach everything together — you can see a lot of information in there — and then everything is associated with everything else. So this is the whole template, and because going through the entire thing would take a lot of time, I am going to share this template with you; you can have a look and use it to create the CloudFormation stack. Now, as you can see on the screen, we have successfully created this CloudFormation stack. You may find that some of the resources are still in progress, but it doesn't matter, because the things we need are already created and deployed. We will have a look at those resources in the next lesson. So keep learning, keep exploring, and stay motivated.
15. Starting up minikube cluster: Let's get down to business. Till now we have set up the eksctl command-line tool to create and manage Kubernetes clusters on Amazon EKS. Now, in this part, if you want to do some trial runs on your local system, there should be a Kubernetes cluster running locally as well, and for that we use minikube. Minikube creates a single-node Kubernetes cluster on your local system, through which you are able to experiment. For beginners, I would suggest doing some dry runs here before jumping into Amazon EKS, because Amazon EKS is not available in the free tier, and in case you create a new cluster and are not able to run your application, it is going to cost you quite a bit. So the first thing is to start the minikube cluster. By default the driver is VirtualBox; since I have Hyper-V capability on my system, I am selecting the Hyper-V driver. If you don't have Hyper-V either, you can also use the Docker driver, but then you need to install Docker and keep the Docker Engine running in the background.
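These are roughly the commands used in this lesson; swap the driver for docker or virtualbox if Hyper-V is not available on your machine.

```
# Start a single-node local cluster using the Hyper-V driver
minikube start --driver=hyperv

# Check that the cluster components are running
minikube status

# Check the client and server versions of the tools we will use later
kubectl version
eksctl version
```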
Now you can see on the screen that it has started the control plane, which is the main master node here; since this is a single-node cluster, there is only the master node and no separate worker nodes. It is now preparing Kubernetes on Docker 20.10 inside the minikube VM. You can see on the screen that it generates the certificates, brings up the control plane, configures the RBAC rules, and verifies all of the Kubernetes components. Now, if you check the kubectl version, as you can see, that simply signifies that we have successfully started the Kubernetes cluster; these are the versions shown on the screen for the client side and the server side. Let me also show you the version of eksctl again, because both of these tools are required for the upcoming lessons. Other than this, I don't think anything else is required here — well, one more thing is required, the AWS CLI, because through it you are able to access the AWS resources and manage them from the command line. That's all for now; keep learning, keep moving, and stay motivated.
16. Creating Pod with ECR image: Till now we have set up our Kubernetes environment, like installing the eksctl command-line tool for creating and managing clusters on Amazon EKS, and then we started a minikube cluster for dry runs on our local system. Now, in this part, we are going to focus on the pod, and I am also going to create one. What is a pod, basically? It is the smallest unit of computing that you can create and manage inside a Kubernetes cluster. Let me explain with a simple example. Suppose you have a hotel, and that hotel is the Kubernetes cluster. Inside that hotel you have different rooms, and each room is a pod. Inside your room there is a table, there is a bed, there is a chair — all of those things are basically containers. I hope you got it. So a pod is a group of one or more containers which share the same storage and networking resources in order to run those containers, just like the table and the bed inside one room share the same space and the same room number. That is how you can picture it, and I hope you now have a clear picture of what a pod is inside a Kubernetes cluster. Now, in this lesson you will learn how you can create a pod with the help of YAML, so let's do it. First of all, we need to create a simple YAML file in the editor, and I am going to name it pod.yaml. The very first thing which you need to put here is the API version, which is v1 for a Pod. Then you give the kind: there are lots of objects which you can create in Kubernetes, and Pod is one of the kinds. Then you give the metadata; in the metadata you provide the name of your pod, so I am going to name it my-first-pod, and I am also going to add some labels to this pod, because whenever I create some similar pods, through these labels I will easily be able to manage all of the pods carrying the same label. For now, I am just putting one label here. After the metadata, you define the spec, the specification. You are going to run the container inside the pod, so you need to provide the template for your container here: I am going to run one of my Docker images as the container, and you need to name the container as well, so I have just taken the name of my pod and added a container suffix. Now I am going to open my ECR repository — this is my ECR repository — and here I am going to use my public repository image; you could use the private repository as well, and later on I am going to use the private one too. So you can see that we have defined four important things: the API version, the kind, the metadata, and the spec, and in the spec you have given the template of your container, the name of your container, and, at last, the image from which it will create the container.
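Putting that together, here is a minimal sketch of the manifest described in this lesson; the pod name and port are from the lesson, while the label key and the exact image URI are assumptions (<alias> stands for the public registry alias).

```
# pod.yaml as described in the lesson
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod
  labels:
    app: my-flask-app
spec:
  containers:
    - name: my-first-pod-container
      image: public.ecr.aws/<alias>/my-flask-app:latest
      ports:
        - containerPort: 5001
EOF

# Create the pod from the manifest (covered in the next step of the lesson)
kubectl create -f pod.yaml
```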
Before creating the pod, you can check that minikube status shows everything in the running state: the host is running, the kubelet is running, the apiserver is running, and everything is set up. Now I am going to create the pod with the help of the kubectl command. Before that, let me show you that we don't have any pod running yet; you can see only one Kubernetes resource, the default service, is there. Now I am going to create this pod, and to create a pod from a YAML file you use kubectl create -f followed by the name of your YAML file, which is pod.yaml here. What happens is that if that particular image is already in my cache, it will run almost immediately, as you can see here — it was already in my cache, so it started right away. In case the image is not in the cache, it will pull that particular image first, which takes some time; since it was cached, it just ran immediately. Now I am going to expose this pod. You give the name of your pod, then the name of your service, something like my-first-pod-svc, and then the port number. The application packaged into the Docker image listens on port 5001, so that's why I have given 5001 here. You also need to define the type of your service; the one I am going to use here is NodePort — the other common one is LoadBalancer. So I'll just use NodePort here, and the service is created.
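The expose-and-access steps, as a hedged sketch; the service name is the one assumed above, and the node port in the output will differ on your machine.

```
# Expose the pod through a NodePort service on the app's port
kubectl expose pod my-first-pod --name=my-first-pod-svc --port=5001 --type=NodePort

# Find the node port that was allocated and the IP of the minikube VM
kubectl get svc my-first-pod-svc
minikube ip

# Or let minikube print the full URL directly
minikube service my-first-pod-svc --url
```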
Now we are going to access the application which is running inside the container; that container is inside the pod, and that pod is inside the Kubernetes cluster — that is the hierarchy you can see here. This is the IP of my minikube VM. To access your application, you use this minikube IP together with the node port, which I found from the minikube service command, and the service name, which is my-first-pod-svc, as we saw. And voilà, it generates a URL, and if we open it, my application is working fine. That is the IP of my minikube VM and the port you found in that service output. So that's all; this is how you can create a pod and run it in your cluster, the Kubernetes cluster. In my next lesson I am going to talk about other Kubernetes objects, and we will dive deeper when we deploy this pod on EKS. That's all; keep learning, keep exploring, and stay motivated.
17. Creating Replicaset and Scaling UP and DOWN: So in this part we are going to create a ReplicaSet for our pod. Basically, a ReplicaSet is used to replicate your pod. Till now we have only one pod, which is running our container, and suppose the traffic, the demand, is now increasing and you need to handle it; then you create the ReplicaSet so that it can balance the traffic. If the traffic is very high, you can scale up your pods, and if the traffic is too low, you can scale them down. And the best part of using a ReplicaSet is that when you delete a pod, or anything accidental happens to it, a new pod is created in its place. That is why a ReplicaSet is a very, very useful Kubernetes object. Here you can see the template which I am using for the ReplicaSet; it is similar to the pod one — I just changed the kind, and inside the spec I have provided the pod template, and inside the pod template the container. So this pod is a subset of the ReplicaSet, and the container is a subset of the pod. Then I define the replicas, that is, how many copies I want: this number of pods should be running whenever I want to access my application, and here I have given the number 3. At last, you need to mention the label selector so that it will attach this ReplicaSet to the pod with the same label which is already running up there on our Kubernetes cluster. So let us create the ReplicaSet.
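A minimal sketch of that manifest, reusing the pod template and label from the previous lesson; the ReplicaSet name is an assumption.

```
# rs.yaml: the pod template from before, wrapped in a ReplicaSet with 3 replicas
cat > rs.yaml <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-first-pod-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-flask-app
  template:
    metadata:
      labels:
        app: my-flask-app
    spec:
      containers:
        - name: my-first-pod-container
          image: public.ecr.aws/<alias>/my-flask-app:latest
          ports:
            - containerPort: 5001
EOF

kubectl create -f rs.yaml
kubectl get rs      # rs is the short form for replicaset
kubectl get pods    # three pods carrying the same label
```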
Okay, we got an error here; it was an indentation problem in the YAML, so let us fix it and re-run the command, and you can see that our rs is created — rs is the short form for replicaset. Now you can see that for the ReplicaSet the desired number is 3 and the running number is also 3, which simply means that three pods are running right now. Two of these pods were created just now, and the previously running pod is still running, but now under that particular ReplicaSet, because of the label which we attached to it. Now I am going to show you some of the other features of this ReplicaSet. You can write kubectl describe rs followed by the name of your ReplicaSet, and here you can see some of the events: the two pods that were created, the desired replica count of three, and the number currently running — this is the information you get from the describe command. With kubectl get pods you can see the three pods. Now I am going to delete one of the pods and see what happens: write kubectl delete pod and choose any of them, so I'll delete that particular pod. While it is deleting, let us open a new tab or window and see how long it takes to tear down the pod and its container — interestingly, it takes quite a while. So let us check here with kubectl get pods: you can see that the pod which we deleted is terminating, but as soon as the termination process started, a new pod was created almost instantly. This is the feature of the ReplicaSet, and it makes our application more highly available. As well as that, with the help of a ReplicaSet you can scale your application, your pods, up and down; let me show you that feature too. For it, you can either change the number of replicas inside your rs.yaml and then re-apply that particular definition file to change the replica count, or, instead of doing that, it is better to use the kubectl scale command with replicas as an attribute and the number of replicas you want to have.
command and basic care. And instead of R is Steph Curry. I'm going to put that in
my first part, dash iris. So till along we have
only three parts are running the replica
number or set to three. Now instead of three, it will going to kill 25. So let's check it out
that how it works. Okay, so now I'm going
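The scale command I'm about to run looks roughly like this; the ReplicaSet name is again a placeholder:

    # scale the ReplicaSet to five replicas without editing the YAML
    kubectl scale rs myfirst-rs --replicas=5

    # scale back down the same way
    kubectl scale rs myfirst-rs --replicas=3

    kubectl get pods    # watch pods being created or terminated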
Okay, so now I'm going to run this command with the number of replicas, which I have set to
five here, okay? Now you can see the message that our ReplicaSet has been scaled. Let's
check the number of pods, and you can see two new pods are created. Okay, so this is how
you can scale up your application with the help of the ReplicaSet. Now in case you want to
scale down the number of pods, you can just change the number of replicas in the same way,
and you can see that three of the pods are now in the terminating state. So this is how you
can use the ReplicaSet for scaling up and scaling down.
Okay, so that's all about the ReplicaSet. In the next lesson, we're going to learn some
other things related to Kubernetes. For now, keep learning, keep exploring,
and stay motivated.
18. Configuring Kubernetes cluster: Till now we have seen how you can create a pod and run
it on a Kubernetes cluster, then the ReplicaSet and how it makes our application highly
available, along with scaling up and scaling down and lots of other things which we have
seen earlier. Okay, so here we are going to create the cluster. Actually, we're changing
the cluster configuration to the Amazon EKS one. Okay, till now that particular cluster was
set up for the local environment.
Now in this part, we're actually changing the cluster configuration. Okay? So that
configuration, which you can see I have attached as a file, I basically found on the eksctl
homepage. Okay, so I've copied that particular thing from that page and, yep, started
creating my cluster. And again, the file which I have used here to create the cluster will
be shared with you as well. Now let us explore some of the points about this cluster. You
can see that we have defined different subnets, then the public and private subnets, then
the VPC. Let me show you each of them. So this is the subnet section, and here you can see
where it is. Okay, so let me see. Yeah, here you can see this stack name, which is the name
of our CloudFormation stack. Then we have four subnets, two private and two public, which
we have created with that template file. Okay? And you need to put the ID for each of them.
Okay, then we have the VPC. Let me show you the VPC as well. So this is the VPC which we
have created, and this is the CIDR, just the default CIDR which we have attached to it.
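To give an idea of the shape of that file, here is a rough sketch of an eksctl cluster config that reuses existing subnets; the cluster name, region, subnet IDs, instance type, and node counts are placeholders, not my exact values:

    # cluster.yaml -- sketch of an eksctl ClusterConfig (all values are placeholders)
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: my-eks-cluster
      region: us-east-1
    vpc:
      subnets:
        private:
          us-east-1a: { id: subnet-0aaaaaaaaaaaaaaaa }
          us-east-1b: { id: subnet-0bbbbbbbbbbbbbbbb }
        public:
          us-east-1a: { id: subnet-0cccccccccccccccc }
          us-east-1b: { id: subnet-0dddddddddddddddd }
    nodeGroups:
      - name: worker-nodes
        instanceType: t3.medium
        desiredCapacity: 2

    # creating the cluster from the file is the step that takes 20-25 minutes:
    # eksctl create cluster -f cluster.yaml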
Now, let me show you one more thing. Actually, there are no EC2 instances running right
now, but they will be running as soon as our cluster is ready, okay? It will take a lot of
time, so it's better to leave this alone for a while, have a cup of tea or coffee, whatever
you like, and then come back to this place. It is going to take at least 20 to 25 minutes
to set up all of the things, since it is going to create the EC2 instances as well. But let
us have a look at this cluster; it is still creating a lot of things, and you can see that
this stack is in the create-in-progress state. Since it is going to take quite some time,
we will see the next part in the next lesson. For now, keep learning, keep exploring, and
stay motivated.
19. Creating Deployment and Service: So welcome back. You can see that our cluster is now
ready. Okay, so after a long time, it created our cluster, and you can see the message at
the bottom that it has successfully created the cluster. Now it is ready to be used. Okay?
We have the node group; we have everything we can see here. So now let me show you one
thing, which is the node, okay? So just write kubectl get node; it is going to list the
nodes. Okay, we don't get so much information here, so I'm going to add -o and then wide,
which simply means you will have a more detailed output. So just write kubectl get node
-o wide.
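Roughly, the node commands here are:

    kubectl get nodes            # basic listing of worker nodes
    kubectl get nodes -o wide    # adds IPs, OS image and container runtime columns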
So you can see here we have the name of our node and its status, and you can also see the
OS image; we have Amazon Linux 2, and the container runtime which is running inside this
node is Docker. So this is how we have successfully configured our EKS cluster and
connected to it from this machine. Okay? Now this is our Elastic Kubernetes Service, and
here we have completed so many steps. You can see we have firstly created the
CloudFormation stack, then we created the node group and configured the EKS cluster. And
now we are moving ahead to deploy our application into this EKS cluster. Okay, so inside
the cluster, inside the networking section, you can see the subnets which we created and
the security groups; those are the things which we set up. You can see here that all of the
things are now ready.
So now I'm going to create the deployment, okay, the Deployment object of Kubernetes. So
I'm going to create a new file and just name it deployment.yaml. Meanwhile, we need to
create the service.yaml as well. Okay? Now this deployment.yaml is just similar to the
replicaset.yaml; the only difference is that you can do quick updates and easily roll back
with a Deployment. That's it. All of the features which come with the ReplicaSet are
available in the Deployment as well, because the ReplicaSet is a subset of the Deployment.
Okay? Now, I've created the deployment.yaml.
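As a rough sketch, assuming the same placeholder labels and image as before, the deployment.yaml looks along these lines:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myfirst-deployment    # placeholder name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: flask-app          # placeholder label, must match the pod template
      template:
        metadata:
          labels:
            app: flask-app
        spec:
          containers:
          - name: flask-container
            image: <your-image>   # placeholder, e.g. the image pushed to ECR earlier
            ports:
            - containerPort: 5001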
Now we need to create the service.yaml file as well, so that we can easily expose our
application. Okay, let us write the apiVersion, which is v1 here, then the kind, which is
Service here. Then inside the metadata you need to give the name of your service; I'm just
going to use a name ending in svc. And then you need to give the specification of the
service: the type of your service, which is NodePort here, okay? And the port numbers as
well. You need to give a selector, and I'm going to attach the same label with this
service. These labels have lots of advantages; I told you earlier that you can manage lots
of resources and objects by controlling that particular label. Okay? Now here I need to
specify the port. This is the particular port number our application is going to listen on,
okay, so you need to put 5001 up there. And that's it. Actually, I think we need to add
something more here; I'm going to put the same port in the targetPort as well. Sorry, for
the nodePort you can give any port number in the allowed range; in the port you give the
service port number, and the targetPort is the port the container listens on. You could
also use 80, but I'm going to keep the same 5001 here. Okay, so this is the YAML for our
service.
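Putting those pieces together, the service.yaml I'm describing looks roughly like this; the name and label are the same placeholders as before, and the nodePort line is optional (Kubernetes picks one from the 30000-32767 range if you leave it out):

    apiVersion: v1
    kind: Service
    metadata:
      name: myfirst-svc        # placeholder name
    spec:
      type: NodePort
      selector:
        app: flask-app         # must match the pod labels from the deployment
      ports:
      - port: 5001             # port the service listens on inside the cluster
        targetPort: 5001       # port the Flask container listens on
        nodePort: 31001        # placeholder; must be within 30000-32767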
In the next part, we're going to apply the deployment YAML as well as the service YAML.
Till then, keep learning, keep exploring, and stay motivated.
20. Accessing application: Hi and welcome back, friends. In my previous lesson, I created
the YAML files for the deployment and the service. Now, you can see that we haven't applied
them yet. Okay, with kubectl get we'll have a look at what things have been created, and
you can see that we have only one service right now. Now we need to apply this particular
YAML file, the deployment.yaml file, which is going to create the ReplicaSet and the pods,
and then the service as well. So you can see we have applied that deployment.yaml file.
Now let us check.
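The apply-and-check sequence here is roughly:

    kubectl apply -f deployment.yaml     # or kubectl create -f deployment.yaml
    kubectl get deployments              # watch READY go from 0/3 to 3/3
    kubectl get all                      # pods, ReplicaSet, Deployment and services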
So you can see it is not in the ready state right now, but the requirement is three; you
can see 0 out of 3 right there. And I'm going to show you the deployment process in real
time, how much of the deployment is ready. Okay, and you can see that it is now ready. Let
me clear up this screen, or you can use another tab as well, and just write kubectl get
all. Previously we just had the service; now we have some more things. You can see three
pods are running right now, and there's one deployment, and one ReplicaSet is there so that
you can maintain the number of replicas, okay? In this way, we have created the deployment.
Okay, now I'm going to deploy the service as well. To deploy the service, you need to write
kubectl create -f and then the YAML file for your service, which is service.yaml. Let us
just check whether everything is fine or not, and I think everything's fine. Okay, just
write down that service.yaml, and it will create the service for you. Then just write
kubectl get svc, svc being the short form for service, and you can see it is created. So
this is very simple. You can see the cluster IP through which we can access the running
pod, the container inside it, using the service.
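For reference, the service steps and the way you then reach the app look roughly like this; the file name and port values are the placeholders used earlier:

    kubectl create -f service.yaml   # create the NodePort service
    kubectl get svc                  # svc is the short form for service

    # the app is then reachable on a worker node's public IP at the nodePort,
    # for example http://<worker-node-public-ip>:31001
    # (the node's security group must allow inbound traffic on that port)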
Now, this is my EKS cluster, and here inside the configuration you can see that all the
things are there. Now we need to access that particular application. You need to open your
EC2 instances, okay? The worker nodes are there. So let me open any of them. Okay, let me
open this public worker. You can see the instances are running. So this is the public
worker, and here it is in the running state. You can see it has a public IP with which we
are going to access
our application. Now, this is the IP
which I'm going to use and put down
that particular IP. And you need to put the port number as
well, which is this. And we need to put that
particular port here. And you can see that
our application. So in this way, you can create
the Kubernetes cluster, apply your parts, ReplicaSet, and services as
well. That's all. Keep learning, keep exploring,
and stay motivated.