Transcripts
1. Course Introduction: Hi, guys. Welcome back
to the next video. In this video, I'm going
to introduce myself, and then I'm going to go with understanding or making
you familiar with the concepts of the topics which you have chosen and the
training you have chosen. Then we're going to see the structure of the training, we're going to see who the people are who are going to take this class, and we're going to see some highlights or values of this class as well. Pretty much we'll start
with my introduction. This is VancaTrama here. I have 20 years of experience in middleware and in cloud
technology as well. For the last 14 years I've been working on cloud technology, and
this Cloud technology I'm covering on
this video is AWS. Now, AWS, this training is going to be all about
working with AWS, understanding the
concepts of AWS. And, you know, working with
real time knowledge on AWS rather than just
an architecture or just have an idea of it. So here I have given you complete hands on
training on AWS, and here you can actually
get the knowledge of the basic AWS to the advanced concepts
of AWS, with hands on. Where I'll talk about a
service and then I run through the service and show you a
proper demo about a service, which I'm talking about
in those AWS classes. This is going to be the structure of the
training, and here, this training is
intended for anyone who is new to AWS
or who wants to learn AWS because we are going to start
from the basics, and if you already have basic knowledge and you are looking for
advanced knowledge, you can directly skip some classes in the beginning, since you already know the basic concepts of AWS, and jump onto the advanced section of it. Now, the values of this
class is basically, you know, as I told you, you will gather a very
good experience on AWS from beginning to the expert
level and the projects will be created later
point of time for you to work out on the
projects and then to get real-time values and information about the AWS items. The next video will be about getting
you started on AWS. I'm really excited to
start the training on AWS. I'll see you on the
next video. Take care.
2. Topic 1 - Introduction To AWS Part 1: Hey, guys, welcome back
to the next video. In this video, I'm going to
talk about cloud computing. But then before we
start this topic, let's just think about, I mean, ourself, when we talk
about computers. What is the first thing
that comes to you when you talk about computers,
your laptops, right? Your mobile phones, your iPad, and, you know,
tablets and stuff? These are all forms of a computer. So when you decide that
you want a computer, like your laptop or something, you go to a shop
and you tell them, what kind of a laptop
you need, right? So you tell them, I want
to buy a laptop for gaming purpose or I want to buy laptop because
I travel a lot, and I want to use
it while I travel, so it should be like having
a long battery life. It should have good
network connectivity, like Wi Fi and stuff. It's going to have like
one TB of storage. Uh, it needs to have an i5 processor, because I don't want to go for an i7 or i9 processor in my laptop because that's going to be too expensive. So I go for an i5 processor, and then I need the Windows
operating system. Likewise, you give specifications
and you buy a laptop, which suits your
budget, isn't it? And also, it suits
your requirements. So this is basically what we
call it as a physical uh, you know, physical asset. So this laptop is a physical one which is occupying a
space in your room, and it occupies that space, and you can access it by
going there physically and typing on the laptop keyboard, or if
you're using external keyboard. So that's basically something
called a physical asset, and that exists on your
real life, isn't it? Now, when you talk
about enterprise, when you talk about company,
you have employees. And these employees will
use system within it. So there are two
kinds of systems. Let's talk about it. I'm not talking about physical
laptops or stuff. I'm talking about two
types of interfaces. One interface is accessed
by your employee, and that interface
will go to a tool where the employee will
access his provident fund. He will access his salary, payslips and stuff. He will apply for leave through HRMS and all those other tools, like the payroll management system. So that is the kind of interface through which the company communicates with
the employee, right? So, um, and then there is another type of interface, the customer
interface, right? So the customer is
accessing your product. Now, how you want to project
yourself to the customer. You will have a website, right? And that website will have information about
your product where a customer goes in and purchase that product from your website. So there are the two different
types of interfaces. Now, when you have such company and you run this kind of
two different interface, one for your employee, one for, you know, your customers, where are
you going to host these? You're going to host
this as part of a system, correct? A computer. So the same thing, what
the company does is, like, rather than you go to
a laptop dealer like Apple or HP or whatever dealer you go to when you are a gamer or when you want to buy a laptop, the company goes to Dell, HP, or even Apple itself. They manufacture servers. IBM also manufactures servers, right? So they go to these manufacturers
and ask them, like, I want to host this
employee base, and I have thousands
of employees. I want to, like, have, like, infrastructure build,
a data center build, and I want to, like,
have servers in them. And they will actually host
my employee related data. And I want to keep it
secure because I don't want anyone else to access
my employee data. I only want to access that. So what do you do? You build
this on prem database, I'm sorry, on prem Data Center. Now, in this on
prem data center, you're going to have
let's go from the bottom. You're going to have
networking interfaces. You're going to have storage
built to store data. Servers, which is going
to have the processing, which is going to connect
with a storage device, if it's external storage device, just going to connect
with your network again. It's going to have an
operating system, right? It's going to have some
middleware components, a database component, your customer data, or
your employee data, so you're going to differentiate these two different
types of data. And then you're going to have
this customer application or your employee
application, right? So these are things which
you manage as a company. Because you are going
to manage this data. So either your customer data or your employee data will
be managed by you. But for managing that, you need a database, middleware, right? And to host this
database and middleware, you need to have
operating system which understands what is
middleware and database, right? And the operating system
needs to run on server, which has CPU, memory, hard disk drive for the
initial operating system, and then components to connect
with the network cable, components to connect
to your storage, external storage devices, right? So it needs to have such items
for it to work, isn't it? So that's where
the company needs on prem database, I'm
sorry, data center. Now, when you have
the data center, there is a lot of overheads. Like, for example,
when you buy a laptop, in that case, what happens
if you buy a laptop, you need to have a place
for you to keep a laptop. So you need to buy a
desk for it, isn't it? You cannot keep your
laptop always on your lap. Your lap will get hot, isn't it? Because when you
process a lot of data, it's gonna get really hot. So you need to buy a desk, and you need to put your
laptop on top of the desk and then provide the wiring
connection for your power cable, and then provide Wi
Fi for the Internet. And then you need to if
you want external monitor, you need to have this extra
cable connected to your HDMI. And then if you need the mouse
and keyboard to be handy, and you want external
mouse and keyboard, then you need to
connect to Bluetooth. So these are something
which is required for you. These are some things
which you need to, you know, accommodate
or allocate. Just buying a
laptop doesn't make any sense unless until you have these equipments
with you, isn't it? So the same thing
for a data center, you need to have space
for the data center. Like, for example, a
data center can be big, can be small as well. So it can be within a
company's premises as well until it is secured from other people's access
direct access. So you need to create a room. You need to give space
for the data center. You need to buy the racks, the chassis, where you put in the servers. You need to buy the
network cables. You need to buy switches
for your network, right? So, likewise, you need
to make arrangements, and you need to keep some
server administrators. Like, you need to hire some
people to manage the servers. If there's a power failure, you need to have
power backup, right? So, likewise, you need
to not just buy servers, but you also need to
buy infrastructure. So that's where AWS is
coming into the picture. AW is going to release some of the workload which
you have over there. Like, what kind of workload you going to get
released from it? So AW has different
services it can offer. So it has this AWS
EC two instance. That's the first
thing which came in. It's called infrastructure
as a service where AWS will take care
of the data center space, the renting of it, wirings, administrations and other stuff. So what they will give is like, they will give you network.
They will give you storage. They will give you
a server to host. Now, what do you need to
bring along with yourself? You need to either if you already have an
operating system license with you, you can bring that. If you're using open
source operating system, you don't really
have to do that. Okay? So if you're using, like, Ubuntu, which is open source, you can just go ahead with
that without paying anything. Or you go to AWS and tell them that I don't have
an existing license with any of the
operating system, but I want to use Windows or Red Hat Enterprise Linux. When you say the
word enterprise, that means there's
a charge for it. So what AWS would give you is like they will install
the operating system, but they will charge
you for the enterprise. Whatever they are charging, they are being charged
by the red hat. So they will charge
you indirectly, and it'll be part of
your billing process. And then you need to bring your own license for your
middleware database, customer data, and
customer application. So what you need
to manage is less, yeah, when you compare with what you had to manage earlier. But it comes with
a disadvantage. Let me just quickly tell
you the disadvantage. You don't own the servers. In one way that's good, because you don't have to buy the servers by paying a lot of money, and you don't have to pay rent for the data center. So that's one good thing, but the problem is that these servers can be accessed anytime by the AWS people who are inside that data center. Normally they will be bound by an agreement. They will not do
anything illegal there. But still, it's not
under your supervision, so anything can happen. So that's something you
should be aware of. So a lot of banking stuff, they don't put their actual private data in a data center, sorry, in a public
hosting cloud like AWS or Azure. So either they will
take a private so they will have the
dedicated host for it. There's an option for that, because all the servers which you're going to use, right, all these servers, by default they are shared, unless and until you ask them for
a dedicated host, which means that
one particular host will be dedicated
for your operation. So we'll talk about that later. It's like it's not
like a simple concept, which are really
advanced concept. But there is a way to do that. So AWS gives you a lot
of options like that. But still, the server is located in a third
party vendor site, which is AWS, which
is Amazon, right? So there are disadvantages, but there are also these advantages. You don't have to
pay for a space. You don't have to buy equipment. You don't have to have
the overhead charge of having an employee
to manage these servers. So you don't have those
overheads as well. So the money is actually saved. And if you're using this for
a trial and error purpose, you can use it for an hour and then turn off the server, and you will not be charged for it after that. All those operations will be turned off and you will not be billed for them while the server is off. So that is a good advantage as well.
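For reference, here is a minimal sketch of that pay-as-you-go idea, not part of the original demo. It assumes the boto3 SDK with credentials configured, and the AMI ID and key pair name are placeholders you would replace with real values:

```python
import boto3

# Minimal sketch (assumptions: boto3 installed, credentials configured,
# and the placeholder AMI ID / key pair name below replaced with real ones).
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small on-demand instance.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID -- look one up in your region
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair name
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched:", instance_id)

# ...use the server for your trial...

# Stop it when you are done so the hourly compute charge stops accruing
# (attached EBS storage may still be billed until you delete it).
ec2.stop_instances(InstanceIds=[instance_id])
```

While the instance is stopped you stop paying for its compute hours, which is exactly the trial-and-error advantage being described here.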
And there's also another advantage: what if I want to take the server and pay in advance? AWS gives you a nice offer if you're paying in advance. Like, for example, I want to take the server for three years. Okay, you pay me three years upfront, and I will give you a very good price. It'll be much cheaper than the on-demand price. So that's very good,
right? So you kind of get deals based on the
regions you're selecting, based on the location,
you'll get deals as well. So a lot of advantages are there in infrastructure
as a service. Interesting to know
about this, right? So on the next video,
we will continue this, and we will talk about
platform as a service, software as a service,
and storage as a service. But I hope that you
understood so forth that how an on premises data center is different from
a cloud computing. Right? Now, why they say Cloud, it's because it's in the Cloud. You don't actually
manage any of those. So it's actually in the cloud. So you need Internet
all the time. But if you are an on prem, you don't need to have Internet. You can directly go to
the data center site, connect the cable, and you can actually access the
item using a monitor. But unfortunately, you cannot do it with a cloud
because it's in the cloud. So you need to have Internet to connect to the Cloud and access the data
from the Internet. So that's the only issue. But don't worry about it.
AWS is basically HTTPS. So all the transaction
between yourself and the Cloud provider is all secured through the certificate, which is using TLS certificates. Much more detail on
the next video, guys. I'll see you in the
next one. Hey, guys, welcome back to the next video. This video, we are going to talk about the continuation to where we left off on
the service models of the cloud computing. Now, we have discussed
about on prem enough. We have also discussed about infrastructure as
a service where infrastructure is let out by a service provider
like Amazon, Google, or you can
say, Microsoft as well. There are other
providers as well. Just because I'm not mentioning them doesn't mean that they don't exist. There are so many providers right now who are providing their infrastructure as a service, and all you have to pay them for is the infrastructure cost, and it could be on an
hourly basis or it could be like set up
on agreed time frame, like one to three years. Now, AWS offers you such
infrastructure as a service, but AWS does offer you even
better than that, which is, um which is platform
as a service. Now, for platform as a service, as you can see over here, AWS provides you till the middleware on the
database level, okay? So you only have to bring in your customer data or
your employee data. And then put it on an
existing application which can run on a specific, you know, code line, okay. And basically, you
can use your data, and it can be run using Amazon EKS, which is one of the examples. There are so many other platform offerings from AWS. EKS is one of the good examples: EKS is Elastic Kubernetes Service, and it provides you the Kubernetes service for your application. So all you have to bring is your application code, and then it can run on EKS. And EKS is a fully managed service, which means you don't really have the overhead of an administrator needing to configure the Kubernetes side of things, because that will be managed for you.
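As a small hedged sketch of that managed control plane (assuming the boto3 SDK with credentials configured and an EKS cluster that was already created; "demo-cluster" is a placeholder name), this is roughly how you could look at it from code:

```python
import boto3

# Minimal sketch (assumptions: boto3 installed, credentials configured,
# and an EKS cluster that already exists; "demo-cluster" is a placeholder).
eks = boto3.client("eks", region_name="us-east-1")

# Names of the EKS clusters in this region.
print(eks.list_clusters()["clusters"])

# Details of the managed control plane for one cluster.
cluster = eks.describe_cluster(name="demo-cluster")["cluster"]
print(cluster["version"])    # Kubernetes version AWS is running for you
print(cluster["endpoint"])   # the Kubernetes API endpoint you point kubectl at
```

Notice that you never install or patch the Kubernetes control plane yourself; you just read its details and deploy your application code onto it.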
So this kind of platform as a service is also being offered to you, and there are so many other things which we will discuss later. In terms of software
as a service, software as a service is
your Google Sheet: it's a very good example of
software as a service where you actually
don't own any of the logics behind having
this Google Sheet running. But then when you go
to sheet.google.com, you will get the Google
Sheet, Google spreadsheet. And all you have to do
is, like, enter the data, save the data on Google, and then pretty much
that's, you know, that's your software
as a service, which means that you don't
really have to indulge in any kind of coding or any kind of developer
requirements, nothing is there. So you as an organization
doesn't have to buy a subscription which needs to be installed
on a system as well. Like, for example,
if you compare Google Sheet with
Microsoft Excel, right? Microsoft Excel right now
has a web version of it, like software service
version of it. But before that, you had to download and
install it on a system, which means that you need to own a computer and you need to have that physically
installed on your system. But right now, you don't really have to do that
with Google Sheet. Why? Because you can go
to Internet Center or browsing center and you can
actually open sheets.google.com, log in with your ID, do
your Excel Sheet work. And then Save and quit it, and you don't have
to even install anything on that system. So you can use it
like temporary basis and then get your work done, and then pack it
up and leave it. The same thing as well for
your storage service, right? So, for example,
in this situation, so there is one
predefined software which is let out for
a subscription basis where you can subscribe for that software by
paying some money, and then you can access it from anywhere around the world, if you just have a browser. And the same thing for
the storage as well. So you go for a
subscription basis, and then all you will do
is access the storage on Google Drive or Dropbox or any kind of
storage as a service. S3 from Amazon is also one of those services, which can be given as an example at this point of time. You can upload and download stuff from anywhere around the world, give anyone viewing access or something like that, and then you can just share the link, and anyone around the world can download it.
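As a small sketch of that sharing idea with S3 (my own illustration, assuming the boto3 SDK with credentials configured and a bucket that already exists; the bucket and file names are placeholders), an upload plus a shareable, time-limited download link looks roughly like this:

```python
import boto3

# Minimal sketch (assumptions: boto3 installed, credentials configured,
# and a bucket named "my-example-bucket" that already exists in your account).
s3 = boto3.client("s3")

# Upload a local file into the bucket.
s3.upload_file("report.pdf", "my-example-bucket", "shared/report.pdf")

# Generate a time-limited download link (valid for one hour) to share with anyone.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "shared/report.pdf"},
    ExpiresIn=3600,
)
print(url)
```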
But you still have to pay a subscription fee for that particular service, and that particular service is from the cloud. The best example is Google Drive, and I always use Google Drive for sharing my documents and stuff like that. You can also think of GitHub, which
also has the same method where we can actually store our
codes and archives in GitHub. And that can be shared as well. So GitHub can also be
considered a kind of hybrid between software as a service and a storage service, because it can be used as both. So, likewise, you have
so many live examples around the world right
now over the Internet, and you can find one of this, at least, which you are
actually using it daily. So these are something which is Cloud enabled service models, and these models
actually gives you a complete representation
of what is the capability of
cloud based systems. I hope you understood this. So this is going to
be very useful for you when you start working with Cloud base so that
you can actually differentiate which is in
the cloud and which is not. Okay? So I think
that by this time, you should understand what is in the cloud and what is not. And just try to
understand the things, the items which you are using currently in
your daily life, see which ones are involved
in Cloud and which one isn't so that you can actually understand the
difference between that. So the best way is to try that yourself and try to
understand. All right. Thank you again for watching this video. In another video, we will see some more of these kinds of live examples, yeah. Hey, guys, welcome back
in the next video. In this video, we
are going to see the advantages of
using Cloud computing. Now, as you can
see on the image, cloud computing
is in the center, and surrounding that, you see a lot of things
going around that. One of the things is
flexibility and agility, so it's very important that, you know, a cloud provider
is flexible and agile. Now, what do you
mean by flexible? So whenever I want, I can stop the instance,
and whenever I want, I want to start the instance, either through scripting, command line or
through the console. So flexibility gives me when I want it and
when I need it, I have that extra power. Now, what is agility? Agility helps me to, you know, run through
my deployments quickly. Like, for example, when I say deployment,
what does it mean? Deployment meaning that I am deploying a server when
there is a requirement. And that needs to
be created fast. So flexibility allows me to increase or decrease the
capacity or scaling it. But agility helps
me to do it faster. So, I will show you in
the hands on how much time it takes to create
and run an instance, okay? Creating an instance can be manual and may take
a lot of time, but an automated way of
creating an instance, say when I'm using Ubuntu instances, means I can create an instance instantly as soon as I see a high load on the system. That's how auto scaling works, right? When I see a high
load on a system, I immediately want to
spin up more instances, right, to handle the requests. So agility helps here for me to create more instances immediately whenever there is a requirement for me to do so.
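To make that concrete, here is a minimal sketch, my own illustration rather than the course demo, assuming boto3 with credentials configured and an existing Auto Scaling group ("demo-web-asg" is a placeholder name). It creates a target-tracking policy that adds or removes instances automatically based on CPU load:

```python
import boto3

# Minimal sketch (assumptions: boto3 installed, credentials configured,
# and an existing Auto Scaling group; "demo-web-asg" is a placeholder name).
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target-tracking policy: keep average CPU around 70%, adding or removing
# instances automatically as the load goes up and down.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="demo-web-asg",
    PolicyName="keep-cpu-around-70",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
)
```

With a policy like this in place, the scaling happens within minutes and nobody has to manually create instances when the load spikes.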
Now, in terms of innovation, a cloud provider lets me add new services. AWS keeps on adding more services and innovating itself. And you can see that, as we spoke about earlier, we saw a new certificate for AI and machine learning. So that kind of innovation
is what I'm talking about. So a cloud partner should never be compromised in
terms of innovation. He should also always
be innovating, adding the new latest
trend in the market. Current trend at
this time is AI. Next one is cost effective. So there are so many
customers who ask me, how can we make this
project cost effective? So as a cloud engineer or
infrastructure engineer, you need to think of a solution
for cost effectiveness. AWS helps you in terms
of reducing the cost. When you compare the
cost and the energy spent on your on-premises data center, and you compare that with the cloud, it is going to be much simpler, because you will only
pay for what you use. Right? If there is no
requirement for you to use, then you're not
going to pay for it. So as an infrastructure
engineer or the cloud engineer, you need to determine how much number of
replicas or the number of service should be running at a point of time when
the load is less. So in that way, you can reduce the cost to the company
in which turn you can actually bring more
profit to your organization. So we are going to discuss a lot about the cost
effective methods and strategies later point of
time in terms of scalability, we already spoke on the
flexibility part of it. A scalable application is one of the topmost requirements for any cloud computing service. Now, scalability comes built in with AWS, and AWS also supports third-party scalability. So if I'm deploying Kubernetes separately, you know, separately from the inbuilt or default Kubernetes engine which is given by AWS, I can still scale that application using, you know, AWS services. So even though I'm not using the inbuilt Kubernetes service, EKS, I can still scale my application using AWS. In terms of security, um, AWS gives you a lot of different ways of
securing your console, your access to CLI, your access to your Cloud APIs, and your access to
the service, right? And server to server
access as well. It is very much secured. The services within the AWS is also using secure protocol
to communicate with each other so that there would be no hacks and attacks
on AWS services. The next one is reliability. Reliability, when I
talk about reliability, the topic comes to my
mind is regions and AZs. Now, by default, any region will have at least two AZs, which means it will have a minimum of two availability zones. That means your data is replicated across these two zones, but the majorly used regions have at least three or more availability zones, which gives a lot of bandwidth
for your reliability here. In terms of developer
friendly, the console, which you're going to see is so developer friendly that you are going to have a good
time around the console. So later point of time, we
will look at the reliability. Whatever we spoke about, we will be actually
looking at it one by one. So don't worry about it. I'm not covering those in detail, but this is just an overview. In terms of hybrid
cloud capabilities, a lot of, you know, innovation has been
done in terms of AWS to give you that
hybrid Cloud workaround. What is Hybrid Cloud having working with your data center
and as well as with AWS? So that is possible. So you can have two data, one secure data on your
Onpromiss data center, and another half is a
public data which can be accessed by everyone that
can be hosted on AWS. So Hybrid Cloud is completely supported in
terms of working with AWS. And then the important thing is the ecosystem
and the community, the number of AMIs, which is called Amazon
Machine Image and the marketplace
which you have and the community support for the marketplace and
the documentation. And people using, you know, the AWS across the world
gives a lot of plenty of YouTube videos and documents and everything else where
people are using this. And you can actually
copy, you know, their success formula to your company and make
your company successful. So this kind of a
community is what, you know, builds a
stronger software. And this AWS is going
to give you that. Right. So these are the
advantages of using Cloud company sorry cloud
computing, especially AWS. Thank you for
watching this video. I hope that this video was
easy for you to understand. If you have any questions, suggestions, please leave
it on the pin section. Your review means everything. So make sure that
you leave a review, and if you are not satisfied
with the training, you can say that on the review. Rather than just rating
it, just say that. What is that you
need to improve? Because I am here to innovate. I'm here to take the training to the next level. Thank you again.
3. Topic 1 Introduction To AWS Part 2: Hi, guys. Welcome back
to the next video. In this video, we are going to talk about the other cloud providers out there. So now you know that AWS is
one of the Cloud providers. There are other providers
out there as well. So like, you have
Microsoft Azure, you have iCloud from Apple. You have Citrix, you have SAP. You have Alibaba Cloud. You have IBM Cloud. And, of course, you have
Amazon web services. Dell has its own Cloud service, HP has its own Cloud services. Likewise, every company around the world comes up with
their own Cloud services and offer customer
with wide range of offers and attractive, you know, free, you know, free services which can be used for training and learning and getting used to
their services also. Um, I've seen Google
Cloud is giving in, like, a lot of discounts
for corporates, like if you register with your, you know, corporate
email address. That's because they want you to experience their cloud; that's the main reason why they are asking you to try it out. And Azure also does the same thing. And with AWS, as you know, we already have a free account going to be
created later point of time. So likewise, you have so
many Cloud providers. But if you see the
major cloud providers out there, it is AWS, of course; they have 34 percent of the market share according to the 2022 report, but it is almost a similar case even now as well. So AWS is still the market leader. But Azure from Microsoft is trying to get to
their, you know, strength because they are
giving in they're giving a lot of offers to get
a lot of customers in and they have
so many solutions, just like how AWS
has services, right? So they also have similar
kind of services, and they have documentation. Like, they are creating a lot of YouTube documentation
so that people are familiar with their cloud cloud, you know, enhancements as well. And then you have
Google Cloud here. Google Cloud is also
getting famous, and they are also, you know,
getting more customers. Majorly, whoever is against
Microsoft Office, right? They go for the Google suite, like Google Sheets and Google Docs and everything else. So with that product,
sell those customers, the Google Cloud, as well. So my last project, we had Google's Google
Sheet and everything. We are against Microsoft Office. Um, so what Google did that is like when we wanted to
modernize our environment, so we went with Google, rather than Microsoft
Azore or Amazon AWS, because we already had
a good contract with Google because we were
using their n suit system. And they have actually given a lot of discount
because we are, you know, not switching to Azore or AWS. We are
switching to them. So that was a real good deal which we took for the customer. And Google is now the
previous projects, you know, whole ID provider
for the Google Cloud. So, likewise, if you have existing ties with
existing vendor, so there's a good
chance that they may give you a good discount
or good bargain on their own cloud rather than switching to
a new cloud provider. Now, do remember
that this is 2022 data. It was a 203-billion-dollar market there. Now, in 2024, it's going
to be much bigger. I will try to get you later
point of time, you know, some statistics related to 2024, how it looks like
later point of time. But then, you know, this is something which
doesn't change so much in terms of market position, so AWS is still the leader in terms of
cloud service providing. Sorry about the
background noise, if you can hear something
something I cannot control. Again, this is what I
wanted to tell you in this video to show
you that there are different cloud
providers out there, but we chose AWS
because AWS is one of the leaders even now
in 2024 as well. Thank you again for
this time and patience. If you have any questions, leave it on the questions section. I'll see you in the next video. Guys, welcome back
to the next video. In this video, we are going
to talk about regions and AZ. So we will first
concentrate on regions, and then we will go
to availability zones. Okay, because those are
related to each other. Now, before we go ahead and
understand regions, let's just talk about
a real time scenario here, which is much, much helpful for you to understand
about that concept. Now, let's start with
our own company. So let's just say that we
have a company of ours. And what we do is that we
do ecommerce business. And I have my customers
around the UK. So my product gets
sold around the UK, majorly London and Manchester. Let's just say, for
example, these are two locations where I get
most of my customers from. Now, I have a office in
London for an example, and I have my data
center on London itself. Now, there was, like, the
severe storm in London, and I have this power outage, and it's been like, you know, for a couple of hours,
more than four or 5 hours, my backup even ran out. So my company website
completely shut down and no user
from Manchester or, you know, from Scotland
or from Ireland. They're not able to
access my website itself. So all around the world, my website is completely down because the service
which was actually hosting on my data center
has been out of power. So you get it, right? So now, after the
power comes back, I connect with my admins, and I try to understand what was the reason why my website
was not available. And the reason was quite clear
like at daylight, right? Because the power
wasn't there and the backup power even ran
out. I had this problem. There are two solutions, yeah, to be very specific, right? So one is, like, I can
add more backup powers, and that's gonna cost me a lot. But that's just
redundancy, right? That if there is
no power problem, some hack happens
or some kind of infiltrator comes
into my company and then shuts down my server, then that's another issue. So just adding more backup power doesn't give me an
actual solution. So what I decide is that I will create another zone
in Manchester, which is, you know, my second place over there because my huge
customer base is there. So I create another
zone there so that both the servers can communicate to each
other and work on a load balancing mechanism
and which, you know, in a disaster
sequence on London, there will be services
hosted on Manchester, which actually can take
care of my website, so it won't be down permanently. So you understand that, right? So this is the example
I want to focus on the AWS concept over here. Now, regions, nothing but the actual physical
locations around the world, and AZs or availability zones are the data centers on that, you know, on that location
on the physical location. For example, take US
East for an example. So there is a region
called US East. I've just put this
across for you guys. So these are regions, yeah. So US East is basically North
Virginia, United States. EU West is basically Ireland. AP Southeast 1 is Singapore. So, likewise, each
region has a name a name which is decided by AWS
by having kind of like, you know, the continent
name and then which side it is in
the west side or east side or central side or
or southeast side, like that. So they decide that, and then they add a number, one, two, and so on, to the region name. So if a particular location has multiple regions in it, they'll be numbered one, two, like that. But so far, it's mostly been one from what I've seen. Within the region, you will have AZs. Now, these are the data centers. So is a region a data center
or is an AZ a data center? AZs are the data centers. AZ basically stands for availability zone, which means that it is a data center.
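As a quick hedged sketch (assuming the boto3 SDK with valid credentials), you can list the regions available to your account and the availability zones inside whichever region your client is talking to:

```python
import boto3

# Minimal sketch (assumptions: boto3 installed and credentials configured).
ec2 = boto3.client("ec2", region_name="eu-west-2")   # eu-west-2 is the London region

# Regions available to this account.
for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"])

# Availability zones (the data centers) inside the region this client talks to.
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], az["State"])
```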
Now, the largest group of data centers is in US East, because a lot of people host their applications on US East, since a lot of people are actually in that zone: New York,
into this particular zone. So that's the reason they have six AZs which means
six data centers. Now, you can go ahead to Amazon website and you can
actually have a look at this. So you go and say AWS and
you can say, like regions. And you will actually get
the list of regions of AWS you going to have over here you can see the global
infrastructure over there, and you will be able
to see the list of regions in AZs which
is part of the world. Like, you can see that
they themselves have divided this into North America, South America, Europe,
East like that. You can also get the
whole world view as well. I'm not sure where, but it's supposed to be
here somewhere, yeah. So when you just click on that,
you can actually see that Entire regions over here. So you have 34 regions which has multiple
availability zones, which is your data center. And then CloudFront,
we will talk about this completely on a
different session. So they have divided this
into multiple regions, as you can see over here, and each is, like, this is North America, this is South America, and
this is Europe. And then this is Asia. So, likewise, they have
divided this region. And, you know, each location has one or
more regions over here. So you can see the date
it has been launched. Like, it's launched on 2009. So you can see that
there's one in Ireland, which was launched in 2007, and you have one in the
London region. And it has three
availability zones. That means that there
is three data center. Now, not much detail
about it, like, where exactly is the location of these availability zones that
is kept with AWS itself, but then they tell you how many availability
zones there are, so that your data can be replicated across multiple availability zones. So if there is a disaster in one AZ, that will not affect another AZ, because they are separated by
a distance, okay? So the disaster wouldn't
happen everywhere. So that's the reason for it. So there are these red
colors which says, like, you know, coming
soon as in those regions. So I can see that the major
cities has a region to it, and each region will have at least two
availability zones. I've seen two availability
zones on most of the regions. So this is pretty much what I wanted to tell you
on this video. And this is pretty much what region and
availability zone is. I hope that you understood the real world example as well. We will not go into
so much detail about this at this moment
because we're just starting things off. So right now, I just
want you to have a very broad view on what
are regions and what are AZs. That's all that is needed right now. Thank you again
for your patience and time. I'll see
you in the next one. Hey, guys, welcome back
to the next video. In this video, we
are going to quickly understand about the services. Now, these are some of the
important things in AWS, which is something
you will learn. And each of these
services has different, you know, meaning
to it and as well as different reasons
why it exists. Now, let's just
talk about services on a whole in a
real world example, and then we will compare
this with actual services. Now, for example, if you
talk about services, like, for example, in
your mobile phone, so your mobile phone has camera. Your mobile phone has Bluetooth. Your mobile phone has the
phone the calling purpose, right, the GSM or CDMA. So that's the whole reason
of mobile phone, right? It has it has it
can play videos. I can play audios. But how can you even
access those features? If you want the camera on your phone, the back-facing or front-facing camera, how do you work with it? Do you just think about it, and then it opens
up? No, it doesn't? So you have applications for
each of those, isn't it? So you have application
for camera. You have application for phone, which actually goes and
dials and then uses the CDMA or whatever the GSM
technology it uses. You have app to enable
Bluetooth, right? That's when it connects
to something, right? So these are apps for
specific purposes, isn't it? The same thing on AWS
is these services. Now if you see a mobile
phone evolution, the camera, the phone, the videos, it didn't all
come in one time, right? If you go ten years back, you would be using a flip phone, which will be used majorly for, you know, connecting
with another person over a phone call, right? But nothing else you can
do with a mobile phone. But then the addons, right? These are something like
addons which keeps coming. You get a new version of camera, you get more mega pixel stuff. So as in when there is a
new advanced in technology, you kind of have that
added over time, the same with the
services as well. So you will see more services getting added over
time into AWS. AWS came up with, like, 100 services before. Now it has more than
400 to 500 services, which is used for
multiple reasons. And these are services which
is added on over the time. And then you kind of see these
many services popping up. And these services has different reason for its existence. Though we may not be covering
the entire, you know, estate of AWS services, we will be touching base on some of the services, looking at them in an overview sort of way. We will understand most of
the services by its name, will understand what
their purpose are. And we will try to I will try to give you some real world
examples so that you can really associate the name of the services and what
they actually perform on a real time scenario
so that you can answer those questions
whenever that question pops up on your exam. Now, do remember that for
our exam, Uh, that just, it said in this document itself, as you can see in
the Appendix A, so it says technology
and concepts. These are technologies and concepts that appear
on your exam. So these are some of
them are services. Most of them are
services, actually. So you can see that the
regions and AZs which we just, you know, did the
overview sometime ago. And there is this
migration of Cloud, and you have network services. You have just so many
services over here, which is going to be
a part of your exam. So you need to know
the differences between these services and
what actually they do. That's all they require
at this point of time because this is a foundational
level certification. They don't want you to know the details or how
to operate them or, you know, some advanced stuff. They don't want you to
know they want you to know some the basic
stuffs about it, so that they'll be like, Okay, this person is familiar
with these services, so I'm going to, like, he
has chosen the right answer. So it won't be complicated. I have seen questions
for professional course, associate level courses, and specialist level
courses, you know? Those questions are
so complicated, but it's not so much over here. So you don't really
have to worry about it. I'm going to cover all
these things for you. I'm going to give you
real world examples so that you can understand
this much better. Alright? So these are some services which uses
for different purpose. For example, let's
take two or three services and try to understand the relevance of each service. EC2 instances: EC2 stands for Elastic Compute Cloud. Why does it say two? Because there are two Cs in Elastic Compute Cloud, so that's why they call it EC2. So what is Elastic Compute Cloud? It's going to give you elastic compute power
whenever you need it. So that's where your servers are going to be created, okay? So that's your reason
for being in AWS, okay? That's the first
service, which is, like, which is the only
reason you should be in AWS because it gives you
infrastructure as a service. This is where you can
populate servers. Then you have the cost manager. The cost manager, as
the name explains, tells you about the cost. And it also tells
about what has been billed so far and what is
your prediction of billing: if you keep using or consuming at this pace, this is the amount it's going to be. That's the prediction, right? So that's all it does. Then billing. What does billing do? The name says it: that's where
you pay the monthly bill, whatever you have, you know, incurred as a cost with AWS. What is elastic load balancer? Elastic load balancer
gives you load balancing capability
for your application, which is hosted on
AWS. Simple as that. Right? So a lot of services here has the reasoning
in its name itself. So again, this video
is just an eye opener. We're not going to discuss
about any of the services, so you don't really have to have that pain or overhead that Oh, we're gonna discuss
about this service now. We're just going to, you know, go ahead and casually look
at these things and see, you know, what other
services available. So I'm just going to
show you that around. So easy too, when you go
to the homepage, right, you will see a lot
of items here, which is, like, coming up
in a modular fashion, okay? So there is this cost
service coming over here, which tells me what is my forecast or prediction
of this monthly bill, what's going to be
at the month end. And it also tells me, like, what is the current cost, which I have to pay
at this moment. So if I deleted all my services now, this would be the amount I have to pay. Okay.
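For reference, the same month-to-date figures that the console widget summarizes can be pulled with the Cost Explorer API. This is a minimal sketch, assuming boto3 with credentials, that Cost Explorer has been enabled on the account, and an example month as the date range; note that each Cost Explorer API request itself carries a small charge:

```python
import boto3

# Minimal sketch (assumptions: boto3 installed, credentials configured,
# Cost Explorer enabled on the account, and an example month as the range).
ce = boto3.client("ce", region_name="us-east-1")

result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-06-01", "End": "2024-07-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)
for period in result["ResultsByTime"]:
    cost = period["Total"]["UnblendedCost"]
    print(period["TimePeriod"], cost["Amount"], cost["Unit"])
```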
Now, it also tells me about Trusted Advisor and some other items over here. This is all coming from one service or another. Okay. We don't need to know those details over
here at this moment. You can also add more widgets. You can also pull in data from different services as well. So those things are
also available. This is your dashboard, okay? This is what you will
see when you, um, Create a free account
and login into AWS. So don't worry about
the free account, so I'm going to tell
you how to create your free account and log in. And I'll make sure that
you don't get billed for, you know, all these
things unless and until you do
something by yourself. But I'm going to give you
all those preparations for you so that you don't get billed and you'll be in the free
limit utilization. But then do remember that there are some items
which we're going to do, which may incur some bill, but you can actually view it rather than
doing it by yourself. So that is something I'm going to tell you. Now,
what are services? Now, these are the services. You can see there are
so many services there. These are just the
opening of it. Each one under this will
have multiple services. If you just click
on all services over here so you can see the number of services you
have here. You see that? There are so many services which AWS exposes directly.
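A tiny hedged sketch of that scale, using the service list that the boto3 SDK itself knows about (this is the SDK's own list of callable APIs, not an official count of every AWS service):

```python
import boto3

# Minimal sketch: the list of service APIs that this version of the boto3 SDK
# can call. It is the SDK's list, not an official AWS service count.
services = boto3.Session().get_available_services()
print(len(services), "service APIs known to this SDK version")
print(services[:10])   # the first few names, alphabetically
```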
And there are also indirect services, okay, which are not displayed because they are indirect and used within other services. Those indirect services are very much advanced,
so don't worry about it. So what you can do here is like, these are some help, actually, which is given from AWS to quickly sort
things out for you. So if you're here for compute, then there it goes to ECT. But if you just click on it, these are some of the options or the services which is
coming under compute, okay? So, likewise, front
end web development, if you're looking for that, these are the services under it. So these are catalogs, you can say, by which services are bundled into it. Okay? So there may be multiple services which are
related to this catalog. So you can actually select
the catalog and see what is actually inside
this as services, which may be related to
this particular catalog. Alright. So, this is the service explorer that's going to show you the
catalog and the services, and this is the easiest way
of getting any service. For example, you want to raise
a service ticket with AWS. Just say service or ticket
or something like that, and it's going to go to service, you know, incident
manager support. You can see that
pretty much these are services which comes through
when you type tickets. So, likewise, you want
to keep some storage. So you just type
storage over here, and you can get
all the services which is related to storage, which is frequently used. So if you ask me, Oh, okay, so what about other
storage services? You can just click on Show
More and you can actually see all the storage services over here because it only shows you, you know, the most used
ones when you type storage. If you're looking for
database over here, so just type database, and you're going to
get the services which is related to
database firstly. And then the features which is, you know, part of services. The features make sure that it's like a
catalog of services. So these are the things
you can do with a service. Okay. So that's the one and then you have resources
where you can actually open resources and read about them and
understand about them. You have documentation which
you can do the same thing. And you have knowledge articles which people ask questions
about a specific, you know, service
and how to do that. And then there'll be
people responding to it, and you will see Amazon's representative respond
to it as well. And then you have marketplace over here where you
can actually see the third party people coming
in from you can see that. This is from data, data Sunrise, you
know, technology. So this is a company which
sells this kind of a solution. So you can see
marketplace as well. You will see Oracle
oriented solutions as well. So when you just look
for the marketplace, a lot of people sell stuff for you guys, okay,
to work with it. You have blog post, of course, where people write blogs
about their experience while working with that
particular service or particular database migration
or something like that. And then you have events. Events are very important because sometimes
the service will be under maintenance
because there are some people doing some
work around it on that. So those will come up, pop up over here for events, and then you have tutorials. Tutorials are the best thing. So it's just like
a documentation, but it is step by step towards
achieving a specific goal. So, for example, creating and managing non relational
database over here. How to do that, it has a step by step how to work with that. So, likewise, the search is filled with a lot of
useful information. It's just that you
have to think about all these available options to you when you are on the
console. The options are there. You just have to think about it. Which option should I go for? Alright? Now, you have this console home
service over here, so you can also add
items like bookmark them over here so you can add
this EC two service. I have done that because I host a lot of
trainings for that. So I have certain services
running over here, which is not
necessarily for you. It would be there. It would
be a clean one for you guys. But then I do something
on training purpose, so I get all these things. So majorly, I'm on the region over here,
so this is the region. So US East North Virginia. Now, I am currently
located in the UK, but why would I select
something like North Virginia, which is, like, a little
far away from me, isn't it? Because it's the cheapest region, where I can host servers for a really low price compared with, you know, hosting them in London. Because London, as you know, the UK is an island, and data center space is limited, so the cost is higher for me. That's why I choose North Virginia.
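A small sketch of how that strategic choice shows up in code, assuming boto3 with credentials: the region is just a parameter on the client, so pointing at N. Virginia instead of London is a one-line decision, while the actual per-region prices are published on the AWS pricing pages and change over time:

```python
import boto3

# Minimal sketch: the region is just a constructor argument -- the rest of
# your code stays identical whichever region you choose.
for region in ("us-east-1", "eu-west-2"):            # N. Virginia vs London
    ec2 = boto3.client("ec2", region_name=region)
    zones = ec2.describe_availability_zones()["AvailabilityZones"]
    print(region, "->", len(zones), "availability zones")
```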
So there are strategic choices which you can actually make here. That's what we're
going to discuss while going further
in this program, as well as on the future
programs as well. So I'm going to share with you how we will do cost cutting, how we will save money as well when we start hosting
machines like this. Like these are like, larger
instances of m5.xlarge. So how are we going
to save money on that those things also
I'm going to discuss. So this is not just
one of training, and then I'll leave
you nothing like that. So these trainings are
going to be followed by more additional
training because I have the experience of working
with multiple customers, and I know the way
they work around with the customer care people
from AWS and Azure as well. So I'm going to tell
you and share you all those interesting things and make it very interesting
for you going forward. Thank you again for
your time and patience. I'll see you on the next video. Hi guys, welcome back
to the next video. In this video, we are
going to identify AWS support options
for AWS customer. So we are going to talk about
multiple options over here or multiple services.
Sorry about that. So you're going to talk about multiple
services over here, which is towards the AWS
customer support option. So now let's go ahead to
our presentation over here. So AWS support option comes with customer service
and communities. Now, here in the customer
service and communities, so we talk about providing basic support for your
all AWS customer. Doesn't matter whether you
are a paid customer or not. It provides you accounts,
billing assistance. It provides you access
to AWS community forums, resources like AWS repost
for community based support. Now it is suitable for account related questions and connecting with
other AWS user for, you know, general guidance. The next one is about
the developer support. The developer support over here, the support option is
going to be about, you know, technical support for developers building on AWS. So, it also offers guidance
on best practices, troubleshooting and
usage of AWS services. It includes support
on primary contact. Sorry, it includes for
one primary contact, which means that
that primary contact can raise support questions. Here we are going to access Trusted Advisor as a best
practice recommendation, which is limited, not fully extensively available
for a development user. Now, in terms of the next one, which is over here, I'm sorry, I got
distracted by a call. So the next one is about
the business support. In terms of business
support, it is a paid one. Again, just like Developer,
it is also paid. But the common the
basic one over here, which is the customer
service and community, that's open for everyone. You don't need to pay anything. So currently, I am in that support plan, the
basic support plan. So if you're a developer,
you can be in that one, and then if you are
a business user, you can go for the
business support. Now, business support helps you technical support for business running on production
environment. Like if you have
production workloads, you can be on business support. Now, it gives you
24 bar seven access to Cloud support engineers
for technical issues. This is directly from AWS. It gives you full
access to AWS trust Advisor and Personal
Health dashboard. It also supports multiple
multiple contacts, and also you can use it on a use case specific
guidance as well. So the business support
is used for or it is suitable for businesses that requires around the clock support for
production application. But if you're planning
that, you can actually run with
the AW support. It's not possible because the support is
limited over here, even though it's 24 bar seven, but the turnaround time is huge. So we will talk about the
architecture on the next video. The you'll understand the
turnaround time as well. So here, the next one is the AWS Enterprise
on RAMP support. So this provides you
the proactive guidance, technical support, and
operational reviews. It comes at 24 bar
seven access to Cloud support engineers
for quick response time. It also helps enterprise
it also helps enterprise to optimize the
Cloud operation and manages critical Business
workload as well. So the last one is the enterprise support,
the complete support. So this means that it has a
comprehensive support for large enterprise with
mission critical workloads. So it also has a dedicated DAM personnel or TAM as technical or co manager
for personalized support. It also proactive monitoring, architectural guidance, and
optimization recommendation. It also comes with
24 bus seven access to senior Cloud engineers. So it comes with
senior Cloud engineers and faster response time. Again, we will talk about the response time on the next video. So use cases for organization
which is deep into AWS and which needs
technical engagement and high availability for
business critical application. So these are the support
option which caters different business
customers for, you know, basic customer service and
the community support to comprehensive enterprise
level assistance for mission critical workload. So pretty much this is what we want to
cover on this video. Thank you again. I'll see
you in the next video. Hey, guys, welcome back
to the next video. In this video, we are going
to talk about support plan. Now, you must be
actually not aware of the support plan at this
moment because you've never cross a path over here. By default, everyone
gets to the basic one. So then you have to upgrade to, um, develop a
business enterprise. Now, these are like three different charges
like 29 per month, 100 per month, and 15,000 per month. What is support plan? Support plan is where you
get support from AWS. Now, AWS has, like,
different plans, which means that you
are support will also be in a different
situation right now. So if you want to go for a developer plan
with $29 per month, you will get general guidance responses within 24 hours and system-impaired support within 12 hours. So that's the entry-level one. Then if you're going for a business plan at $100 per month, you're going to get a response within 4 hours for a production system impaired, and within 1 hour for a production system down. When you go for the enterprise-level support plan, paying $15,000 per month, which is very expensive, you will get critical downtime support within 15 minutes.
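As a quick reference sketch, the fastest response targets mentioned here can be summarized like this; these are the figures as stated in this video, so always check the current AWS support plan page for the latest values:

```python
# Quick reference of the response targets mentioned in this video
# (figures may change -- always check the current AWS support plan page).
SUPPORT_RESPONSE_TARGETS = {
    "Developer ($29/month)": {
        "general guidance": "24 hours",
        "system impaired": "12 hours",
    },
    "Business ($100/month)": {
        "production system impaired": "4 hours",
        "production system down": "1 hour",
    },
    "Enterprise ($15,000/month)": {
        "business/mission-critical system down": "15 minutes",
    },
}

for plan, targets in SUPPORT_RESPONSE_TARGETS.items():
    print(plan, targets)
```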
And it also includes other things as well. So that's the whole point of laying out the explanation
in this way. Now, what other things
are you going to get? Not much for developers. Developers are left
alone in the water because the general
support itself is, like, more than enough. And when you go for a business, you will get contextual
guidance based on the use case in terms of how you can improve
yourself on AWS. And then a TAM team. TAM is like technical account
manager would be there. So who can help you guide you to get your services
and help you. So there's a team
there for you guys. Okay, management
team will be there. Okay. So here, in terms
of enterprise support, access page for online
self based learning. So there is a training session also as part of this
enterprise support, the 15,000 package comes with a self paced lab
sessions and trainings. And then you're going
to have specialized, you know, support
team for you, okay? So that support team will help you on towards all your
queries and stuff. And then you will get
a consultative review and guidance based on your
application and solution. So here, contextual. So here is a
consultative review. So they will review
your infrastructure. They will guide you
what is the best thing, what services you can do and
how much you can save money, all those things,
they can guide you. You will get a
dedicated TAM manager. So you will get a technical
account manager just for your account and that person will be dedicated
for your account. So that's something. So you actually pay them
for a much more nicer, you know, approach
towards your support. So they will be actually
giving you training. They will be actually
giving you support team, and they will also have a technical account
manager who is dedicated, you will have their cell number in case of any kind of issue. So likewise, if you
So likewise, if you want to sign up for one of these, you can go to Support in the console and find the support plans page. It's currently on Basic. When you look at the support plans, you can see that the Basic support plan doesn't have many features, and then you have the other plans over here. Basic doesn't even appear in the comparison anymore; you see Developer, Business, Enterprise On-Ramp, and Enterprise, so everything I told you about is over here. Developer has pretty much two or three items, and beyond that, almost everything else sits on the Enterprise side. Then you have the Support page itself. This is where you will actually be raising cases with AWS. On the Basic plan, you have very limited options; I've raised a service quota increase over here, and that's the reason why I have two support tickets over here. These are the normal, general tickets we can open. Alright, that's all I have to tell you about the support plans. You can see that the Basic support plan is included for all AWS customers, which basically means you can raise tickets, but you will get a response mostly within around 24 hours. Thank you again
for your patience and time. I'll see you
on the next video.
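Since we raised a service quota increase ticket in that demo, here is a small optional sketch of how the same request could be made programmatically with boto3, the AWS SDK for Python. This is not part of the labs; the service code and quota code below are illustrative values (look up the real codes for the quota you actually need), and it assumes your credentials are already configured.

```python
import boto3

# Hedged sketch: request a service quota increase from code instead of the console.
client = boto3.client("service-quotas", region_name="us-east-1")

# List a service's quotas (here EC2) to find the QuotaCode you actually need.
for quota in client.list_service_quotas(ServiceCode="ec2")["Quotas"][:5]:
    print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

# Open an increase request (the code below is just an example quota code).
resp = client.request_service_quota_increase(
    ServiceCode="ec2",
    QuotaCode="L-1216C47A",   # example: running On-Demand standard instances
    DesiredValue=10.0,
)
print(resp["RequestedQuota"]["Status"])
```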
4. Labs AWS Introduction: Hey, guys, welcome back
to the next video. In this section, we are
going to talk about a new, brand new item: AWS. Now, I'm going to give you two options over here. You can use the cloud-based option, which is what you are seeing right now: for this example, we will take AWS, create your instances in AWS, and have those instances run in the cloud. Do remember that this is going to cost you a little extra, which you need to pay for the cloud services from AWS. That's one thing. The second option is a laptop-based one, where you build up your own virtual machine using freeware. That's also another option. But do remember that your laptop needs to have at least 12 GB of memory to start with; with 8 GB, yes, you can manage, but it still struggles a bit, so 12 GB is always recommended. If you don't have that configuration and you have some money to spare, say about $50 to $100 depending on your usage, then you can actually go for AWS. I will tell you how pricey it is going to get later on, but we will try to keep it to a minimum; we'll try to keep it below $50 of expense for you. We will do the price calculation later, and it depends on the requirements you're going to have. Do remember that there are different ways of running Kubernetes on AWS. You can go for a managed or an unmanaged setup, but this training wants you to be focused on, I mean, not on managed: I want you to configure the entire thing yourself, so you should go for something like a self-managed setup, which means you need to manage the instances yourself. So we will be creating EC2 instances, rather than the managed option, managed in the sense that AWS itself manages the instances, which is the scenario where you would go for ECS or EKS. But for this training, you will be going for self-managed infrastructure using EC2 instances, so that you can actually learn about Kubernetes operations. So it's not just
about Kubernetes; we'll be working on Docker also. Our training will be starting from Docker, so we'll start from Docker here and then slowly build up to Kubernetes. So do remember that this training will be starting off from Docker. That's how you can easily build up the server for Docker, do all your hands-on, then add another server and make it part of Kubernetes. You can divide this session into two parts: the first few videos for setting up your basic master, which you can use for your Docker installation and so on, and then later you can add another EC2 instance and integrate the worker node for your Kubernetes cluster, or the worker node for your Docker Swarm. Both integrations are possible later on. I'll let you know on the videos themselves, so that you can do the Docker program right now, then start the Kubernetes program, and then come back, revisit this, and take it on for Kubernetes. If I were to divide this, I would have to create two different sessions, and I don't want to do that because it just creates more confusion. So do remember that this is an add-on; it was not originally part of the design, but I'm adding it on top of an existing program, which already runs with your laptop or a free server, because a lot of people are requesting the AWS setup. So I'm just trying to add this on top of the existing program. Do remember that
AWS comes with a free tier, but it's kind of useless for our session. Let me explain why. The free tier comes with some free usable stuff which is available for 12 months free, so you must be excited about that. But the 750 hours per month, which runs for 12 months, is given only for two of the EC2 machine types: the t2.micro and the t3.micro. Do remember that Kubernetes comes with a default recommendation that you need to have two CPUs and 2 GB of RAM per node, with 20 GB of free disk space. The disk space is not a big deal right now, but the two CPUs and 2 GB are an issue for us, because when you look at the 750 free hours and go into the AWS pricing and the instance types, you will see that the t3.micro comes with two virtual CPUs and 1 GB of RAM, and if you go to the t2.micro series, it comes with one virtual CPU and 1 GB of RAM. That's a bit of a pickle, isn't it? You cannot actually go with either of these. So I would recommend a t2.medium, which has 4 GB of RAM and two CPUs. That's a perfect fit for our requirement.
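If you want to double-check those vCPU and memory numbers yourself rather than digging through the pricing pages, here is a small optional sketch using boto3, the AWS SDK for Python. It only reads instance-type metadata, so nothing is launched and nothing is billed; it does assume you already have AWS credentials configured.

```python
import boto3

# Sketch: compare the free-tier instance types against the t2.medium we plan to use.
ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instance_types(
    InstanceTypes=["t2.micro", "t3.micro", "t2.medium"]
)

for itype in resp["InstanceTypes"]:
    vcpus = itype["VCpuInfo"]["DefaultVCpus"]
    mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
    print(f'{itype["InstanceType"]}: {vcpus} vCPU, {mem_gib:.1f} GiB RAM')
```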
But do remember that the CPU credits earned per hour on a t2.medium are fairly low; it's not a high-powered machine, but it fits our bill perfectly. I'm also looking at other options like the t3.medium to see if it fits our bill, but it's almost the same in terms of pricing because it has a similar configuration, except for a different CPU architecture. I'll just stick with t2 because it is more commonly used in many situations and it is a very low cost for a beginner level of compute, so it's going to be a much lower cost when you run it for a longer period of time. This is what I'm recommending here, and it's what we will be doing as part of the training. First, we will create a t2.micro machine as a master host for Docker, and later we will convert it to a Kubernetes master, and then we will create a t2.medium for the worker node. Alright. So we will start completely with Docker, and then we will go to Kubernetes afterwards. That's pretty much what I
want to cover in this video. From the next video onwards, I'm going to cover creating your free account. I'm going to tell you what you actually need to do to create a free account, and you will, for sure, need a credit card, so have that handy before you go to create the free account. You may ask me: oh, it's a free account, why do I need a credit card? Yes, this is how it works. Even if it says free account, that doesn't mean you will be paying anything at the time of creating or using this account, but AWS wants to check whether you have a valid credit card by doing a test transaction. Once they have done the transaction and tested your credit card, you also need to verify your ID, a public ID like a license or any kind of government-issued ID. Once you have done all the verification, you will be granted access and will be able to use all the other services, like creating instances and so on. Yes, even for the free instances, you still have to do the same verification before you go ahead and do all those things. In the next video, we will see how to create a free account, and after that, we will explore the default settings. The first things you need to do include the billing configuration, because we don't want to miss any billing alerts, right? We don't want to be billed for anything, so we want to set up the billing alert configuration. Then we will go ahead and add third-party authentication as well, so that you don't lose access to your account; we will do the Google Authenticator setup to secure your account further with multi-factor authentication. And then we will be working with EC2 instances, creating an instance and installing the software which we require, and then we'll go about learning the few things we need to do for Kubernetes as well. Thank you again for
your patience and time. I'll see you on the next video.
5. Labs Before Creation Free Account: Hey, guys, welcome back
to the next video. Now, in this video, we are
going to look at AWS free. So on the next
video, we're going to do the free account creation, but then we are
going to understand about the free instance and the free account and the operations you can do after you get this
account for free, is what we're going
to see in this video. Now, I'm just Googling
for AWS free. So this is going to get me
some links towards AWS which give you the free tier. Now, what is the free tier? The free tier is an evaluation offering, so it has a time limit on what you can do and on how many products you can actually use for free within that time. If you want to know the details, you can actually click on this, open it, and understand what is actually free when you open or create an account with AWS. You can see that this is the AWS Free Tier; we call that concept the free tier. This is where you get your hands-on experience with AWS products and services. It's like a demo they give you so that you can understand and work with AWS before you actually buy or use any of their products. That's the major reason. Learn more about the AWS Free Tier; you can actually look
at this right now. Not everything is free; not all the services in AWS are. Do remember that there are free trials, there is the 12-months-free category, and there are always-free items. These are three different things you need to know before you actually proceed. We're not going to look at these three things in detail; it's not something that is going to be asked on your exam, but it's good if you know about them, and if you know the overview, that's going to be very helpful for you. You can click on each of these items and actually see what the free trials are, what is 12 months free, and so on. When you come down a little bit, you get an overview of everything, and you can see the items grouped by tier type over here: featured items, 12 months free, always free, and free trial. You can select those filters and easily see which things are actually on a trial basis. On a trial basis, it gives you a good amount of time to try that product; here, SageMaker is free for two months, but there are restrictions, written in small font over here, that you need to be on a particular kind of instance type. Instance types we will talk about later, but that's the kind of restriction they attach to something they give on trial or for free. Under 12 months free, you have EC2 instances, you have storage, you have RDS, the relational database service, with traditional databases like MySQL, PostgreSQL, MariaDB, and so on, plus OpenSearch, API Gateway, and similar products. These have a limit of either 12 months or 750 hours; you have two kinds of restrictions over here, either 750 hours per month or a total of 12 months. Those are the two options you can actually look at. Now you are aware of which products are on 12 months free and which are on trial. Then there is something
always free. Always free is where, for example, you can use 25 GB of DynamoDB storage, and you have 1 million requests per month for Lambda. These are things which are free always, and they do not come with a time limit. When they say always free, it doesn't mean it's going to be free for whatever usage you have, but it doesn't have a time limit. So even after a year, I can use this 25 GB of free DynamoDB storage, as well as the Lambda allowance; whenever I want to use it, one year or two years later, it doesn't matter, but only the first 1 million requests each month will be free for me. The 12 months free, on the other hand, restricts you: it's going to be free only for the first 12 months since you enrolled for your AWS account, so the timeline starts from there. Trials work on the same basis; while the offer is there, you have to use it. A free trial is, for example, a two-month free trial, but with restrictions applied. The same thing goes for EC2 instances: it doesn't mean you can create any kind of instance when you create an EC2 instance for free. It is bound to the instance types you can see over here. Likewise, you have the 750 hours per month on Linux, so there are also restrictions on your operating system, and then t2.micro or t3.micro are the two instance types that you can use. There's also a note that instances launched in unlimited mode may incur additional charges, so you cannot use unlimited mode over here when you try to launch it. So there are some restrictions
which you need to follow. And I'm going to handhold
you in this whole process, making sure that
you're not being charged for any of this. Okay? So that's going to be
something I'm going to do. Even if you are getting charged, I'm going to tell you how
to restrict yourself, and when you're going
to get charged, you can be literally careful
about working with it. So I'm going to give you some
instructions towards that. But anyway, thank you again for your
time and patience. So let's get on
to the next video and create the free account.
6. Labs AWS Free Account Creation: Hey, guys, we are going to
create a free account here. So most of the items
should be filled by you because I don't want to give
my actual information here. So I would recommend you to fill it up most of
the items over here. This video is very short. So firstly, when you click
on create a free account, so you're going to
get this pop up over here saying you to
sign up with AWS, so you can type in
your email address which you're going to
use for this account. Do remember this is called
as a root email address, which is used for recovery
in case of any kind of, you know, you forgetting your password or anything like that. And this is also
administrative account, which means that you will get 100 percentage access
to the AWS console. So which means that
modification of your name, your primary email address, your password, your
security information, your access to IM. So this is all very much available only for the
root email address. And those are available only
for this email address. Do remember to give a proper
email address where you are, like, having a complete
control over it. So for me, I'm going
to give beyond Cloud AI one@gmail.com because
Cloud AI is already there. So gmail.com, and
then I'm going to give account name here
beyond Cloud AI one. Now, this is going to
send an email to me, and it's going to verify
my email address, then I just need to
give in the, you know, verification code here,
whatever the verification code, which I've got from my email, so I'm just verifying
my email right now for the verification
code, and I got it. So I'm going to copy-paste this verification number; it's a one-time verification code. Then you're going to type
in a password over here, so this is going to be your
password for this account. So I'm just going to give
a random password here, which I'm not going to tell
you what it is, of course. So I'm just typing
in the password. So yes, I do have
a large password. So it doesn't match, actually. So let me just pause the screen. Alright, typed in the password
here, just click on next. Now, here you need to
select what kind of, what kind of, you know, usage you're planning
to do with AWS, like a personal project, yes, because this is for
learning and development. So here you have the full name, which is beyond Cloud AI. So here you need to
give any kind of number of your phone number
or any kind of address. So I would recommend you to give where you live currently. So if you are located in let's just say I'm
in the United Kingdom, and this is my number, let's say. For example, select the region again as United Kingdom, and then fill in the line over here; I'm typing test test test, just giving a random number over here, and clicking on the next step. Now it's telling me the phone number is not valid, so let me add another zero over here. Yeah. So let's just move on to the next step over here. And this is where
you need to give in your credit card details. Now this credit card detail
should be given by you and you need to enter
the credit card detail. So once you have done this step, then it will be going
into your bank website. If there is authentication which needs to be done from
your bank website, you need to complete the authentication
from bank website. After this step, it will
be asking you to validate yourself, which means that you need to provide your real name and give an ID, for example your driving license or voter ID card; some countries have tax numbers or taxation cards, something like that. You need to give those details and then verify yourself. And once that is done, you will get an email
notification saying that your account is
verified successfully. Immediately, you will get it, and then you can start
doing the next steps, and the next screen would take you to the console, so it will go to console.aws.amazon.com. So this is how the account creation process goes. I tried to show you as much as I can, but beyond this is out of my limits because I already have another account, which is what I'm going to use. Do fill in all the information; there is nothing major here, just the credit card confirmation and verifying yourself as a person. So there may be a
question asking you whether you are an
individual or a company, make sure you select
the right option. If you're here just
for learning purpose, then you can select individual because you are here
just for training and, you know, understanding purpose. So you can select as individual, and it is for personal use. Uh, so, likewise, you
can select it and then verify yourself and then go
ahead to the next video, which will be continue
from the console page. Thank you again for
watching this video. I'll see you in the next one.
7. Labs AWS MFA and Budget: Hey, guys. So now that
you've created your account, and let's just say that
you've logged off right now. So now you will be given two different
varieties of access. For example, when you go to Google and type, or in my case I'm using Bing, 'console login AWS', it basically gets me to this website where it says create a free account. Now, rather than create a free account, just click on sign in. There are two different ways of signing in to your account. One is basically getting there like this, where you search for the AWS console sign-in page. Or you can have your own sign-in link, which I will tell you about in a bit; that's the second way, with your account ID. There is something called an account ID, which gets created by default, so you can use the account ID too: rather than logging in with your root email, you can directly log in with the account ID, and there you can give your account name. If you remember, while creating the free account, it asked for an account name, right? I've given 'Beyond Cloud one' because 'Beyond Cloud' already exists. So you can use that and also log in; that's a different method of logging in. Now, which method are we using to log in? We are logging in using the root user. There are two kinds of users you can log in with: one is the root user, the other one is an IAM user. IAM is nothing but Identity
of your AWS service. Now, IAM enables you to
add multiple users to your AWS account so that you can give them
different privileges. Let's just say, for example, I own a company and I have few employees of mine who
is actually working on administration on the websites or application which
is hosted in WS. So I can create individual
logins for these guys and give limited access
towards what they can do in terms of IAM
privileges over there. So whenever they do some activity, they will only be able to do the activities I have allowed them to do, because IAM lets me control fine-grained permissions on what they can actually achieve using AWS. So right now, I'm going to
use Root User as that's the only user available
at this moment because we just created the
account right now, and then we're going to
log into our AWS console. You give you select
the root user, and then you type in the
name of the root user, going to be your own
cloudai@gmail.com. So this is my website. Don't
try to log in to this one. Use your own email address
and log in to yours, and then here you have
to type a password. I'm going to copy paste
the password here. I'm just copy pasting it over here and hitting
the sign and button. Now, never say for this site. Perfect. I don't believe
in saving the password in the browser or in Chrome
because it always gets exposed, not now then later. So now you can see
this is pretty much my console over here. I've visited two
applications over here, which is part of
your AWS services, billing and cost management Im. So on the right side, you will see the account
information over here, so you will have your
unique account ID. Now this is how you can
create your own website. So I'll tell you about
that in a short while. But do remember that
there are some, you know, private information
which is like giving out over here when you
go inside accounts. So be careful when
you're sharing your screen and going
into your accounts. I'm also going to be very careful about sharing my screen while going into the account section. So you can expect this video to get paused multiple times if there is any kind of
exposure of my private data. Now, the first thing you're
going to do is, like, enable two factor authentication or enable a third party
authentication over here. So I'm going to go
to IAM over here. Just type IAM, Identity and Access Management. Under Identity and Access Management, there is a dashboard which tells you what is missing in your account in terms of security; these are the security recommendations. It has found one critical problem over here: you have not enabled MFA, multi-factor authentication. You can use a third-party app to integrate your authentication with, so instead of just using email and password, it will also ask for an authentication code. Now, how else can you
access your console? As I told you, there is another way of accessing your console directly, by using this unique identification. You can also create an account alias, which I'll talk about later. But this is the sign-in URL I was talking about. Using this URL, you really don't have to give the root name. If you use this URL in a different browser, it will come up; let me open a private window here and search this URL, right? You can see that you go in with your account number, and then all you have to do is give 'beyond Cloud' over here as the IAM user name and then the password. If you have created other users as part of your IAM, you can directly enter those usernames and passwords and sign in. So you really don't have to give your root email address anywhere, because your account ID acts in place of the root email address, and you don't have to expose your root email address to anyone; you can keep your root email address very safe. That's the major reason why an account ID is created rather than using the root email address: so that others can access your account without knowing your root email address. Alright, so that is the reason to have this URL, and you can circulate this URL to all of your friends, I'm sorry, your coworkers who need access to AWS.
Now, enabling MFA: for the root user, click on Add MFA. It's going to redirect you to the MFA page, and it will show something like 'root user MFA', which basically says this is your root user's MFA. And here you have
multiple options. I'm going to select
authentication app, and I've not set it up so far, so I'm just going to
do that right now. So install the app on a compatible device. I'm going to pause when it shows the QR code, then verify my account, and unpause once this completes. This part is very simple: open your Google Authenticator app on your phone, click on 'Show QR code', scan that code, and enter the code from your virtual MFA application, that is, your Google Authenticator, once, and then after 30 seconds one more time with the next code. Then click on Add MFA. Once you have done that, you have set it up. So let me pause the screen, get it done, and show you how it looks. There you go. Once you click on that, you can see that your virtual MFA device has been added over here. So it's pretty much very straightforward to add MFA for the user. That's completed.
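Enabling MFA for the root user is a console-only step, exactly as shown above. For the IAM users we will create later, the same kind of virtual MFA device could also be attached from code; here is a rough sketch with boto3. The user name is hypothetical, and the two codes have to come from your authenticator app at the moment you run it.

```python
import boto3

# Sketch: attach a virtual MFA device to an IAM user (not the root user).
iam = boto3.client("iam")

# Create the virtual device; the response carries the seed/QR material
# that you would load into Google Authenticator.
device = iam.create_virtual_mfa_device(VirtualMFADeviceName="beyond-cloud-user-mfa")
serial = device["VirtualMFADevice"]["SerialNumber"]

# Confirm it with two consecutive codes from the authenticator app.
iam.enable_mfa_device(
    UserName="beyond-cloud-user",   # hypothetical IAM user
    SerialNumber=serial,
    AuthenticationCode1="123456",   # first code shown by the app
    AuthenticationCode2="654321",   # the next code, about 30 seconds later
)
```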
If I just go back over here and refresh this page, the recommendation has now turned green. But it is always recommended to create a user and access AWS through that user rather than the root account. That is always
recommended to do that. We'll talk about
that a little later. But as of now, what
we will do is, like, we need to go to account and
set up our billing services. Now, under the accounts, you are one of the
important feature, which basically is your details, which you can see right now, which I will definitely blur
it on the post editing. So yeah, pretty much your details is something
which is under account. On the left side, you will have, you will have billings
and cost management. Let me go back to
the home screen. Um, so we just go to billing and cost
management over here. So this is pretty much what
I wanted to cover, actually; I accidentally clicked on that. So now you have the option of creating your own bill alerts, or a budget. Within the budget option, you will be able to see how it works. Basically, you create a budget, and that's what you're doing here. Then, when your spend reaches 85 percent of your budget, it's going to trigger an alert, which means you're going to get an email. So you need to make sure that you monitor your emails, and then you get to check what exactly the problem is; you can, for example, stop whichever resource is actually driving up your consumption. So that's pretty much
what we want to do. So let's just go ahead and
create a budget over here. And then we would go
with the template, which is simplified method
of creating a budget. And then here is, like, a new budget which talks
about zero spending budget. So you don't want to
spend anything on AWS, so you want to do all
the free tier work. But the problem is that our setup doesn't fit within the free tier. So you go for a monthly cost budget. Here, I'm going to give a maximum of $20, which means you're going to get an email in case you are spending more than, I'm sorry, 85 percent of $20. So here is what was mentioned: you will be notified when your actual spend has reached 85% of the limit. That's the first scenario. The second scenario is when your actual spend reaches 100 percent, so you're going to get two alerts, one at 85 and one at 100 percent. You are also going to get an alert on forecasted spend: AWS forecasts that if you keep going at this rate, you may reach 100 percent this period, and when it understands that you are on the path to reaching 100 percent, it's going to send you an alert. So these are the three different situations in which you'll be alerted, but a budget will never
stop your spending. So if you miss your email, if you miss reconfiguring your spending allocations for your items, it's
not going to stop. Okay, so it can go beyond
that percentage of allocated, but budget will
basically, you know, notify you when something
like this happens, but it will never
put a blockage on your spending or it will not stop the services
or something like that. It will only alert you in case of items like that. Alright, proceeding further down the line, click on email. I would enter beyond cloud one at gmail.com, and if you have additional email addresses, you can always add a comma and put in multiple email addresses. Then click Create budget. This is going to create a budget for you, and it's going to send out an email if it gets to 85 percent of the total utilization. Now, for beginners, you can put in a budget of $10 as well. This will be helpful for you to make sure that you control where you're going and your amounts, where your spending leaks are, so that you can find those weak points and then fix them.
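For reference, the same kind of monthly budget with an 85% email alert can also be created from code. This is a hedged sketch using boto3; the account ID and email address are placeholders, and the console template we just used is the simpler path.

```python
import boto3

# Sketch: a $20 monthly cost budget that emails you at 85% of actual spend.
budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "monthly-20-usd",
        "BudgetLimit": {"Amount": "20", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 85.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "you@example.com"}
            ],
        }
    ],
)
```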
That's pretty much what I wanted to cover in this video. We have covered a couple of items: enabling two-factor, or multi-factor, authentication, and integrating our AWS account with Google Authenticator. The second thing you have seen is my account information, which I wasn't expecting to show, but that's going to be blurred; anyhow, thank you. The next thing you have seen is about the billing and
the cost management. So that's something like a
budget we have seen right now and which we are going to
get an email alert if, you know, the budget
breaches our expectation. Thank you again. On
another interesting video, we will see setting up a user on the next video so
that we can access the AWS using our
limited privileges rather than all privileges
which we have right now. And post that, we will be
working with, you know, creating EC2 instances
and looking at the budget, how it's going to be by having
that running for, like, like next 4 hours or 8 hours, how much cost it's
going to come for us and will we be ready for, you know, taking that
cost or taking that hit? So that's something
which we will see on the next few videos. Thank you again
for watching this. I'll see you in the next one.
8. Labs Over All Review Of Services Covered in the Domain 1: Hey, guys. Welcome back
to the next video. Here in this video, I'm going
to show you a little bit of tour on all those
services which we discussed. So we have discussed
about a lot of services. Now, I just wanted to give you a lab session on
accessing these services, just going through some of the, you know, the basic things about these services and stuff. So we're going to go
through that and we are going to find out or guess the actual pillar of well architected framework
because these are services which is I
think you would remember much better if I just show
you the pictures of it. That's the reason why I
do this on the end video. On the previous this video, there is a cheat sheet that has all my run about all those theories of whatever I discussed with
you is all over there. I have mentioned it
with example as well, so please make use of that and try to get ready
for your examination. Now, do remember to leave your reviews and everything
because that makes this course different than
others because I am trying my best to make sure that everyone goes
through this program, does the theory session as well as the hands on
session continuously, and you can see the
efforts I'm going to put in more going forward. So let's start this video.
Now you can see that. These are the
pillars and some of the services I've opened it because my Internet
is a bit slow, so I've just kept it open
already for you guys. I'm in the dashboard right now, so this is your console, home region, US hyphen East one. From the URL itself, try to read information because your URL tells a lot
of things actually. I'm in the North
and Virginia region over here, US East one. Now, this is the first thing I want to show you is EC two. This is more often we would
use later point of time. Though we have managed services, but EC two is, like, the more frequently we'll be
using later point of time. You can see load
balancer over here, which is, again, a feature
which we have seen. So if you had to guess it, the EC two is
performance efficiency. We have auto scaling, which is, again, a feature over here, auto scaling groups over here. See load balancer, which is
all part of EC two instances. Now, I can create instance. I can start running by
launching instance. I'm not going to
do that right now, but there is going
to be a special training for you
later point of time. Autoscaling groups
enables you to create more service by scaling them up and down based
on some trajectory. So some kind of a
monitoring value we would give and then
create some group, and then beautiful thing
about AWS is that, you know, every feature you go,
every service you go, you kind of see this kind of information where it gives you a pictorial representation of what you can achieve with it. And this one picture can
tell a lot, actually. It tells a huge amount
of story over here. So this picture will give you a clue what this service
is all about, isn't it? So you can see that minimum
size, desired capacity, maximum size, and this is basically scaling
out as needed. So this is picture perfect. You have the load balancer
feature over here, and that's also something
which we saw on this. Let's see where is load
balancer over here, you have reliability under that, you have load balancer. Likewise, you have
features like that. My internet is a bit slow, someone I guess is
downloading something, so that's the reason
why it's a bit slow. Please disregard that.
Now this is RDAs. This is database
managed service. I can say one click, you can create a database. You don't really have to
install an operating system, you don't have to
install the software. Uh, no need to configure
firewalls and, you know, the settings
and everything. Just click on it and you
will get a database. So this is so good, isn't it? That's why we call RDS a managed service. And then you have DMS, which is your Database Migration Service over here, for effectively migrating your on-premises database to a cloud database. This is Route 53. As you can see over here, you can enter a domain name, you can transfer an existing domain to Route 53, and you can actually register a domain. You have to pay some amount, and basically you can have your domain registered. There are two hosted zones over here, as you can see, because I was using OpenShift earlier, and it has actually created two zones; as you can see, OpenShift uses Route 53. So I'm paying for this until I remove these things, because when you have any kind of DNS name or anything managed by AWS, you're going to pay for it. But it's not a huge price, I would say; you can see that in my bill, right? So you have the
classification over here. And um it's not much in
terms of, uh, doing this. Yeah. You can see
that Route 53 $1. So it's not much of,
you know, money. I had to pay for it. This
is the load balancer; currently, I have a load balancer for OpenShift, so you can see that. Here you have the IAM dashboard over here. This is your Identity and Access Management. In the next section, we will be dealing with security and compliance, so we will be visiting here, and we will work with AWS Organizations, IAM Identity Center, and all the other options over here in terms of access management. So I think in the starting video, when
you created the account, I have told you how to use MFA. So you can see that
I have enabled MFA. MFA is very, very important
because these days, everything is getting hacked. The only thing which can
save you from hacking is actually give you a
little more time is MFA. So that's very much useful. And then you have KMS. We have also
discussed about this, I guess in security, I guess, KMS key management
system over here. I can see over here, this is a way of
creating keys and controlling encryption
across AWS and beyond. So if you are using keys in your company with the root
certificates and stuff, so you can still store it over here. That's why I say beyond. Um which means that it is a
fully managed key service. So normally, um, uh, I have used, um, key service previous
to previous projects. So we have used there's a
company Tavat or something. So they have this
key service as well. So we used their service, and we have actually had their system like the key management system,
it's like a server. We have put it on
our um data center, and we have worked on that. I have experience working
KMS service as well. But, um but this is much more easy and
you don't really have to have the overhead
of holding a server or buying a server from
any kind of a vendor. Um, yeah, it is Talus. Talus is the, you know,
middle man for this, which actually we
actually contact Talis, uh, for the service and stuff. So they are the
ones who did that. Um, sorry about that.
I got the name wrong. Um, so here you have the
centralized key management. So this is basically your
HSM Cloud HSM over here, which basically holds
all your keys and registries and everything else related to your certificates
and stuff like that. So you can just create and
you can actually work on it. So the next one is the Elastic
Container Service and EKS, which is Elastic Cubont service. Container Service more
or less uses containers, and it basically has, you know, you can create
clusters on cluster. This is again, managed service. And here you have
EKS, which is again, managed service, which
uses cobonts over here. Now you can create
clusters over here. We'll talk about that
in advanced session, but it is so interesting. As I have open shift, I have good knowledge
on open shift as well, so I may be creating a
training on that exclusively. I have other trainings which
is involved in open shift, but then mostly on middleware.
So that's how it is. Um, over here, I have the
billing and cost management, so it just tells me
about the billing. I have some bad news this month. I have to pay a lot of bill over here because thanks
to open shift, so it just created a lot
of instances for me. And yeah, it did give a good experience of, you
know, working with it. Uh, because I completed more trainings on open
shift using other, you know, middleware products. Over here is the Well-Architected Tool. This is basically what we learned so far, right, the pillars. You can see that that
itself is a service. Now, you can see that
how it is going to work. Now, you put in or identify
your workload to review. AWS well architecture
tool will take into consideration of your
operational excellence, your security, reliability,
performance efficiency, cost optimization
sustainability, and it's going to give
you a report out of it. So that report is going to be all about the pillars of AWS, and it's going to
show you what kind of workload you're going
to be involved in. So what kind of possibly how
much would be your savings? All those things it
will be looking upon. Then you have the
billing dashboard here. So this is basically, you know, how much percentage of computer I've used and all these other items,
which is over here. So it tells me about
complete billing, and that's another
service over there, and you have the
pricing calculator. This is a super tool; I always use this tool before I do anything on AWS. This tool lives at calculator.aws, so you can directly access that and come to the AWS Pricing Calculator over here. Just click on Create estimate, and it basically asks what you want to do. There are so many surprises here; I will not spoil them, but when we come to that session, we will review this pricing calculator before we configure an EC2 instance. There are a lot of surprises over here. Here you choose the region, here you choose the name of the region, and here you find a service: an EC2 instance, Lambda, or any kind of service which is offered by AWS. Then you say that you want to view the summary, and it basically asks you for more details, like what you are planning to do and how many hours you are planning to use. You give all that information, and it tells you two costs over here: upfront cost and monthly cost. The upfront cost tells you, if you pay upfront, how much discount you are going to get, and then there is the monthly cost. Here it also shows a total 12-month cost, saying that for the next 12 months you will be paying this much. So it's a very useful tool, and you definitely need to use this tool before doing any kind of activity on AWS, so you can actually understand what's going to happen to your billing cycle.
The next one we want to see is CloudFormation. We have spoken enough about it; this is part of your automation story, and CloudFormation comes in over here under operational excellence. This is the tool where you can actually create a sample template and basically get started. What this does
is like automation. So you have a YouTube
video as well from AWS, and you can actually watch that video and understand
how to create it. But don't do it right
now. Right now, you focused on getting
your certification. So let's do that. Let's go one by one, because I myself will do a training
on cloud formation. And we will discuss about it then and there
because it could be a long way ahead and we don't want that
complication at this moment. But we will
eventually get there. Thank you again for
watching this video. I hope that you understood
a little bit of, you know, hands on towards whatever
we have seen so far on the Domain Cloud concept. So these are some of the
things which we're going to review in much detail
later point of time. And some of these is going to be like game changer for you guys. If you're new to AWS, you're going to be like or this is something I should
have known earlier. So this is what I
felt when I got into AWS and I'm so happy that I
was able to teach you that. Thanks again for your
patience and the. And if you do have
any questions, please leave it on the
questions section. I mostly respond on the weekend, so I'll do my best in responding to you as
early as possible.
9. Labs Automation (AWS CloudFormation): Guys welcome back
to the next video. In this video, we are going to talk about the benefit of automation. We are going to see some architecture diagrams, provisioning, and automation concepts, and as well we're going to get some hands-on with CloudFormation. Now, we're not going to create any CloudFormation stack, but I'm just going to tell you about the format of it. The picture here represents
the automation feature. Now, do remember that it
is very important for the economics of cloud is that you involve less
of a manual activity, more of automation and
innovation on your project. Now, to implement automation in terms of infrastructure, you need to have the proper tools for that. The first tool which you need as a service from AWS is something called CloudFront, I'm sorry, CloudFormation. This particular service helps you create servers autonomously using CloudFormation. It uses a template. Now, if you just go to the
documentation of this in the AWS documentation
under cloud formation, you can see how cloud
formation works. Here, you have to create or
use an existing template. There are example templates available in this
documentation itself, and you basically save it
locally or in an S3 bucket. Locally means you create a file on your laptop and then upload it to CloudFormation. Or you can put it on S3, which has a cost attached: you have to pay something for what you upload to S3, so that's something you should note. But saving locally will not incur any cost. So you can save, for example, this JSON file on your desktop and upload it when you are creating a CloudFormation stack; that way you don't have to incur any cost at that time. Now, what it will do is this: CloudFormation will use the stack definition which you have created, and it will start building the stack. It will use a template which is in the JSON file format or the YAML file format; as you can see, these two file formats are supported, and you can upload either one. It basically recognizes whether it is JSON format or YAML format, loads that template into CloudFormation, and then it actually shows you the progression of creating the stack. Now, you can also click on
View Getting started. So this will go to
the documentation, again, the same page
which I was in. So this has a YouTube video
getting started and then how to do a cloud formation, sign up AWS and best practices and working with templates and
stuff like that. So that's basically what
I want to show you. And a cloud formation will
help you create stacks. Now, what is a stack? A stack is basically a set of service and server related items. For example, when I used OpenShift to create services on AWS, OpenShift did not just create EC2 instances; it created VPCs, it created load balancers, it created Elastic IP addresses and assigned them to my servers, and the servers had network interfaces attached to them, and so on. The stack involves building up that entire stream of workload, which enables me to work on a project. That is what a stack is. So when you create such a stack with CloudFormation, it reduces the manual work of people going to each service and doing the manual creation of these VPCs and Elastic IPs, getting all those load balancers created, and getting all those EC2 instances hosted. Within 30 to 40 minutes, it will create the entire infrastructure and have my application hosted. And that is the power of CloudFormation.
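Just to show the shape of it, here is a hedged sketch of a minimal template and how a stack could be created from it with boto3. The bucket name is a made-up example, and as mentioned in the video you don't need to run this now; it only illustrates what 'template in, stack out' means.

```python
import boto3

# Sketch: a tiny CloudFormation template (YAML) that creates one S3 bucket,
# passed inline to create_stack. Stack creation runs asynchronously.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal demo stack - one S3 bucket
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: beyond-cloud-demo-bucket-12345   # must be globally unique
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="demo-stack", TemplateBody=TEMPLATE)

# Wait until the stack finishes building, then print its status.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")
print(cfn.describe_stacks(StackName="demo-stack")["Stacks"][0]["StackStatus"])
```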
Now, you can either use CloudFormation in that sense, or you can use Terraform. Terraform is another third-party tool; its maker, HashiCorp, has now been acquired by IBM. So you can use either the CloudFormation or the Terraform tool; these two options are out there, and you can use either of them to create such an infrastructure. As we are doing the AWS certification, and the AWS certification strictly expects us to work with CloudFormation rather than Terraform, we will stick with CloudFormation. So that's pretty much what automation is, and what's
the benefit of it and how to even achieve that. I've shown you the
architecture as well. So that pretty much sums up
this particular session. Thank you again for
watching this video. On the next video, you'll see
about the managed services.
10. Labs Managed AWS Services (Amazon RDS, ECS, EKS, Dynamo DB): Hey guys, welcome back to the next video. In this video, we are going to work on this particular concept: identifying managed services, managed AWS services, for example Amazon RDS, Elastic Container Service, Elastic Kubernetes Service, and DynamoDB. We will look at each of these services and how it actually works in terms of the hands-on. In terms of architecture, I just want to make
you understand the difference between a self managed service
and a managed service. You can call it like a partially managed service as well or, you know, not managed
service at all. In case you are hosting an application in your own data center using a virtual machine, firstly your VMware administrator would provision a server instance, and then the installation of the operating system would be done by the Linux admin. Once the operating system is installed, the Oracle database administrators will install the Oracle software on the server once the network firewall work is all completed, and then you insert the application data or your customer data into the database. Then, as the database administrator, you'll be able to access it, and the applications would connect to the database and start their transactions. This is what a typical self-managed service looks like. If you go for an AWS EC2 instance, there are two items which are taken off your hands: one is the virtual machine creation, as well as the operating system installation. These two things will be done by EC2. EC2 will get you the proper instance type and the operating system, and then it will give you a key file, and using that key file you can access it directly. The Oracle administrator will then come in, install the Oracle software, sort out the licensing and so on, insert the customer data, have access to that application, and then the third-party applications would connect to the Oracle database. So this you can call a partially managed service, because partially it is managed by EC2, where the server creation, server allocation, and operating system installation are all done by Amazon itself. Now, we talk about
managed services. So there are these many managed
services available in AWS. Only some are displayed over here, but there are a lot of managed services available. We are going to understand RDS right now, the Relational Database Service. Now, as we have the Oracle database in bold here, all we have to do is go to RDS, say that I need a database, and say that I need an Oracle database. Okay? Oracle database comes with self-managed bring
your own license. So you have to manage that
licensing part of it, so that, you know, it's always told by AWS
at the time of creation. And all you have to do is like, once you hit the Create button, you will have an Oracle
database which is ready. All you have to do is
find a way to access it, insert the customer data, use the administrator
to work with it and have the applications connected to that
Oracle database. How easy and simple
it is, right? So from the server
allocation, server creation, operating system installation,
software installation, all those things
will be taken care. Okay, you don't really have
to do or worry about that. You have at the
time of, you know, the creation of Oracle database, you may have to choose what kind of instance you want
this to run on. Because based on the instance,
you will be charged, and then based on the
software as well; the licensing part of it will be charged along with that. What it will basically do is give you the processing power, and the software component comes pre-installed. All you have to do is start inserting the customer data, and the applications should be redirected to the Oracle database to work with it. No need for firewall configurations or anything like that, because it already comes pre-configured: at the time of creation, you would have told it whether this needs to be accessed from an existing VPC, or whether it needs to create a new VPC, or whether it will have public access. All those things you will see right now. So let's do that. Let's create an RDS database.
with that process. Just type RDS over here. You can see that relation
database service over here. Just click on that. I always do a middle kick so that
it opens on a new tab, and as you can see
over here, Uh, Aurora IO Optimized is a new cluster storage
configuration that offers predictable pricing for
all applications and improved price performance
up to 40% cost savings. So there's a new introduction
to a new service over here, and that's basically
talks about it. So here we don't have any kind of database instances
available right now. Now you can see that
pretty much this is your database
related stuff, okay? So now you can also take
snapshot of the database. You can also create a reverse
instance for your database, and then you get to create
database cluster up to 40. Now, go to database, and you can click on
Create New database. Now, you can create a
standard create or EC create. Es create is much easier. It will ask you less questions. Standard create is
much recommended because it will ask you
some good questions, actually, which is
really required for you. So you can go for MariaDB MSQO and then you can go
for Oracle as well. But there are certain databases which requires you to
bring your own license. So you can see that
IBM DV two says that licenses bring your own or
through AWS marketplace. So you have to click on this if you want to
know about what is the charges for you to use this DV two licensing.
You can see that. When you go here, it will open
up the DV two marketplace over here and you can actually
understand the cost of it. So cost is 1.04 per, I guess, per unit. Sorry about that. So
this is a hard license. Yeah, it is 1.04. Per hour. So that's going to be on two virtual CPUs over
here on one BPC, which is going to
be two threads. So if you're going to
add more virtual CPUs, you got to pay more, actually, so because it is also based upon the license is
based upon hourly basis, but then it is calculated into whatever the number of
PCs you have over there. So if it's going to be four VPC, it's going to be 2.08,
that's how it's going to be. So here that's the
pricing model over here. You can also read
some description about what this is about, which version is going to
be install as part of this and all those things,
you can see it. So by default, if you're going to say through
marketplace and the billing is going
to happen and you will see the final billing
per month utilization, monthly cost is 884 USD. This is including the DB
two instance, storage, and the provisioned
Iopsipposecon. So, likewise, you
have the option of choosing some of these
items and then disabling it, but you should see that
provision Iops over here. So if you really don't want
provisioned IOPS over here, you can go for general purpose SSD, which will, again, save a good amount of money
over here for you guys. But according to
the recommendation, you should have
performance oriented Iops which will give you
greater performance. Another thing which you will
see over here is sizing, the instance configuration,
the instance type. These are preconfigured
pre loaded instances. I'm sorry, I just
lost where it is. Okay, so these are
some of these ones, so you have only some instances, which are custom configured for this kind of a database setup. The family name is 6i, but it says db at the start because it's custom made for your RDS, for hosting a database. So the family name is 6i, and this is a large. Okay. So you have an 8xlarge. You have 6idn, and you can see that it comes with a network of 25,000 Mbps with instance storage of 1.9 terabytes of NVMe SSD. Now, when you select
something like this, your cost is going to be high, literally going up to $6,983. So, likewise, you have Oracle as well, and what are you going to get
at the end of it? So this is the only
option over here for Oracle: bring your own license. And what is the end result of this? What happens when I select the Multi-AZ cluster option and then I hit the create button? So this is 11,386 USD per month it's going to be, because the provisioned IOPS is about 900 and the DB instance is about 10,000, because I have chosen the highest version of that. That's just the reason. So here I can choose something
which is of a lower size, so I don't get charged
so much for it. So this is 1,400, still it's high. It's because of the provisioned IOPS. So these are things which you need to work out, and now it's about 496. So I don't think we can reduce it any more than that. By creating a DB instance without a cluster, we'll actually have this reduced. I'm going to go for Dev/Test, single DB instance. Now, this should get us to $176. You can see that. The price is reduced because we are not going for clustering, and that has reduced the cost to the minimum. So this is the version of MySQL. So now, what happens is that
in these questions, right, there's going to be another
question on whether you are going to access your database through public access. Normally, we say no, but, you know, in your situation it could be yes as well. And you can also see the existing VPC is used over here, so the IP range can be the existing one. You can also create a bastion host and then access this particular server through the bastion host as well. So that is also another option if you have set public access to no. Then here in the database authentication option, you can use password authentication, password and IAM database authentication, where you can create an IAM user for it, and Kerberos authentication as well. So likewise, you
have multiple ways of authenticating
within the database. And then here the performance inside will give you
seven days free trial. You can just say that if you don't want the
performance inside, and then you can hit
the create button. So what's going to happen
when I hit the Create button? You're going to have a database which is ready, and all you have to do is access that database directly and start working with it.
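By the way, if you ever want to script that same create step instead of clicking through the console, a minimal sketch with Python and boto3 could look like the following. This is just an illustration, not the exact settings from this demo; the identifier, password, class and storage values are made-up examples, and running it starts billing just like the console button does.

```python
import boto3

# Assumption: credentials and a default region are already configured locally.
rds = boto3.client("rds")

# Create a small, single-AZ MySQL instance (illustrative values only).
rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql-db",       # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.t3.micro",              # small class to keep cost low
    MasterUsername="admin",
    MasterUserPassword="ChangeMe-Str0ngPass!",  # use Secrets Manager in real setups
    AllocatedStorage=20,                        # GiB of general purpose SSD
    StorageType="gp2",
    MultiAZ=False,                              # single instance, no clustering
    PubliclyAccessible=False,                   # keep public access off by default
)

# Wait until the instance is ready, then print its endpoint.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="demo-mysql-db")
desc = rds.describe_db_instances(DBInstanceIdentifier="demo-mysql-db")
print(desc["DBInstances"][0]["Endpoint"]["Address"])
```

Deleting the instance afterwards is just as important, because the hourly charge keeps running until you do.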
I'm not going to do that right now because we are not at that stage, and neither do you have to do it right now, because these are all cost incurring items. Even on the free tier, please be careful when you hit the create button, because that's going to incur some basic charges; at least an hourly charge is going to kick in. That's pretty much what I
want to show you towards the RDS one, in terms of working with RDS. You have other services: ECS, which is the container service. You have EKS, which is the Kubernetes service, and then you have DynamoDB. Again, this is a NoSQL database, so that's also managed. So these are some
of the things which we are actually
asked to check out, you know, in our course. So let's just do a checking. So container service is
using Docker container to create containers and host our applications
into the container. Okay, so it will create a container infrastructure
when you create the cluster. So here you specify what
kind of container launch type you want: whether you want a serverless container or an EC2 container. EC2 means that you can actually select the type of instance. Fargate is a serverless container, which actually gives you zero maintenance overhead, and you don't have to worry about the instances, because it can also have Fargate Spot capacity. So that's going to do that. When you compare the
Kubernetes services, so this is where you
have Kubernetes items over here so you can either
register or create a cluster. I mean, the existing
cluster can be registered in uh, the EKS. And here you give the
version of Kubernetes, and then here you give the other details about the Kubernetes API, and then click on next, and then there is the network specification, configure observability, and then some add ons, and then you can create your EKS cluster.
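For reference, that same create-cluster step can also be scripted. A minimal sketch with Python and boto3 is below; the cluster name, role ARN and subnet IDs are placeholders you would replace with real values from your own account, and the Kubernetes version is just an example.

```python
import boto3

eks = boto3.client("eks")

# Placeholders: a real IAM role for EKS and at least two subnets are required.
cluster_role_arn = "arn:aws:iam::111122223333:role/eks-cluster-role"   # hypothetical
subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]                    # hypothetical

eks.create_cluster(
    name="demo-eks-cluster",
    version="1.29",                          # example Kubernetes version
    roleArn=cluster_role_arn,
    resourcesVpcConfig={"subnetIds": subnet_ids},
)

# Cluster creation takes several minutes; poll the status.
status = eks.describe_cluster(name="demo-eks-cluster")["cluster"]["status"]
print(status)  # e.g. CREATING, then ACTIVE once it is ready
```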
So these are different ways of doing it. But anyway, we will see it in detail later. And here in terms
of the Dynamo DB, we have the Amazon Dynamo DB. This is a fast and
flexible NoSQL database, which is very
useful for scaling. To create one, it's a very simple create table: you basically give the name of the table and whatever you want as the table default settings, so you can do that, and then you can create a table. So this will be a NoSQL database, so you don't need to run any kind of a, um, SQL engine under it. You can also export data to S3 and import data from S3 as well, to be part of the tables which you're creating as part of DynamoDB.
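Just to make that concrete, creating a DynamoDB table with default-style settings takes only a few lines of Python with boto3; the table and attribute names here are made-up examples.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a simple table with a partition key, using on-demand capacity.
dynamodb.create_table(
    TableName="Customers",                                    # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "CustomerId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "CustomerId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",                            # no capacity planning needed
)

# Wait for the table, then write and read one item.
boto3.resource("dynamodb").Table("Customers").wait_until_exists()
dynamodb.put_item(TableName="Customers",
                  Item={"CustomerId": {"S": "c-001"}, "Name": {"S": "Alice"}})
print(dynamodb.get_item(TableName="Customers",
                        Key={"CustomerId": {"S": "c-001"}})["Item"])
```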
All right, pretty much, that is what we wanted to cover in this video. So we have seen all
these services as well. Um, thank you again
for your patience, and if you have any questions, please leave them in the questions section. Don't worry that I'm not going deep into it, because it's not required to go deep for the cloud concepts. So we are going to understand the concepts of Cloud, and this is what we are doing. And this is what the questions will be all about. They will be asking you questions like, um, is DynamoDB a managed service or not a managed service, something like that it's going to be. So you have to know
some of these names, some of the concepts here so that you can answer those
questions appropriately. Thank you again for
your patience and time. I'll see you in the next one.
11. Labs Rightsizing Instance Type: Hi guys, welcome back to
the next video. In this video, we're going
to talk about right sizing. It's one of the
topics over here, which is required for
your certification, called understanding the concept of right sizing. When you look at this video, I have the picture
created for right sizing. This picture implies
that you have to have the right size for the
appropriate environment. For example, if
you are running on a wrong instance type for an application which
requires more or less, you either you overspend on the infrastructure or you underspend on the
infrastructure. Understanding the right size for you is going to be a key. As a AWS administrator, what are the key
indications and what is the instance type and
how can you determine? All those things is what we're going to see on this video. In terms of how to determine, you have to run something
called a load test and that load test will determine whether this instance
is right for you. But then what are the
instance types available? And how can you understand
the instance type? Because when you go over here, you go to EC two instance, the first thing you're
going to see is that the name and tag second thing
is the operating system. Now, here, the application
of the operating system, you decide the operating
system, and that's something which you have seen on
the previous video. Now, the third step is going to be about your instance type. Now, instant type is so
critical and important because you can either underspend
or overspend at this level. This is where you actually have to make a critical
decision about your size of your or your
system infrastructure of where your application
is going to run on. So before you decide
this, you have to take an input from your
application engineer who has, you know, designed application, who knows about the application, who has worked on
the application, and he knows about the uh, you know, what is the
CPU requirements, what is the RAM requirements, whether it requires a GPU on it. So likewise, you need to
understand a little bit from the application standpoint to
see if that fits the bill. Now, to decide upon this, you have to choose from a lot
of family of instance type. You can see that there are some, which is keep on going over
here. Sorry about that. So there are something
which is blocked, which is something which is not enabled for you.
There's a reason why. And there are some items
which are so expensive that the Linux base price itself is 407 USD per hour. So it's literally $407 per hour. Um, there are the cheaper ones where you get it for 0.0081 USD per hour. There are some things
which is available for free trial as well. So um, these are some of the classifications in
terms of, you know, what is available for you in
terms of working with, uh, your instance type, which is very important
at this moment. So you have the generations, you have the current
generations one as well. So this is all current
generation ones. So you can see a
list of a lot of, uh, instance types
available over here. All generations is going to give you more of these instance types, which includes the old generations as well, while current generation shows only the newer ones. So what are these? How are these used? Why
some of them are blocked? Is what we're going to
understand in this video, now, we're not going
to go deep about this, but, you know, something like an architecture which I've designed for you
guys will help you out. Now, instance type
or instance name, as in, let's just say, for example, a micro, sorry, a t2.micro, combines two items over here. So it includes t2, which is the instance family, and micro, which is the instance size. So this is a combination of two things. Okay.
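To make that family-plus-size idea concrete, here is a tiny Python sketch that splits an instance type string into its two parts; it's just string handling, not an AWS API call.

```python
def split_instance_type(instance_type: str) -> tuple[str, str]:
    """Split an EC2 instance type like 't2.micro' into (family, size)."""
    family, size = instance_type.split(".", 1)
    return family, size

for name in ["t2.micro", "m5.large", "r6i.48xlarge"]:
    family, size = split_instance_type(name)
    print(f"{name}: family={family}, size={size}")
# t2.micro: family=t2, size=micro
# m5.large: family=m5, size=large
# r6i.48xlarge: family=r6i, size=48xlarge
```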
So let's just understand about the size, because size is the easiest part; that's why it is at the top. Size determines the amount of CPU and memory. Okay. So a nano is
the starting size, which is the smallest
size of an instance. And it goes till 48 X large, which I've seen recently, where 48 X large comes either with a blown
up size of CPU, blower size of RAM
either of this based on what category
or family belongs to. Now, just give you an
understanding over here, just type nano over here. We can see that it is 0.5 B of memory and one
virtual CPU over here, and you have a nano here on t4g, which comes with
two virtual CPU and 0.5 GB of memory over here. Now, this is basically
the nano type. Now let's look at x48. This is x48 large. That's weird. I'm sorry, 48xlarge; that's why it didn't come up. So here in this situation, you have 384 GB of memory and 192 virtual CPUs. In some situations, you get a whopping 1,536 GB of memory and 192 virtual CPUs. So, likewise, for the 48xlarge, you know, the 192 vCPUs is for sure; you're going to get it on all instances, but then the memory size of this ranges from about 384 GB till about 1,536 GB of memory size. So the charges for that
will also go up higher, so you can see $12
an hour when you are using this kind of
instance on your AWS. Now, price calculators going to help you on that as
well to determine the price calculation
of how it's going to be by selecting
EC two over here. And here as well, you have the EC two
instance type over here. So firstly, you have the operating system
selection and tenancy, we'll talk about it later. And EC two instance over here, and you have all the EC
two instance over here and you can sort it via family. So you have so many
families over here. So we will come to the
families right now. So going back to the
architecture diagram, you have the instance
family over here. What are these instance families? So free tier general purpose is basically the T instance family. What this means is either you get that on a free trial, free tier, sorry, and you have the general purpose. So this whole T tier is designed as a general purpose, um, instance family, so whatever is aligned to it. So the maximum, in my view, is T2 medium; T2 large I've used. I've not used T2 other than large. Then we go into the other general purpose ones over here, which is A1, T4g, T3, T2, then T3a, and then M6, M5. I've used a lot of M5s over here, which are all balanced with memory, and they have a good amount of memory on them. So I've used that. Then you have compute optimized, the CPU focused ones where you have a lot of
CPUs but less RAM, like C6, C5, C4. And then you have the memory optimized, which basically have more memory and less concentration on CPU, like z1d, X1, R4, R5, R6. Then you have accelerated computing, which is more about your graphical processing unit, uh, F1, G4, G3, then Inf1, P2, P3. Then you have storage optimized; it focuses more on storage items over here, like the I, D and H families. So choosing the right family for your instance type
is going to be very useful for your
working with other stuff. So I've given you some theory here; you can take a screenshot, or I'll just put it as part of the documentation, so don't worry about it. So that's how an instance type is created, by combining the family
and size of it. So an instance can start
from the lower standpoint. For example, if you take the graphical focused
ones over here, so if you just select
a G over here, like, say, for example, G two itself over here. And then it's not able to
find any match over here. Let's look at this. Sorry. G two. Sorry, for some reason, G two is not coming up.
Let me go to G three. I can see that G
three starts from XLarge and you can see 30 GB of memory and
four virtual CPUs. And it is charged for
Linux about 0.75. And then you have a
little more extra. There is 4xlarge, 8xlarge and 16xlarge. So you can see the memory and the CPU keep increasing. So you have xlarge at four vCPUs, 4xlarge at 16, 8xlarge at 32, and 16xlarge at 64. So at every step you can see that there's a drastic increase, 8 to 64, in terms of virtual CPUs. That's going to keep going until 48xlarge. For 48xlarge, we saw about 192 virtual CPUs, right? So basically, this is talking
about the CPU numbers. So the xlarge, the 2x, the 4x, the 8x. So you can see as it increases, then obviously you can actually understand the difference now. The difference between xlarge and 4xlarge, what is in between, is the 2xlarge, which should be around double of xlarge, and that's going to be about eight virtual CPUs. If you can calculate that, you can get to know the differences between them. Then when you get to 48xlarge, you will get 192. That's basically the
calculation here. But the RAM here will
increase or decrease. Sometimes they will
give you multiple options of RAM size, because it is more focused on the CPU and the memory. So the same thing over here: the G2 is not coming up, but with the G3 over here, you will actually see the differences. So you can see the potential effective hourly saving cost percentage, so you could have saved this much. Alright, so that's pretty much what I want to show you on this. I don't want to go into detail. There are so many things
which we want to talk about, but let's do it later, because as this is going
to be the first section, I don't want to cram a lot of things into this video. So what I'm going to do is go back to the slide which I was presenting, and I'm going to end it there. So this is the understanding of right sizing. Right sizing is so important that you need to understand the instance type. There are two parts to an instance type: one is the family, the other one is the size. So combining the family
and the size will actually end up in the right
size for your application. But to determine what is your right size
of your application, you can compare it with
the existing environment, what's the CPU and
memory assigned to it. Or you can check with the architect about the application's performance and how much RAM and CPU it requires, and whether any GPUs are required over time, and we can basically, you know, have that rightly configured on AWS.
Thank you again for your time and patience. I will see you in the next video.
12. Labs BYOL and Other Licensing Strategies: Hey, guys. Welcome back
to the next video. In this video, you are
going to understand the licensing strategies. Now, this video is going to cover the difference between licensing models. There are two models which we're going to talk about: bring your own license, and also included license, where the license is included with your AWS instance. So this is the diagram
which I want to show you. So there are these two
classifications over here, bring your own license when you already have a license
with you and you have taken like three
to four years of licensing time frame with
a company or a corporate, right, like a red hat or
Microsoft saying that, okay, I have three years
of licensing left. So is it okay to
move to the Cloud? So you can say, it's really okay because you can
bring your own license, and you don't really
have to pay, you know, the license inclusive in this AWS bill. Or if your license is already over, or you're bringing up a new service, you're planning to bring up a new service with a new licensing model because your old license is about to expire in the course of six months to one year. Then you can actually
pay AWS directly, including the license fee. So you don't really
have to deal with Red Hat or IBM or, um, you know, any other company, Microsoft or anything else; you don't have to really deal with them. All you have to do is make an all inclusive payment, and you can make it
to the, you know, you know, monthly basis
or yearly basis to AWS, and based on the plan
you're choosing. Now, I'm going to show
you some hands on towards this so that you can
understand it much better. I'm going to compare these two things. So if you go for Red Hat Enterprise Linux, you can see that this is a subscription based Red Hat Enterprise Linux over here. So you can see self support, one year. So self support means it does not include Red Hat customer support, and does not include Red Hat Atomic Host or physical systems. So this means that you support yourself; it's just that you will have legal access or legal use of this Red Hat, you know, server, which can be used for a data center. And that's about $383
US dollars per year. And then if you include the satellite bundle, you're going to get it at like 768. This is the total amount over here. In terms of standard support, the hours of support are covered between 9:00 A.M. to 6:00 P.M., and, um, you can see that's the standard business hours. So web or phone support channels, unlimited support cases. So if that's the case, and you enable all kinds of, you know, bundles over here, including high availability, resilience, storage, and extended update support, you're going to get it at about 2,855. And when you go for premium support, which is 24/7 and you can raise severity one and two cases, in that case, it's going to be $3,131 per year. So this is the licensing
cost in terms of Red Hat. Now, when you go to Windows, right? So for the Datacenter edition, it's very much straightforward. It's $6,155. When you go for, like, a physical or minimally virtualized environment, which is core based, again, this is a core based license, and this is $1,069, as in, like, the payment that you need to pay for it. And then you have the small business one, up to 25 users and 50 devices, about $501. So in these situations, you can actually see the
Datacenter edition has this many features and the Standard edition doesn't have that much. That's pretty much the Datacenter edition over here. That's the reason why it costs so much.
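Just to show the kind of back-of-the-envelope math behind choosing bring your own license versus license included, here is a small Python sketch. The dollar figures are the rough numbers mentioned above and are only examples; real quotes will differ.

```python
# Rough comparison of yearly OS licensing cost: an on-prem subscription vs
# an hourly "license included" uplift paid to AWS. Example numbers only.
rhel_premium_subscription_per_year = 3131.0   # example on-prem subscription price
aws_os_uplift_per_month = 27.0                # example uplift seen in the calculator

aws_uplift_per_year = aws_os_uplift_per_month * 12
print(f"On-prem subscription : ${rhel_premium_subscription_per_year:,.0f}/year")
print(f"License included     : ${aws_uplift_per_year:,.0f}/year")

# The uplift only applies while the instance is running, so partial usage
# (say the server runs 8 hours a day) shrinks the included-license cost further.
fraction_running = 8 / 24
print(f"Running 8h/day       : ${aws_uplift_per_year * fraction_running:,.0f}/year")
```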
Now, when you compare the same thing when you are using AWS, you can actually compare
this in two ways. You can go to EC two Instance and you can actually
launch instance. And then from there, you can
select the operating system, and then you can check it out. You can go to the previous, you know, our pricing
calculator over here, click on estimate,
and you can actually search for EC two instance
and then click on Configure. And here you will have the, you know, EC two specification
on the top itself, and then you have
operating system. It gives you a very simple operating
system item over here. So you have Red Hat Enterprise Linux with SQL. We don't want SQL. We just want the HA, high availability; if you select HA, you will be billed $27 per month, as you can see over here. Now, if you select Windows with just a normal Windows Server, it's about $3.80. And you can see that there is also a Windows Server with SQL Server Enterprise edition as well, which is $1,203.40. So, likewise, you have different stuff over here
which you can select from. But if you want, real
operating systems, you can actually go to EC two Instance and you
can actually see uh, how much it's going to cost you. The costing of this will come when you actually
select any of this, so you will actually
see that over here. So as of now, it's selected
an AMI that is free tier eligible. There are these free tier eligible ones which you can actually select, and there are the ones which come with SQL Server on them. Now go with something like Red Hat Enterprise Linux. If you type Red Hat Enterprise, you're going to get these AMIs over here. And here there is one Red Hat Enterprise Linux nine with HVM, which is free tier eligible, and then there are two others which are not: one with SQL, another one with high availability. Click on it. Now, it will say that the configuration needs to be changed; just confirm that. And then when it loads, you're going to get
some details about it towards what kind of items
it's going to have with it. I'm just going to pause the video and resume it once it completes. It's not that it got stuck, but I just refreshed the page and it worked. So this is high availability with the Red Hat Enterprise Linux nine version, and you can see that it comes with an SSD volume. And pretty much that's the information you're going to get. So I'm not going to change any other information over here. It's going to be like a t2.micro. And then when I look at the charges over here, it's not mentioning the charges in here, but this will actually be charged, okay? So that's going to be charged at this level. So it's not going to be free tier, for sure. Because you're going to get charged what is mentioned over here: when you select Red Hat Enterprise Linux with HA, you're going to be charged $27 just for the operating system. So, likewise, you know,
is going to be a game changer for you because you don't really
have to deal with anyone. You can actually deal with
this the complete billing. There's also one more thing
I want to show you is AMIs. AMIs basically MI is
on machine image. Now when you click on the AMI, you can actually go to that EC two instance only directly. And then from there, just
click on Launch Instance. If you see that, there will be operating system names here, right after that, you
have browse more AMIs. When you click on it, you have three options
for you to browse. One is the R AMI which you have created
yourself using a template. And then you have
the marketplace AMI. So Marketplace AMI
is something which is approved by, you know, AWS. And this is where you Oracle Red Hat comes in and then they put their
official image in here. And as a AWS user who's using it can leave a
review about that image. And then you have
community AMIs. So you have Community AMIs, which are community images that are public. Therefore, anyone can publish this kind of AMI by just, you know, adding it to the catalog. And as you can see over here, this is the owner information over here, or the alias name. Amazon itself is providing this image.
you to pay anything. There are this
paid ones as well. Let's just search for something
very popular over here. So in terms of the
AWS customer reading, we'll just do that. N two WS backup on recovery AWS, free trial, bring
your own license. This tells you bring
your own license, which means that you may need to enter your licensing
at some point of time. And then you have
other services, and then you have the
SAP business as well. You can see that there
are other services also. And there is this
Microsoft 2019. So if you just go
to the details, you will see the
overview here and how much you are supposed
to pay for an hour. So this tells you about the
pricing index over here. You can click on pricing. Actually, you can see some
information about it. So now, pricing differs
based on the, uh, volume, sorry, the type of EC2 instance. Now, here this pricing of 3.384 is talking about when you're using an m4.large. But if you're using a t2.micro, which is the free instance, a t2.micro with this one, it's going to charge 0.016 per hour. Now, you can save on it; I will tell you the saving
tips later point of time, so we have not come to that, so we'll talk about it
later point of time. Um, so this is pretty much how you can understand about
a specific product, and, you know, this
is a trial version. Now, this is paid. So here, it's a c5a, a 2xlarge, sorry. And in US East. So that's going to be the per hour price over here
if you're going to use this particular AMI
or machine image. So, likewise, you're
going to have different machine
images which are some are free, some are paid. You can see that this
is even cheaper, but this is using
t3.medium. T3 is like a standard class of EC2 instance type, and medium is the size of the instance it's using. And you can see that it is pretty expensive, I would say, in my opinion, for openSUSE. But likewise, you're going to have people coming in from the company point of view; this one is from openSUSE itself. But, likewise, you will see, um, images which are approved by the vendor, and then there are open community images. So the prices might
differ over here, so you can see that there is this person's name over here. You click on it, it's
a verified provider. So if you select this image, you will actually incur a
little bit of cost based on, you know, whatever the price is set at. Sometimes it could be free, and then you only have to pay for the instance. And pretty much that is
something which you need to do. And now, when you go
into an organization, they wouldn't actually be recommending any of the community images, because they would either go for what is available in the existing, um, you know, machine images, or they would go for, you know, an approved image from AWS and from the marketplace. So they wouldn't go for community AMIs. So do remember that. The marketplace option is something which is quite solid and, you know, recommended to go for. Though it includes some licensing fees, it'll still be worth it, because you will get support and maintenance from the support team in case you need some help on some of the issues being faced over here. Now, there's another category over here called Quick
Start AMIs which talks about some of the AWS
recommended, you know, AMI. So this is basically what you can see on the EC two
instance launch instance, when you see the
default images, right? Now, where is this coming from? Where are these images coming from? These are coming from Quick Start AMIs. So these are basically the base operating systems; some are free, some are paid. But the mostly used items are over here so that you can quickly select one rather than, um, you know, going and browsing through the AMIs, um, the marketplace AMIs. So that's the difference between the Quick Start AMIs and
the marketplace AMIs. So this is pretty much what I want to show you on this video. I hope that you understood the difference between
bring your own license and the license which
is inclusive in your, you know, the monthly payment which you're supposed to pay. So for example, uh, my conception over here
over the last two months, it also includes the
subscription of the services, which I have used as part
of my EC two instances. So that's going to be included
as into my monthly bill. So it's going to look like this. All right, thank you again
for your time and patience. I hope that you understood
this topic pretty much, and I will talk to you
on the next video.
13. Labs Fixed Cost Vs Variable Cost AWS Pricing Calculator: Hey, guys. Welcome back
to the next video. In this video, we are
going to understand about the concepts of cloud economics. Now, as you can see on
the document over here, so we have the, um, you know, task number four, which says understanding the concepts of cloud economics. And within that, you
have multiple topics. You're going to get a lot of
questions from this topic, especially related to
bring your own license, fixed cost and variable cost. Something about the benefits of automation and also something
about the managed services. Okay. So even though we
discussed about some understanding over here
about the managed services, we're going to understand
very pretty much deeply over here about
managed services. And this is again going
to be an overview. We're not going to go inside each of these managed service, which is really not required
for this at this moment, for your certification
at this level. So, anyway, we will have
a deeper understanding about that later point of time. Now, what I want to start with is these two
services over here. I'm sorry, these two
skills over here, which we are going
to understand about and we're going to do
some hands on as well. So this training is going to be both architecture as well as it's going to have hands on. So let's just get on with it. Now, understanding
Cloud economics is basically understanding about what is the
expenses which is going to be incurred
towards your customer. Now, previously in the
previous architecture, we have identified the
adoption framework, and we have uh, worked with how you're going
to migrate your, you know, application from on prem to
your cloud cloud data center. Now, you have understood
these two aspects. Now, this is where
the customer is happy that he knows
that the pillars, he knows that, you know, the framework, and he
knows how to migrate it. So you have told
the customer about those. The next thing you're going to tell the customer is, like, what is the economics all about? Like, how much money they can save in terms of fixed cost and variable cost, how much they're going to save in terms of licensing, and by migrating to a managed service, how much it's going to actually save your customer. So these are things which are very important that you, as an AWS engineer, are expected to deliver
this to the customer. And that's the reason
why this is part of your certification course
at the foundational level. So to do this 100 percent without any issues and, you know, deliver this to the customer, you need to understand
firstly about the fixed cost and
variable cost. I have a small example for you as well in terms
of this hands on. So now, these two
topics are related. That's why it is seen
together as a section. Now, here you have the
understanding the role of fixed cost compared
with variable cost, and you have understanding cost associated with
promises environment. Now, as you can see over here, fixed cost is nothing but we
just spend on, um, you know, specific amount of,
um, you know, device, procurement, like a hardware, and you kind of have to
pay them in advance. And you don't have
that requirement. Like, for example, um, if you go and search in
amazon.com and you want to buy, um, a storage size of, you know, a server,
hard disk drive storage drive or storage size of two TB or three TB
or four TV and you feel like that's the maximum
of content you need to have. Let's just say, for example, one TB of a data or
three TB of a data. You will not be able
to customize that. You will have a specific limit. You can see that your
Western digital red plus is about four TV. If you're buying a four
TV hard drive like this, you need to buy
another hard drive, which is four TV as well because you need
it for redundancy. You need it for fail over fault tolerance and
stuff like that, because that's how protection
systems are designed. So you should have a backup
of the similar data. So you need to buy
two of these four TVs and both the four TB
you'll be actually using only one TV just
because there is nothing in the market
which has for one TB, you have to buy the minimum one. Either you go for
two TV or four TB, but as of now, only
four TV is available. So you have to spend
like $99 for this, and you have to
spend another $99 to have a replication for this. So it is like $200 for this. And if you don't have
a choice, right? So you have like eight TB of storage over here, which is around $200 over here. So you can see that. This is a pretty good
hard disk drive because you want to buy a hard disk
drive for your server, which is pretty much expensive. But this is completely
different when you go for actual
server storage device, that's going to be
completely different. So we have different
websites from the vendor, and they will actually
give in the quotation for that later that something
which is very different. So we're going to compare
this as an example. So Um, so you can see that this is
configured with a RAID. Let's just say, for example, assume that it is $200 you're giving for eight TB of storage, but you're actually using only two TB of data. Okay. So now we have
a superb tool in your um AWS service called
billing and Cost Management. When you enter billing
and cost management, you're going to get this tool
called pricing calculator. You can also access
this pricing calculator directly via calculator.aws. Now, when you access this URL, it actually goes to the pricing
calculator over here. So now this pricing
calculator shows you the estimated cost of your
architecture solution. Like, for example, we are talking about storage
at this moment. So now you are going to
purchase it on on prem, and as I told you, storage is not available in a
customized size, okay? It's only available in two, four, uh, eight, 16. Likewise, you have that
which is available. And the second thing is that the vendor you may
be going with, they want to sell a higher
product for a lower price. So they will always pitch
in for a larger storage, even though you're not going to actually use that
much of a storage. So you kind of have
to pay, you know, one shot and you will
be actually, you know, getting a storage
device which has, like, a huge storage. But what you'll be actually
using is not that much. So you'll not be using
100 percentage of that. So you kind of have
to spend that amount, and you have to spend it
early as well before even, you know, anything else happens. So, likewise, um, there's
a lot of problems there. There's a lot of
complications there. But when you work with AWS, that's where the
variable cost comes in over here in the
cloud computing. Now, using this
price calculator, you can actually calculate it according to
your requirements. So for example, let's grade
an estimate over here, and I select North
Virginia as my region. There's a reason why I do
that later point of time. I will tell you why
I choose region. And here I give S three
as a service name. So here I just click
on Kid figure, and you see that there are so many classifications
under ST. So for Lambda, you have, infrequent access so you know, you have Express also over here. So likewise, you have so many. But what by default selected is the standard and data transfer. So these are the two
things which is used. So let's just say, for
example, per month, you are going to upload
like 200 GB of data. So it's not like you
will be actually uploading as you're
cating an application, so you will get new
users coming in data of the users will be loaded into your database or
your data drive, which will be using
S three here, and it will increase
gradually two TV. It's not like you're going
to get it in a month's time. It may take a year for it to
gradually grow to two TB. In those situations, you can say 200 GB per month is
what you are going to. You are expecting that the
data should be available. What kind of how much data
will be moved on ST standard. The specify the amount of data
which will be moved, yes. So, this specific amount of data is already stored
in ST standards, so that's pretty much this one. And put a copy post list, which just run, you
know, frequently. So I would say all of this. Um, so I have to
mention it by number. So I'd say like 100 times
you will get something like this and then get list and all this 50 times or
something like that. So here, the data data
returned by S three selection, the data return and
then data scanned by S three selector.
So it's not much. I guess, like ten
GB to 20 GB range, you will have data scanned and returned
by S three select. So pretty much that and then you can see this
calculation over here. So you have the
price of 200 GV and you actually have monthly
expense of $4. I see that. So you have actually, you know, spent $200 on ATB drive, which is a long term
and you won't be actually using that
that's very big thing. So you won't be
actually using ATV Uh, at the start of the stuff, you will be actually
only dealing with 200 GB of data and then it
gradually grows, right? Now, you can see how much
money you can save over here. So this money, the $200, if you had put that rather
than putting it upfront, and you pay like $4 every month, you kind of earn the interest in that money because you will not be actually
spending the money, you'll be actually
having that with you, which means the interest will
be with you for that money. You could have invested
in something else. And actually, the total expense of the S3 standard cost is $4, including all those PUT and GET requests and stuff like that. PUT and GET is not much, as you can see over here.
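The calculator is just doing simple arithmetic behind the scenes, so here is a small Python sketch that reproduces the same kind of estimate. The per-GB and per-request rates below are only example figures, roughly in line with the numbers shown above; always check the current pricing page or the calculator itself.

```python
# Back-of-the-envelope S3 standard cost estimate (example rates, not official pricing).
storage_gb = 200                 # data stored this month
put_requests = 100
get_requests = 50
egress_gb = 200                  # data transferred out to the internet

price_per_gb_month = 0.023       # example S3 standard storage rate
price_per_1k_put = 0.005         # example PUT/COPY/POST/LIST rate
price_per_1k_get = 0.0004        # example GET/SELECT rate
price_per_gb_egress = 0.09       # example internet data transfer out rate

storage_cost = storage_gb * price_per_gb_month
request_cost = (put_requests / 1000) * price_per_1k_put \
             + (get_requests / 1000) * price_per_1k_get
egress_cost = egress_gb * price_per_gb_egress   # inbound transfer is free

total = storage_cost + request_cost + egress_cost
print(f"storage ${storage_cost:.2f} + requests ${request_cost:.2f} "
      f"+ egress ${egress_cost:.2f} = ${total:.2f}/month")
```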
And in terms of the data transfer feature, you can add a data
transfer feature like Internet free, and you can actually put in, how much of a data transfer
you expect over here. And you can actually see
the calculation over here, just say one TB per month and a data transfer to
Internet again on TV. You can actually see that
it is $92 over here. But this is one TB terabyte of data transferred
over the Internet. Okay. So now here, the data transfer from the inbound data transfer
is completely free. So if I'm not going to export this much of data
outside to the Internet, then I'll not be actually
charged anything. So the incoming data
is completely free. So I can take in
data to S three, but I can restrict people from accessing
my data from S three, and I can use internal services. Like, for example,
I can use another, um, you know, another region. To capture that data. So if you just go for that
and I'm just going to access this 200 GB of data, and you can see that I incur $4 within all region and even if I transfer it
over the Internet, it's not going to
be much 18 USD. Now, this cost of 18 USD, you would incur no matter what, even use even if you buy
this hard disk drive, you still have to pay for
the Internet charges. The Internet is not free when it goes outbound or inbound. This charge would eventually occur to you as the
Internet charge for you. So that's the reason
why data features, data transfer feature is
separated from ST because this includes the Internet
fees of you transferring data. But the good thing
is that Internet is free when your inbound data, and outbound data is
charged because you are actually uploading
it to somewhere else. Now, you can upload it to Cloudfrind which is again, free. A Crowdfrind as I told you, is a cache server, and
it basically, you know, caches the information
and sends it to the customer who's
browsing it for a faster, you know, accessing
of your website. Now with all these fees, you can see a total monthly
cost of 22 or 22 USD, which is really cheap when you compare with this guy over here. When you start consuming more amount of data
when you are coming to a level where you are actually utilizing
two TB per month. Let's just say two TV per month. You can actually see how much you'll be spending per month. You'll be actually spending $47. You're not even come close to what you're
spending over here, but it's a monthly
charge I know, but it will keep on
recurring every month. But there is always
a way that you can um archive the data which you're not using and send
it for deep archival state, there are other
services which will hold your uh, you know, data even cheaper than S three, because that will be
in D archival state. We will talk about those
services later point of time, but you can archive
those services as well. Likewise, this tool is so important for anyone who is starting up their
career with AWS, where you want to understand
about the services, how it's going to impact we
are going to use a lot of this tool to understand about or hosting this because
as an engineer, I've used this tool so many times when I'm working
with a customer and their needs and wants is all to save money and save costs and automate
most of the process. So to do that, this tool is so good and I'm going to teach you more on this
later point of time. Coming back to this
particular action item, I hope that you
understood what is a fixed cost and what's
a variable cost, and you understand the expenses which happens at the
on premises environment. The expenses include, um, the purchasing and maintaining
storage devices, power and cooling, and in terms of maintenance
and staffing, you need to hire IT team which actually needs
to be dedicated and they should be actually looking out for your
data center service, which you actually don't need
in terms of cloud computing because we're going to see on the shared responsibility
later point of time, but you will be
amazed to know that AWS takes care of the service where you're
going to host your services. So you don't really have
that overhead of hiring IT team or something like that to manage your data center. And then you have
data recovery and backup cost which
basically tells you that you can actually recover disaster and you can
recover your data, and it's coming for free, because if one availability zone goes off, there's another availability zone which will take over your data, and this data is transmitted between these two availability zones. Whether you opt in or not, it doesn't matter; the data will be available in two availability zones by default from Amazon, so you don't really have to worry about that. But when you are operating on premises, as I told you, everything you buy, you need to do plus one, plus two. If you want to keep redundancy on that, or if you want to go for a different RAID level, you need to go for plus two, or plus one by default for mirroring. At least that mirror
should be there. That's the reason why we have, you know, why we go for variable cost and cloud computing. Thank you again for watching.
If you have any questions, please leave it on the
questions section. I'll see you on the next video.
14. Labs AWS compliance: Hey, guys. Welcome back
to the next video. In this video, we are going
to talk about task two. So firstly, we're going
to understand about the AWS cloud security
governance concepts. Now, in this particular task, we were going to
get a knowledge of AWS compliance and
governance concept. Going to understand
about the cloud security like encryptions and
few other services. We are going to understand
about the capturing, locating logs using
Cloud security, I'm sorry, associated
with cloud security. So there's going
to be another set of services we're
going to see there. So now here, as you can see, we are going to firstly take these two topics because
they are more less related. So we're going to see it as a specific, you know, section. And then I'm going to go with other topics
later point of time. So here, identifying where to find AWS compliance
information. For example, AWS artifacts. So now, compliance is
very important for us, and we know it when the company goes through audit session or when you exhibit all your
compliance in your website. So you need to make
sure you need to find those compliance
certificates from AWS because that's
your partner now. That's where you're
hosting, and there is also some geographical
location based compliance which you need to showcase, as well as like there's compliance for industry
standards as well, like healthcare,
finance, industries. So likewise, we have
certain requirements, so we're going to see more
in detail about that. So let's go to my drawing here. I have two pictures
right for you guys, and both the pictures talks about the AWS
compliance information, which is AWS artifact and the geographical industry specific compliance
requirements. So now, these are the two things we're going to see
on this video. So we actually have a demo in terms of this
particular service, which I'll show you
just in a moment. But what can you get
from this service? Well, AWS artifact is
a self service portal for accessing AWS compliance report certificates
and agreements. It allows consumers to download
compliance documents like SOCs reports and also
ISO certificates, PCI requirements,
and certificates. So, likewise, you can
actually use that for consumers can
use AWS artifact to meet I'm really sorry about the background
noise if you can hear me hear the
background noise. I'm really so sorry about that. To meet internal compliance
or auditing requirements. So these are some
of the requirements why we need these report for. Let's just go to the demo
and then finish that off. Just go over here and
search for artifact. So now just middle click this and then should open
it on your window. I've already opened it because
my Internet is a bit slow. Now, here is the home section. So this is two things you can do over here,
which is very important. You can view agreements,
you can view reports. Now, when you have
created organization, which pretty much right
now, you wouldn't have. So your agreements is going to be for both
your item sorry. Both your account as well
as your organization. So firstly, let's look
at the agreement. Now, we have not agreed to
any agreement at this moment, so you can select an agreement,
and then you can accept. But there are certain agreements based on a
geographical location. For example, you can see
New Zealand Australia. I think this is pretty
much general, I guess. So you can read the agreement
to understand about it. But once you've created
an organization, you're going to have
another tag over here, which is going to be about
your organization agreement. So you have signed into a management account
of an organization. So you just have to create service rules and
give permission. So then you will see the
organization agreement. So we'll do that later. But if someone asks you to download it or if
there is a question on where you will actually
find the agreements for your, um, you know, your SOC report or ISO
certificates or PCI certificates, then you can
actually say them in the AWS artifact under
reports or agreements. So so they will clearly ask you where you
get the certificates. You can actually get
the certificates from reports and everything, where you will sign an
agreement if they ask you under the agreement
section of AWS artifact. So now here, you can search
for any kind of a report. Like, for example, SOC report, you can just search
for SOC over here and you kind of get the
report over there. So now, this is SOC 2, which is a Type 2 report that evaluates the AWS controls and meets the criteria for security, availability, confidentiality and privacy of the American Institute of Certified Public Accountants. So, likewise, you will get that; select it and then download the report. So, by default, it says that, um, uh, the reporting period is March 2024. That's the last financial year. So, you can clearly say that this particular report is basically the, um, you know, SOC 2 Type 2 report. So this is basically what you can
website saying that you are quite compatible wherever you
are hosting your content. So this is basically what is AWS artifact service
and service it provides. Now, when you talk about
the second point over here, which is about your geographical industry industry specific
compliance requirements. Now, geographical
compliance is basically different regions or countries will have specific regulatories, like data privacy
and security acts. So you need to comply. Your hosting partner should
comply with those things. So the question would be like, how do you know whether, you know, you have a GDPR, which is your general
data protection regulator for a European Union or
a California customer, sorry, Consumer Privacy Act, which is CCPA in United States. So these are some regulatory
compliance which is required by AWS to be provided to all the
customers for audit purpose, and these are something which
is geographical compliance. And there are certain things
like industry specific. For example, you need HIPAA for healthcare for
finance, we need PCIDSS. So these are something which
needs to be required if you want to host your application
on a cloud provider. So if a finance or banking
application comes into AWS, the first thing they
would do is they'll check if this is
compatible with that. So that they can host their
application on this platform. How do you make sure of that? There's another tool
over here which you can actually Google for is
called AWS Compliance. Now here, you can actually
see that whether you can use this for
your requirement. Now here, PCI DSS, which is for finance, HIPAA, for your um healthcare sector, and then likewise,
you have GDPR, which is for your
European Union. Likewise, you have all those satisfactory
compliance requirements around the globe. So um, AWS has them. That's why they have
put it out here. You can also read
the white paper to get to know about
that much more. You can also go to the
compliance website page and you can actually look at
all the compliance which it is part of. So you can see how many
compliance it is part of. So you can see that
in the regions as well the specific
regions like America, you have all those certificates which is required
for you to host, you know, your items over here. This is for Asia Pacific. You have ISO 2000 for India. You have um, IRAP for Australia, and then you have Fintech
certification for Japan, and then you have this government program for assessing the security of public cloud services for Japan. So likewise, they
kind of renew it and make sure that they
have all these things before you go ahead. So you have to tell that in the examination that you will
check all these, you know, region wise by going
to AWS Compliance, portal, and that's where
you will actually get it. So to get to here very simple, just Google four AWS, uh, AWS compliance, and you can
actually get to this page, and from here, you
can actually see this compliance
program web page. That's all guys for
you in this video. I hope that it was helpful
for you to understand these two concepts and how
to navigate this portal. Thank you again for
your patience and time. I'll see you on the next video.
15. Labs AWS Detective: Hey, guys. Welcome back
to the next video. In this video, we
are going to talk about Detective, AWS Detective. Now what is AWS Detective? Detective helps you investigate security issues by analyzing and visualizing security data from AWS services like CloudTrail and VPC Flow Logs. We will talk about all those things at a later point of time. So CloudTrail is basically looking at the trail of API activity, the path a request took to reach its destination. That's CloudTrail. VPC Flow Logs talk about how, you know, the connection
flow is all about in VPC. As you know, VPC is nothing but your
virtual private cloud. So it's a network setup. So anyway, we'll
talk in detail about that later point of time what
is VPC and stuff like that. As of now, you
just have to think about you are
collecting data here. Okay? Now, what data we are
collecting the service data, like how the service
is flowing through, how the the connections are coming through, so
it's all about that. Detective makes it easy on
identifying the root cause of a service issue,
or security issue. Let's talk about an
example over here. Suppose you're receiving
an alert from GuardDuty that the EC2 instance has been making suspicious API calls. This is what we discussed on GuardDuty, right, where it detects all the suspicious API calls and it looks at your application and how, you know, you secure your application. Now, instead of manually sifting through the logs, you can use AWS Detective
to automatically compile and visualize the relevant
logs, like CloudTrail events and VPC Flow Logs, and then Detective helps you quickly
trace the attack path. And identify the
compromised resource. These are something
which the AWS detective does and basically detects that. Now, you remember this thing when a question comes in
towards AWS detective, is that what is the
purpose of Detective? Now, Detective's purpose is to give you visualized data from looking at CloudTrail and VPC Flow Logs; rather than doing it manually, we use these logs to understand the path of attack, okay, and identify what resources are compromised. So it gives you the visual, um, you know, security diagram or data, which will tell you about this. Okay. Let's just
and benefits. So deductive All right, so AB is too active over here with this
magnifying glasses. So that's going to
take some time. Alright, let's look at
the features as it loads. So features are basically
log aggregator. One of the important
things is that it basically aggregates
log from what is aggregation aggregation
is like caratic from different sources
and understanding and analyzing the logs. So here is automatically
pulls in logs from multiple AW sources to provide context for
security issues. Interactive investigation
enables you to explore security incidents
and visualize data flow, and it also gets integrated with guard duty cloud trial
for deep investigation. And what are the key benefits? Simplified investigation allows automatic correlations
and visualization data, makes it easier to
trace the incident and makes it easier for you
to understand as well. In terms of faster
incident response helps you identify and respond security issues faster by providing comprehensive
view on data. Let's go and look at this. So you have a good video
here over here from AWS on how you see
your AWS detective. And what are the charges for
log processing over here? You can see that zero, 2000 GV's going to be $2 very cheap. The next 4,000 GB is going
to be about $1 per GB. So it's like $2 per GB. I'm sorry, I said too cheap, right? I don't think it says. So first zero to 2000
GB is about $2 a GB, so I mean like thousand means
it's like $2,000, I guess, forum thousand GB of
data, um analysis. Um, and over here, you can see some
benefits over here and do we have a free trial? Yes, there is a 30
days free trial on this Amazon detective.
That's amazing. So it's useful for
us to get started. So just click on this
and you can actually, um, enable directive
for this account. You just go over here, enable Directive
for this account, and then you can actually enable Dective for this account. So here, delegate
administrator is not needed. I mean, if you are
adding another user over here as we are in
the organization, that's why it is coming. Unless you don't see that, so you can just go
ahead and enable it. These are some
notifications about this, so we're just going to
close it over here. All right, so this
is the directive. Using ARBs organization, you
can enable the you know, you can delegate
account and you can actually enable for the
entire organization, or you can just do
it for yourself. So that's pretty much there. So here you can also invite other members
by inviting this. I have just one
account over here. That's the main account, so
I'm just going to use that. So you can ignore it. Integration, so you can choose the integration over here
where the services will be integrated with
Security Lake; this is the raw log for your data source, so you can enable Security Lake over here. So this integrates with Security Lake and gets the logs. And then under General, you can see some general information about this. You can also see Disable Amazon Detective over here; your 30-day trial will end if you disable it. Um, over here in the service you can find the investigations and the finding groups, and you can actually see the geographical location of the investigation. And then here you
can see finding groups and investigations
over here, which is the current
investigation, and then what do you
want to search for? So, likewise, you have
all these options over here under Detective. Anyway, guys, thank you again for watching this video. I hope that you understood how to work with Detective, what the use of Detective is, and how to interact with Detective. So again, there will be no questions on how to configure Detective or something like that; it's going to be about what Detective is, and it could be about one of its features, its key benefits, or what Detective detects. So the questions would be around that. Thank you again for your patience and time. I'll see you in the next one.
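By the way, if you ever want to drive Detective from code instead of the console, here is a rough sketch using boto3, the AWS SDK for Python. Treat it as an illustration only: the region is an assumption, and the behavior graph is whatever your own account returns.

import boto3

# Minimal sketch (not from the lab): enable Detective and list behavior graphs.
detective = boto3.client("detective", region_name="us-east-1")  # assumed region

graph = detective.create_graph()          # enables Detective for this account
print("Behavior graph ARN:", graph["GraphArn"])

for g in detective.list_graphs()["GraphList"]:
    print(g["Arn"], g["CreatedTime"])     # graphs this account administers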
16. Labs AWS GuardDuty: Hey guys, welcome back to the next video. In this video, we are going to work hands-on with GuardDuty. What is GuardDuty? It's a threat detection service that continuously monitors your AWS account for malicious and unauthorized behavior, including unusual API calls or compromised instances. So GuardDuty is basically a service whose name you can remember like this: it does the duty of
guarding your instance. Now, what's the difference
between shield and guard duty? Because both are
somewhere related. Shield, you can consider
as a reactive approach, guard duty as a
proactive approach. So, um, shield is protecting you from the
incoming attacks, okay? It's mostly reactive stuff. Guard duty over here
is a proactive one where it detects
threat even before it happens where it looks
for in your AWS account for malicious or unauthorized
behavior including, um, you know, any
kind of API calls which is happening
around your account. For example, a suspicious pattern of API calls could indicate that the computer you've been using to connect to AWS has been compromised, and that those AWS calls are coming from such a computer. Likewise, GuardDuty guards your existing information on your S3, your RDS, IAM, your VPCs, the EC2 server instances inside the VPC, and any kind of container services. So all those services are monitored using GuardDuty. Okay. Now, let's talk about
the real time example. Imagine that if you're running a financial
application like a banking system, and GuardDuty detects an unusual API call, such as access to sensitive data or access from unfamiliar IP addresses, GuardDuty raises an alert, helping you to investigate the potential unauthorized access and take preventive action against such breaches. Now, GuardDuty will mostly be looking at the API calls and how, you know, communication comes through. Um, so this is more
or less in that area of security or securing
your application. Let's talk about a couple
of features over here. So one is the
continuous monitoring: it monitors CloudTrail, VPC Flow Logs, and DNS logs for suspicious activity. Anomaly detection, in which it uses machine learning to detect anomalies in account and network activities. And automated alerts is also one of the features, where it will send an actionable security alert when a threat is detected. So you can connect this with EventBridge, and then SNS will send it via SMS, email, or Slack, and you can also connect it with a Lambda function to trigger a function there, as in the sketch below.
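Here is a minimal sketch of that alerting idea in boto3, assuming an SNS topic already exists; the topic ARN below is a placeholder, not something from our account.

import boto3, json

events = boto3.client("events", region_name="us-east-1")
sns_topic_arn = "arn:aws:sns:us-east-1:123456789012:guardduty-alerts"  # placeholder ARN

# Rule that matches every GuardDuty finding published to the default event bus.
events.put_rule(
    Name="guardduty-findings-to-sns",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

# Send matched findings to the SNS topic (subscribers: email, SMS, Slack webhook, etc.).
events.put_targets(
    Rule="guardduty-findings-to-sns",
    Targets=[{"Id": "sns-target", "Arn": sns_topic_arn}],
)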
In terms of key benefits out there, it's basically proactive threat detection, detecting the potential threat early, such as compromised instances
or malicious API calls. And as well, no infrastructure management: it is a fully managed service, so you don't need to deploy or maintain any infrastructure. So let's just quickly look at how it looks. So, GuardDuty. That's the service out there, so just middle-click that, and, um, it's just a bit slow. Sorry about that. All right, so now I just unpaused my video, and it took about 5
seconds for it to load. So here you have the
Amazon GuardDuty, intelligent threat detection for accounts and workloads. So it protects both, as I told you: if someone is making an API call, a command line call, or anything like that, and it finds it suspicious, it's going to trigger a security alert. So it's not just about the service level stuff, but also about your account level stuff, like how people are accessing it. And then GuardDuty gives you two options: the all-features experience with full threat detection capabilities in Amazon GuardDuty, or malware protection only, and in this one it's S3 only. So you either go for all features or S3 only, and your pricing will depend on that. Now you can see the features and benefits: malware protection for S3, easy to deploy and scale, up-to-date threat detection protection; likewise, you just go over here and get started. Now, here you have
some permission roles. We will talk about this later; this is the shared permission between other services and this particular one. But we're going to use the default one, so we're not going to worry about it. The protection plan over here, when you enable GuardDuty for the first time, enables the GuardDuty protection plans except runtime monitoring and malware protection for S3, both of which can be enabled later through the console and API. So all we have to
do right now is to enable you get 30
days free trial, so that's very good news for us. So you're going to get
30 days trial over here. If you want to delegate an
administrator role over here, you can put in the ID,
but I'm just going to go with Enable GuardDuty over here. I'm going to use the default ID for that. So going over here, GuardDuty is successfully enabled. Now GuardDuty is enabled, and all it does is monitor my account access and see how things are being worked with. There are no findings over here because it may take some time to analyze my login information, API calls, and stuff like that. So pretty much nothing to report at this moment. So you can also enable
runtime monitoring. There's a remaining 30-day trial for Fargate, EKS, and EC2 instances as well. Runtime monitoring checks things at runtime, which means that while the service is running it's going to keep checking and working with it. You can also enable it for EKS, which is the Kubernetes service, Fargate, which runs ECS, your container service, and EC2, which is your server service. Okay, so that's pretty much that. And you have the option of enabling malware protection for S3. This is going to enable notifications for you if someone uploads malware into your S3; likewise, RDS protection is
also there as part of this, and Lambda protection is also there as part of GuardDuty. GuardDuty is one of the important security services, and you may well get one question on GuardDuty in the exam because of the importance it has been given. So again, GuardDuty is a threat detection service that continuously monitors your AWS account for malicious and unauthorized behavior, and this is something you should remember when you go for it. And it is a proactive service: it detects the threat early, and it is applicable for all the services in your account. So it is applicable for RDS, which is your relational
database service or we can call it
a managed service. It applies for Lambda, which is basically your
fully managed service, and then you have the
protection for S3 and EC2, and runtime protection for other services like EKS, which is a container-based managed service, Fargate for ECS again, and then EC2. So, likewise, you have protection from malicious content and stuff like that, and for account access as well. That's something which you can actually use GuardDuty for. Thank you again for
your patience and time. If you have any questions, leave it on the
questions section. Do remember to refer
the, uh, cheat sheet. If you want to take a quick look at all these services, it has a 20-second read on each service so that you get the important points before your exam. Thank you again. I'll
see you in the next one.
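And if you'd rather script what we just clicked through, here's a hedged boto3 sketch: it enables a detector in one region and prints any findings. The region is an assumption; nothing here is specific to my account.

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")  # assumed region

# Enable GuardDuty in this region (equivalent to the Enable GuardDuty button).
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Findings take a while to appear, just like in the console.
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
if finding_ids:
    for f in guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)["Findings"]:
        print(f["Severity"], f["Type"], f["Title"])
else:
    print("No findings yet - GuardDuty needs time to analyze account activity.")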
17. Labs AWS Inspector: Hey, guys, welcome back
to the next video. In this video, we are going
to talk about AWS inspector. Now, let's talk about the
usage of this inspector, how we actually use
this inspector. As this diagram says
a lot of things, now, inspector is an automated
security assessment service that helps identify
vulnerabilities and deviations from best practices in your EC two instance
or in any containers you're running on your EC2 instance, and in the workloads and applications that are part of your EC2 instance. Not just the EC2 instance itself, but also what has been hosted on the EC2 instance gets inspected. Now, how do I
remember this name? The word inspector is
itself a very big clue. It tells you that it does the inspection of vulnerabilities: it inspects your EC2 instances and looks at your application for any vulnerabilities or CVEs. Okay, so this is how I remember the word Inspector. So when you see it on the exam, think: what does it do? It inspects. What does it inspect? It inspects EC2 instances for vulnerabilities, CVEs. Now what is a CVE? CVE stands for Common Vulnerabilities
and exposures, where it gives opportunity
for the exploiters like hackers who can exploit these vulnerabilities and then enter into
your application, gather details about it, and ask you for ransom. We don't want to get
into that situation. We don't want to
put our companies which we work for
in that situation. So it is always good to use
the inspector to scan it, understand what
vulnerabilities you have. So what it does is, it not just scans the
EC2 instance, which is taken care of by AWS, but it scans your application and the containers
which you're running. And it basically gives you
the vulnerability report. It looks for vulnerabilities, security issues,
unpatched software. So those are the
things which actually looks for and tries
to eliminate. Alright, so let's talk about key benefits over here quickly. Continuous monitoring is one of the key benefits over here, ongoing scan that helps ensure that the new
vulnerabilities or misconfigurations are
detected and as they rise. So it's not just a free tool, so you kind of get
charged for it, but uh, basically what it does it keeps on continuously monitoring
your application, and it gives you any kind of alert trigger in case
of any kind of issue. So you can connect this with
EventBridge and then send it to SNS and then send
it via SMS or Slack, or you can write a
Lambda function and then you can route it
through Lambda as well. Likewise, you have
different features. And we're going to see
all these things is going to do the same thing from
the next video onwards, because these are some
security related services which runs on the background. So what we have seen before
is WAF and Shield, and WAF and Shield is basically a security firewall which
protects you from issues. And these are what we're going to see is
the security stuff, which is like Amazon inspector, which is after or it's a
proactive measure to contain it, so it runs on your
system and then it proactively contains it. So Shield and WAF
is a reactive one where when attack comes in, it is going to be
a reactive stuff in terms of reacting
to the attack. But these are proactive stuff what we are
seeing right now. So this is pretty much what I
want to show you over here. Sorry, the background noise is something I
cannot control. You can just go and
type AWS inspector, and then you're going to get
Amazon inspector over here. And here in the
Amazon inspector, you are in the home page, and you can see
that how it works. So basically the
inspector gets to inspect and what did
inspect is your um, you know, um, here, I guess, enable the inspector and then
automated workload discovery. I'm not sure if you can see it, but this is what I told you just now, so
don't worry about it. So you just have to
click on Get started. Then you have to give
permissions to it towards what kind of account administrator
rules you want to give it, and then you can just click
on activate inspector. So this is going to be activated on the default
account if you're not specified anything over here as a delegated administrator, so it's going to use
the default one. So just once you activate it, you have a 14 days of trial, which is free, and you
can actually see that there is no nothing to be
working at this moment. Maybe I have to start
my EC2 instance. I have Ubuntu on this EC2 instance; um, it's been a while since I used it. So if I start the EC2 instance, maybe I would get some vulnerabilities, I don't know. So let's start it; I know I'm going to be charged for it, but let's get one
of those instances. So it's going to take
a while to start it. And then we are going to
look at this vulnerability. If there's anything
as a vulnerability. As I told you, um, it comes with 14
days free trial, and then after that, you're
going to get charged for it. Okay? So you can see the
monthly projection cost over here in the usage, and basically by the scan type you can actually see
EC two scanning, ECR scanning for container
service, Lambda scanning. So these are three items over here, which
is part of this, and I am running on a trial version at this
moment for 14 days, sorry, 15 days over here. So you can also click on pricing here and learn more and
you can actually see that. So I guess once my
instance is started, it would start scanning. So it has already found three instances over
here, zero out of three. So maybe after a while, I'll try to show you if I'm
just doing another video, I'll try to include
this as well. So just see if
something comes up over here as vulnerability. All right, so I don't see
anything at this moment. Well, anyways, thank you so much for your time and patience. I will see you on
the next video. And do remember, I'll keep this open for you guys in case something comes up over here; my instance just started running, so it may take a bit of time. So let's see if something comes up. Thanks. Thanks again, guys.
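For reference, here is a small boto3 sketch of the same activation we did in the console, using the newer inspector2 API. It's an illustrative example with assumptions: the region is a placeholder, and findings only show up after scanning has had time to run.

import boto3

inspector = boto3.client("inspector2", region_name="us-east-1")  # assumed region

# Activate Inspector for EC2, ECR (container images), and Lambda scanning.
inspector.enable(resourceTypes=["EC2", "ECR", "LAMBDA"])

# List a few findings once scanning has produced any.
for finding in inspector.list_findings(maxResults=10).get("findings", []):
    print(finding["severity"], finding["title"])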
18. Labs AWS Security Hub: Hey, guys, welcome back
to the next video. In this video,
we're going to talk about AWS security hub. Let's first talk about what's the use of AWS security hub? Security hub provides a comprehensive view
on security alerts and compliance checks across AWS accounts. It aggregates, organizes, and prioritizes security findings from various AWS services and third-party tools. Now, as you can see over
here, an organization. I know that we have not
spoken about organization. Organization is a
service in AWS. Let me show you that quickly. So when you type Organizations, you can see AWS Organizations over here. You open the
organization over here. So by default, you will
not be having a part of any organization
because you would have created an
individual account. But you can actually
create organization. You will have an
option over there saying that create organization. It's completely free. You can have organization created just like
how I have created. Just create organization
there's not much many questions
asked over there. So you just have to say yes and then create
the organization. Once you've created
the organization, you should be I'm sorry
about that background noise. Um, you should be able
to see your account name over here and you
will be able to see the user information,
the root information. Now, what is an organization? Let's talk about it quickly. An organization is nothing
but when you are in a enterprise services or enterprise company or even a
business oriented company, a big company, you may have multiple organizations
within your company. Okay. Each organization, like Finance will have
their own AWS account. Healthcare will have
their own AWS account. Media. If there's a
section like that, you're going to have
separate AWS account. Okay. Now, they're going to host Dev production and
testing environment for their media
related application, and you're going
to have media.examplebank.com or something like that. You can have finance.examplebank.com, or you can have healthcare.examplebank.com. So likewise, you can
have your own website. And each of the application will be coming under one
single organization, which is your example Dot Bank. That's the biggest or the high level of your
organization chart, and that's going
to be like that. So when you have this
kind of a feature, when you have this kind of
a feature organization, you need an hub of security in which it collects data from
different organizations, which means that including all services part of
your organization. For example, I am having
um, this EC2 instance, my EC2 instance, and I have services like WAF and other stuff, right? So I have all these services configured in my current setting over here, which is beyondcloud.ai; this becomes one account. Okay. You can add another account
with this organization. By enrolling for the
same organization, you can actually invite people
to this organization and then have another account for that particular branch
of your company. It could be a branch, it could be outlet, it could be anything. So you can add them as
part of the organization. So you will have two AWS accounts, but you will be able to see the billing for the whole organization: the complete organization gets one consolidated bill that includes all the accounts within that particular organization. Now, what does AWS Security Hub do? It collects all the security
related compliance issues. It collects all the view
on the security alerts, and it gives you a comprehensive
view on one dashboard, which gives you a comprehensive view of all the security issues across accounts. So in this way, you
can actually, um, investigate the problem from
one single, you know, view. So for example, you're managing a multi account AWS environment, and you need a
centralized dashboard for viewing security alerts
and compliance data. The AWS security hub
aggregates alerts from services like Amazon GuardDuty, Inspector, and Firewall Manager, which we will talk about later. So it gathers all
this information and it will actually
show it in one place. So what would I want
you to remember for the examination is that when you hear the words security and hub together, a hub is a place that has an integrated view of a lot of things. That's why we call it a hub. So when you hear this word, it needs to remind you of this architecture, which says that you're going to
collect information, the security related
information, the compliance
related information, the alerts of security
reports of security. You're going to get
all those things in one place across
your organization. So that is something which you should really think about
when you hear this name. Now, let's talk
about the features now, the centralized dashboard, which actually views alerts of multiple accounts and
regions in one place. Compliance checks: it automatically assesses compliance against industry standards such as CIS and PCI DSS. It also integrates with other AWS services like GuardDuty, uh, Inspector, Macie, and other third-party tools. So we've only seen, I mean, Inspector so far; the videos will be aligned accordingly,
so don't worry about it. So we will be working on that. So the next one is the
key benefits over here. The key benefits are
it is unified view. As I told you, it has
the consolidation of security alerts across services
and across the organization. And it also has a reporting feature, where it looks at compliance and shows you, you know, how well you comply with AWS environment and industry standards. So these are the things it's going to look for in Security Hub. Let's just have a view of
the security hub over here. So now let's close
the organization. Let's go for security. Hub, and then you will get the security hub
here as a service, and then just click on that. You have to enable it; there is a 30-day free trial for Security Hub. So just go to Security Hub and your activation begins. Security Hub over here kind of takes care of everything else, like automated security checks and a consolidated report from GuardDuty, Inspector, Macie, additional services, and integrated partner solutions. It's going to take it from, I'm sorry, all the accounts in the organization, and it's going to show it to you in this. For this, you need to
actually go to Security Hub, and then here you have the AWS configuration,
which we'll talk later. And then we have to enable
the security standards over here where you are expecting that your application
should follow, um, standards like the AWS Foundational Security Best Practices and the CIS AWS Foundations Benchmark. So you can see that
we have chosen the basic stuff by default, so you can also add in more
standards over here to check whether your, um, I'm sorry, whether your organization meets the security standards when creating all these services and stuff, and then just enable Security Hub here, which is actually going to show you those items over here. Now the pricing: after the 30-day free trial, for the first 100,000, um, security checks you're going to be charged about $0.001 per check. So this is pretty cheap
when you compare it, so you can see that it's
going to be like that. Um, so that's the billing of it, and pretty much I wanted
to show you this around and show you the interaction in this particular action item. We're going to see
other security services as well, at a later
point of time. Thank you again
for your patience and time. I'll see
you in the next one.
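If you prefer to script it, here's a minimal boto3 sketch of enabling Security Hub with the default standards and pulling a few aggregated findings. The region is assumed, and the fields in the print are just common ones from the finding format.

import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")  # assumed region

# Enable Security Hub plus the default standards (AWS Foundational Best Practices, CIS).
securityhub.enable_security_hub(EnableDefaultStandards=True)

# Findings aggregated from GuardDuty, Inspector, Macie, etc. show up here over time.
for f in securityhub.get_findings(MaxResults=5)["Findings"]:
    print(f.get("ProductName", f["ProductArn"]), f["Severity"]["Label"], f["Title"])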
19. Labs AWS Shield: Hey, guys. Welcome back
to the next video. In this video, we are
going to talk about one of the important services on AWS when it comes to security. We are going to talk about AWS Shield. Now, what is Shield? We will talk about the architecture later, but what is it protecting us from? It's protecting us from DoS attacks, denial of service attacks, and then DDoS attacks, which
you'll see in a moment. So, I'm not sure if you heard, but last month, which is, I guess, August 2024, I was, uh, hearing about a piece of news, saying that, you know, during this Trump presidency campaign, Trump was giving a talk on Twitter with Elon Musk. Okay? So this was big news for security folks, and a lot of people actually heard about it. Uh, I'm not supporting any political parties or anything like that; I just bring this up because it was a real, live situation where Trump was giving, um, an interview with Elon Musk. And what happened is that the interview was supposed to start at a specific time, but it started 45 minutes late. The relay of that particular live, um, Twitter stream actually happened 45 minutes late. Twitter, currently known as X, right? X.com, that started 45 minutes late. Now, when it was enquired about, we got to know that there was a DDoS attack. Now, what is a DDoS attack? That's what we're
going to talk about. Let's talk about the DoS attack first. Let's see how a DoS attack works. A DoS attack works very simply; it's called a denial of service. As a hacker, you are going to be accessing the website so many times that finally it's going to give up and say, you know, I'm not going to serve you anymore because currently all my threads are full. So this is the intention of a DoS attack. A DoS attack is very straightforward: you keep attacking the server by accessing it so many times that the server runs out of cache, okay, and eventually runs out of threads to serve your requests. Okay? So then what is a DDoS attack? It's a distributed way of attacking, the same thing as a DoS attack. So distributed way,
meaning you use Internet of things,
your laptop computer, desktop, and then
your mobile phones, any form of Internet
facing devices. And then you continuously hit the service, okay, and eventually the server is going to give up, even though autoscaling is enabled and all those
things are there. But do remember for
autoscaling to scale up, you should be actually
paying higher for your Amazon web services
or any cloud provider when you scale, so it means a revenue loss for your system, because people keep, you know, accessing it. It looks like hundreds and thousands and millions of people are accessing your website, but that wouldn't actually be the case; you would be the victim of an attack. Okay. So now, the process
is very simple. Access the website
through your DNS, go through Route 53, if it's hosted through AWS, and then go to the
load balancer, it could be
application, network, whatever it could be, and
then go to the server. Get that content
from the server. Now server could have web app
and then database service, it could go to messaging services as well and
then bring the data up. So every time it
brings the data, the attacker attacks again
and ask you to get more data. So this way, a
distributed denial of service happens, okay? So this example, what I told you about Twitter or
X is the real thing. And if it can happen to such a big website, I mean, the companies which we work for are not immune to it. If it hasn't happened, it's not necessarily because you have enabled very good security; if it hasn't happened to you so far, it may just mean that no one has actually tried to attack your website. So that's what it means, okay? So now, how do you protect your website from such an attack? How do you make sure that you at least stand a chance against such attacks? So let's talk about AWS Shield. Now, AWS Shield gives you two offerings over here: Shield Standard, which is enabled by default and which is free, and then there is Shield Advanced. Now, what do you mean
by Shield Standard? Shield Standard provides you, um, you know, a common form of shield which protects even large organizations from large-scale DDoS attacks. So why is it more or less free? You have to think about that. Amazon wants to protect
its service, okay? So even when we talk about
penetration testing, right? In penetration testing, you will never get approval for a DDoS attack for testing purposes. Okay? Because it's not just your content they are trying to protect; it's about all their customers. If a site goes down, not just your application gets affected, but every application hosted on that site is affected, and then they have to switch to DR, which is, again, an additional cost for them. So do remember that they wanted to enable this protection as a default because they hate DDoS attacks, okay? So that's the reason it
is there for free, okay? And there is this
advanced version of this, which is a paid version. And in this version, you will get enhanced
protection, real-time attack visibility, and financial protection against large DDoS-related costs. Okay? So these are
something which is provided as part
of the advanced, you know, option, which is
available out there for you. I'll show you that on the
hands on session in some time. But to remember that
this is a protection which comes by default and
you cannot disable it, because no one wants DDoS attacks on their platform, so that's something
you should remember. Let's talk about some
of its key benefits. Key benefits is
automatic protection. Now, what do you mean by
automatic protection? The standard shield here
is enabled by default, so you don't have to have the option of disabling
it or enabling it. Second one is advanced
threat protection where Shield Advance offers
a detailed report on rapid response during
large scale attack on your system or your website
or your application. Okay, to just have
a look at this, you can go to WAF and Shield; just type WAF over here and you will get WAF and Shield. Under WAF and Shield, you will have WAF as the first one, and the second one is going to be Shield. Now, when you get started with Shield, as I told you, this is an automatic application DDoS protection layer, and that's, you know, enabled by default. Now, what is this price
list? $3,000 a month. That's for Shield Advanced. So if you want to go
for Shield Advanced, you can actually go for that. But, I wouldn't say that there is anything special
in there unless and until you want to have
some advanced feature like looking at the attacking
report and stuff like that. And, um, until then, you know, there's no point
of getting there. So you can see that
these are nothing but your AWS infrastructures, your AWS regions and
their availability zones, which is under attack. So you can see that
the red indicates that the more attack and you
can see the more attack on the UAE side of things
and you can see the I think that's the
New York side of things. So there you can see that there's a huge
attack and there's a huge amount of activities directed and prevented
by AWS Shield. So you can see that
the last two weeks' summary: the largest packet attack was 221 Mpps, and you can see the largest bit rate transferred was 636 Gbps. And then the most common vector was SYN flood, which is basically a type of attack; threat level normal, and the number of attacks is 66,540. So it was able to detect this many attacks which
is coming in, uh, and this is basically
constantly accessing your sites and application
which is hosted on AWS. And it was able to detect these attacks and
stop the attacks. Now, you can see that
in my account there are no events, because I mostly use it internally for OpenShift and other stuff. But anyway, that's fine. So you can compare the Shield tiers. Right now, we have the Standard tier, which is enabled by
default and it's free. You can see the network
flow monitoring, you can see the
standard protection for the underlying AWS services. But then if you want layer 7 traffic monitoring (we will talk about layer 7 at a later point of time; I'll just give you a quick introduction to it), plus layer 3 and 4 mitigation and anomaly detection, all those other things come with 24/7 support from the Shield response team, along with cost protection from scaling due to DDoS attacks. This is a beautiful thing. If you think that
your application or your website is going through a frequent attack and
due to which, you know, scaling out happens, you can actually pay this $3,000 a month and save the money where it scales up; you kind of have this protection saying that you won't be charged for the scaling up of the service. And then AWS Firewall Manager at no additional cost, and then, sorry, WAF requests for protected resources with no additional cost. Likewise, you have some
advantages over here going onto the advanced one. But it completely depends
on your situation, it completely is up to your choice and
your customer's choice, how you would like to go. But it is your responsibility to present all these things, all these advantages
to your customer, saying that um, you know, these are some of the advantages
why you would go for, you know, monthly
payment service. If that's something which the customer requires,
then we will go for it. You can also see the global threat dashboard over here; it basically updates every day, and you can see the number of, um, frequent attacks, which just keeps increasing. You can check for the last three days, the last day, and so on. So it gives you the global threat view over here, which shows you what's happening worldwide. You can also check events which
is related to your stuff. You can also see some
protected resources over here. You don't have anything
as a protected resource yet, and then you can see the overview of things. Subscribe to Shield Advanced to actually get the benefits it protects you with; this part is incomplete here, and you can start there. We will talk about
this at a later point of time in the professional courses. But here, we're not going to worry much about it. Here we are just going to be understanding, seeing, and gathering experience of how this looks, how each service looks, and why there are just so many services. Now, when it comes to Shield, it very clearly says that
there is no acronym here, there's no short form; it just says AWS Shield. And when you hear the words AWS Shield, it means that it's shielding you from attacks. This is what you should remember: it's shielding you from attacks. What attacks? DDoS attacks. Any kind of constant attack on your website, it is going to shield you from, guys. This is what you are supposed to remember when they ask you questions about Shield. I will give you a cheat sheet, so don't worry about it, which has a 20-second introduction to each of these items, and you can just view that cheat sheet and then
you can go about it. Thank you again
for your patience and time. I'll see you
on the next video.
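As a side note, Shield Standard needs no setup at all, but if you ever do subscribe to Shield Advanced you can query it from code. Here's a hedged boto3 sketch; the region and time window are assumptions, and list_attacks only returns data when an Advanced subscription is active.

import boto3
from datetime import datetime, timedelta, timezone

shield = boto3.client("shield", region_name="us-east-1")  # Shield APIs are served from us-east-1

state = shield.get_subscription_state()["SubscriptionState"]
print("Shield Advanced subscription:", state)

if state == "ACTIVE":
    now = datetime.now(timezone.utc)
    attacks = shield.list_attacks(
        StartTime={"FromInclusive": now - timedelta(days=14)},
        EndTime={"ToExclusive": now},
    )
    for a in attacks["AttackSummaries"]:
        print(a["ResourceArn"], a["StartTime"])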
20. Labs Certificates Overview: Hey, guys, welcome back
for the next video. And this video, we're
just going to see a small architecture in terms of certificate
introduction. Um, I just want to give you the minimalistic
approach towards it, because we're not going
to look at, like, TLS exposure on our
hands on session. We are going to do
it, but then not to extend where we're gonna configure
applications on TLS, because that's not part of
your certification program. But even though I'll try to
do maximum labs on this, I'll show you
maximum items, okay? That's a promise from my end. So here is the
certification introduction. So here, there are these three. So you have to separate these
three as separate, okay? So these are three different
concepts over here. Try to align it and make it
simplified as much as I can. There are three types
of certificates here. So you have demo certificate, self signed certificate,
third party sign certificate. Now, these three types of certificates will act very differently in your
browser, okay? Let me tell you that firstly. So if I browse a demo certificate
or Internet Explorer, Internet Explorer will
just directly tell me that this site is not trustable and demo certificates
are not allowed. You can browse the
demo certificate on Chrome or any other browser, but it will show a big warning. The same thing goes for a self-signed certificate. A self-signed certificate is something for which you will get a warning on Edge, on Internet Explorer, on Chrome, and on every browser, saying that this certificate seems to be self-signed, and do you still want to proceed? You have to click down there and then proceed to the website. So you have to manually consent and proceed to the website. Why? Because it is self-signed. So a self-signed
certificate is nothing but me saying that I'm a good guy. Nobody would trust that, because I cannot just declare myself a good guy; if someone else says that I'm a good guy, then obviously they can trust me, right? But if I say it about myself, no one can trust it. That's the thing about a self-signed certificate. A self-signed certificate is free; anyone can create one, and anyone can put it on their website. The second, I'm sorry, the third one is your third-party signed certificate. Now, what do you mean by
third party sign certificate? The thing example
I told you, right? Someone is telling
that I am a good guy. So you get someone's stamping
that I'm a good guy, right? So that's what it means. So it's something like
a certificate from AWS. You see the certificate over here, you see this lock sign, right? So when you click on the lock sign, it says the connection is secure. Okay? If it's a self
signed certificate, it wouldn't say that, okay? Because there are a couple
three or four things which you'll see to say that
the connection is secure, and it will say that this
is a valid certificate. Okay? It says secure because
it's a valid certificate. If it's not a valid certificate, it wouldn't say the
connection is secure. So you can see that this site is valid, it has a valid certificate, issued by a trusted authority. Now, what do you mean by trusted authority, right? That's another question you're going to have. Now, if you look, there is issued to and issued by; sometimes it will be issued to and issued from. Okay, it doesn't matter, from or by: it's the party who has issued it. Now, who has issued it? It is Amazon RSA 2048 M03. Okay? This is the root certificate detail, or it could also be the intermediate certificate detail. An intermediate certificate chains up to a root certificate. So there is something
called a chain of certificate.
Now you see that? This is your personal certificate, intermediate certificate, and root certificate. So it is currently chained, and if I close this, everything comes under it. What this means is that if I go to the architecture, you have the same thing over here: personal certificate, intermediate certificate, root certificate. Now, why is it changed, sorry, why is it chained, and what is the use of a chained certificate? Now, in terms of a self-
signed certificate, most of the time you will
see only the personal certificate. There will be only one certificate, and that is self-signed. Okay? Sometimes you will have that personal certificate chained, and in that situation it is still a self-signed certificate. Why? Because the browser doesn't trust that root certificate. Now, let's try to understand
what is the concept, why we are building
it like this. Okay? So this is a perfect example.
Why it is like this. Why the root
certificate comprises of intermediate certificate and it comprises of
personal certificate. When you create any
certificate, right, it is created as a self
signed certificate only. Okay. Um, that's
the whole reason. So when you create a certificate, you enter all the details about the certificate, okay? That's what you do. I'm so sorry, I'm just fumbling with this. So when you create a certificate, you enter the details, and what you will get as output over here would be, um, sorry, just a second. Um, yeah, it could be a JKS file, a .key file, a KDB file, and so on; there are so many extensions out there. So you will create this kind of file. Then what you do is, you go to a certifying authority,
like CA, okay? And you sign that
certificate with the CA, okay? So when you get it signed by the CA, you're going to get this format: your CA will have a root certificate and an intermediate certificate, and they will bundle these into one single file containing all three certificates, and you put it on the server. Okay? Now, you're also going to have a private key over here. You should never share this with anyone; the private key should not be shared even with the CA. Do remember that the private key should be kept with you. So the file which you get as output when creating the certificate, that is the private key. So what we do is, we use the private key and we will generate
something called CSR. Okay. Um, so there's multiple
other formats there, but let's just stick
with CSR right now. CSR is called certificate
signing request. So you will send this
CSR file to the CA, not the private key itself. If you send private
key to the CA, then they will not issue it. They will say, recreate the certificate and
send me the CSR, because no one sends
a private key, okay? So when you recreate the certificate, another randomly generated number comes in. That relates to what is called the key size. Now, if you look at this website itself, it says 2048, so this is the size of the key. Normally, you will find the key size in this detail; somewhere here you will find the size of the key, yeah, here, 2048. So that's basically the size of your key. The key size relates to the strength of the encryption, based on that randomly generated number. And from that, you will see that this is the public certificate, so you cannot do anything secret with it on its own. So that's where
another terminology comes public certificate, a private certificate, right? A public certificate is like I would normally
say as an example, you have two keys to
your house, okay? One key is basically for locking, and the other key is for unlocking. So when a thief sees you on the road and he knows your house, he says, give me a house key. Which key would you give? Take a second and think about it. You wouldn't give the private key, which is for unlocking, okay? The private key is the unlocking key, and the public key is the locking key, okay? So you can lock any
amount of content or any amount of locks
using the locking key. But the only key which can
unlock is unlocking key, but he doesn't
know that you have this locking key
and unlocking key. So you will give
him locking key. So all you can do with the
locking key is lock stuff. But who can unlock it? The private key, which is sitting on your server. So the stuff the locking key locks, right, can only be unlocked by the unlocking key, because the unlocking key knows the logic of how the locking key was made; after all, who made the locking key? It was generated together with the unlocking key. So through this, you'll actually be encrypting everything
from the customer side. And what happens once the
data gets transferred, data gets transferred in encrypted format that you
cannot actually read. It doesn't make sense,
even if you read it. Okay? So once the
data is received, unlocking process starts where the private key will
unlock the stuff. Thank you guys for the
time and patience. I'll see you on
the next video now that you understood
the certificate, so we will go ahead and look
at the certificate manager.
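To make the private key and CSR idea concrete, here is a small Python sketch using the cryptography library. The domain and organization names are placeholders; the point is simply that the private key stays with you and only the CSR goes to the CA.

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 2048-bit RSA key - this is the "key size" you see in the certificate details.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The CSR carries your details and public key, never the private key itself.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),    # placeholder domain
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Org"),  # placeholder org
    ]))
    .sign(private_key, hashes.SHA256())
)

with open("server.key", "wb") as f:   # keep this file secret
    f.write(private_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("server.csr", "wb") as f:   # this is what you send to the CA
    f.write(csr.public_bytes(serialization.Encoding.PEM))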
21. Labs Rest Data Encryption: Hi, guys. Welcome back
to the next video. In this video, we
are going to talk about data encryption at rest. Now, where are we going to implement encryption at rest? We have already seen how encryption works in terms of HTTPS for data in transit; now we are going to talk about the at-rest side, like how we can encrypt the stored data over here. We have three different types of storage over here: S3, EBS, and RDS. So we're going to see how to encrypt each of them when we create them. Let's go to our console and
let's open all these things. I've kept it open for you
guys because my Internet, as you know, it's a
bit slow. All right. Now I'm on S3, and when I click on Create Bucket over here, one of the options it is going to ask me for is the bucket name, what you'll use it for, the setting for choosing your bucket prefix, and then some options about whether you want public access, whether bucket versioning should be enabled for your bucket, and then the encryption. By default, the encryption type is, sorry, server-side encryption with Amazon S3 managed keys. You can also select server-side encryption with AWS KMS keys, and you can select a customer managed key which you can create in KMS. So you can use that key for encrypting the S3 bucket. But I would recommend
using the default one. And then here is another option where you can enable dual-layer server-side encryption with AWS Key Management Service keys. Now, a dual-layer key incurs extra cost over here; the pricing tab can tell you about that. So we'll go with the default one, which is server-side encryption, and the bucket key option is going to be enabled as well. Now this is how
you set up encryption for S3 when you click Create Bucket. I've not given any details here, and I'm not planning to create a bucket at this moment, so I'm just going to skip that part and proceed with volumes; a scripted version of this bucket setup is sketched below.
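Here is that same bucket-level default encryption as a boto3 sketch, assuming a placeholder bucket name and SSE-S3 (AWS managed keys); swap in aws:kms plus a key ID if you want a customer managed key instead.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "my-example-encrypted-bucket-12345"   # placeholder; bucket names are globally unique

s3.create_bucket(Bucket=bucket)

# Default server-side encryption for every new object, with the bucket key enabled.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"},
            "BucketKeyEnabled": True,
        }]
    },
)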
Now, volumes are EBS, Elastic Block Storage. What is Elastic Block Storage? It's the default storage which gets created with every EC2 instance. You see three volumes over here, and you will see three
instances over here. So each of these
instances will have their volumes hosted on
by default on this one. So I'm copying the
instance ID over here. I'm going to volumes, and I'm just like selecting that and I will see the
instance ID over here. You can see the
attached resource. So this is the one.
So you will see that instance over there is attached to this
particular volume. So EBS drives are kept apart from, you know, the instances: a volume gets attached when you start the instance and detached when your instance is shut down. The maintenance of this EBS volume is charged, okay? So if you're not using your instance, fine, you're not going to be charged for the instance which you're not using, but you are still going to be charged for the volumes you keep stored on the system. But remember, EBS is less expensive than S3 itself, because S3 is a managed service, software as a service, which is why it is a little more expensive, and volumes cannot be used unless they are attached to your own instance or some other instance. They cannot be reached from the Internet, and you cannot do anything with a volume on its own. That's the difference, if you're just wondering why S3 costs more and EBS costs less. Now, for the existing disk, you can actually click
on that and you can actually see the details
of the existing disk. You can also see the
encryption here. So it says not encrypted, which means that anyone
can access the data by mounting it to any instance and they can view the
data which is in it. That's not a good thing, is it? So just go to Volumes and then Create Volume. When you create a volume, you give all the details over here, and there's an option here to encrypt this volume. You can choose the default key, the aws/ebs key, and then actually create the volume. In this way, the volume will be created and it will be encrypted as well
using a key file. Now, if you want to
manage all these keys, you can actually go to
KMS to manage this key. But these are default keys, normally created not by yourself but by AWS, to keep the defaults going; a scripted version of creating an encrypted volume is sketched below. Now, that's it for volumes.
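And the encrypted-volume step as a boto3 sketch; the availability zone and size are placeholder assumptions, and leaving out KmsKeyId means the default aws/ebs key is used, just like in the console.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # pick the AZ of the instance you will attach it to
    Size=8,                          # GiB
    VolumeType="gp3",
    Encrypted=True,                  # default aws/ebs KMS key when KmsKeyId is omitted
)
print(volume["VolumeId"], volume["Encrypted"])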
In terms of database, or RDS: when you're creating an RDS instance, it's a managed instance. I just clicked Create Database; it takes a bit of time, so I've kept that open. Under Create Database, you have the standard
create and then you select the database and you do some
modifications over here, and then you select the
encryption key over here. Here there is an encryption option and, um, Secrets Manager over here for the credentials, but this is not the encryption we were talking about; this is about how you are going to encrypt the database credentials and with what key. If you want the actual
encryption of data, you need to come down, go to
additional configuration. And then on the bottom, you're going to see
encryption over here. By default, it is enabling the encryption
over here, you see that, then it is using the default
AWS RDS encryption KMS key. Then you can see the
account ID and the key, and then you just
click on Create. It's going to create RDS
with encryption on it. So this is the few things that you should understand
before proceeding with this. They may not ask about it in this much detail in the exam, but I'm giving you the hands-on because you should know this; it's going to be very useful when you go for the next certification or if you're really going to work on AWS. So it comes with all the kinds of hands-on I want to teach you. Thank you again for your patience and time, I will see you in the next video.
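For completeness, here is the RDS side as a hedged boto3 sketch. Every identifier and the password are placeholders, and StorageEncrypted=True is the same toggle we saw under Additional configuration.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="demo-encrypted-db",   # placeholder name
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="ChangeMe12345!",        # placeholder; use Secrets Manager in real setups
    StorageEncrypted=True,                      # default aws/rds KMS key when KmsKeyId is omitted
)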
22. Labs Transit Data Encryption (ACM): Hey guys, welcome back to the next video. In this video, we're going to see some hands-on with the Certificate Manager. The Certificate Manager is used for encrypting your data in transit. So you're going to change all your communication to HTTPS, both while connecting to your S3 as well as to your RDS. While connecting to S3 or RDS, you'll actually be using a channel, either an ELB, Elastic Load Balancer, or CloudFront, to do that. So now, let's go
ahead and configure this as part of this,
you know, demo session. What I'm going to
do is I'm going to access the certificate manager. So that's a different
service which is out there. So middle click that, and you can see the certificate
manager currently has no certificates in your
account. So pretty much that. And here you have the option of creating
a private certificate. Now, you can use this option of creating a
private certificate wherein which you can actually create a private
certificate over here. If you just click on it, you can actually select what kind of a private
certificate you need. Now, what is a
private certificate? A private certificate is
nothing but this one, the root certificate
which is over here. So you can create your own certifying authority yourself, and you can actually enter your organization name and all those details, a common name for your, you know, website, and everything else. Then you can select the key size, click on I Acknowledge, and then Create CA. Once your CA is created, you can go back to the Certificate Manager and you can request a
certificate over here. So there will be
option over here, which is disabled right now, but you can actually select that and create a
private certificate. Now, no private CA
available for issuance, which means that
currently there is no private certificate
available, so that's why this
option is disabled. Once you have done
that, you can follow this process and you can
create a certificate. Now, pretty much creating a public certificate has the same process as creating
a private certificate, but there will be
one option extra. That one option is all about selecting the private CA which you have just created over here. That's the extra option, but otherwise the same process can be seen over here on the public certificate as well. A public certificate means requesting a public SSL/TLS certificate issued by Amazon; public certificates are trusted by browsers and operating systems. Now, if you are planning to create applications which are
browsed over the Internet, it's always recommended to
create a public certificate. But if you are planning to create a certificate just for browsing internally, inside your AWS environment, you can go for a private certificate. So I would recommend going for a public certificate in this case. One of the important steps
which we need to do over here is giving the fully
qualified domain name. Now, what's the problem with giving a fully
qualified domain name? You need to have a domain
name register in your name. Let's just say, for example, you want to register
a domain name, you want to see
how much the price is before you go
ahead and register. So the domain registration
is all done in Route 53. Select Route 53 over here; you can just middle-click on Route 53. I have opened Route 53 already over here. When you are in Route 53, by default you'll be redirected to Registered Domains. You'll find that there are no registered domains available right now. I have my website
called beyondcloud.ai, and that is registered
with another provider, you know, which is
not AWS, of course. Now, you will see
that currently you have two options over here: either you can transfer a single domain or multiple domains from another domain service provider, or you can actually register a domain over here. Now, my website which I've registered is beyondcloud.ai. When you search for this website to see if it's available, it says the TLD is not supported. What do you mean by that? It means that .ai is not supported by AWS. So you can see that the pricing of the domain varies by the top-level domain, the TLD, such as .com or .org. What is available is .mobi, .net, .tv, .com, and so on. But beyondcloud is not available on .com; you only get names close to beyondcloud.com, or something like that, but not the exact one. Okay? So now the
price is not much, as you can see over
here, just 14 USD. Now you can select this
and you can actually go for the payment and
proceed to check out. So this is, I think it is for
one year validity, right? So we proceed to check out. So auto renew of this domain. So this is valid
for a year, yes. But you can also change
it to ten years also, um, so that's completely up to you how you want to do that. Okay. So now, once that is done, you can click on
next and then you can do the payment options and then you can put your contact information, review and submit. So this way, you can register domain and have that domain
registered in your name. Once this domain is registered, you can copy the domain name and come in over here on
the public certificate. Now here is where you're
going to give the domain name. Now, you can also add another name over here, a canonical name, something like www.beyondcloud.com, so this certificate will be used not just with the first one but also with this one. You can also add another one and put, sorry, I'm sorry again, *.beyondcloud.com. Now, what happens if you do that? Very simple: um, you're going to get SAN names. SAN names are nothing but subject alternative names. What do you mean by subject alternative names? Previously, when you wanted to add another domain name to an existing website, for example mail.beyondcloud.com or chat.beyondcloud.com or news.beyondcloud.com, we used to create different certificates for each domain name, but not anymore. What we call a SAN name came into existence and became very popular, and what we normally do is put in SAN names so that you have multiple other ways of browsing your, um, domain. So when I have this *.beyondcloud.com, I can actually remove the other one because I don't need it; *.beyondcloud.com covers it as well. So, um, you'll see
that the validation here is through, um, DNS validation. So what you will be doing, using this option, is authorizing that this particular domain is yours, and the way you authorize it is by entering a CNAME record in your DNS zone. So alongside whatever A record you have in the DNS, you'll be adding a CNAME. Now, for doing that, you need to have access where the domain is registered. I am with another provider, so I can go to that provider, where I have a login interface where I can add the CNAME and authorize it. So that's the way
to authorize it. Another way of authorizing
is, uh, email validation. If you don't have permission, or cannot obtain permission, to modify the DNS records, you can use email and validate the domain that way. So this is basically the domain name over here, and you can use email validation over here. So here, if you go this way, you need to update the CNAME, and if you go that way, email validation is required. Either option we cannot do right now because we don't
own that domain name. And then here is the key algorithm; stick with RSA 2048 and just click on Request (a scripted version of this request is sketched below).
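If you wanted to do that request from code, this is roughly what it looks like in boto3. The domain names are placeholders for a domain you actually control, and describe_certificate is how you read back the CNAME record ACM wants you to create.

import boto3

acm = boto3.client("acm", region_name="us-east-1")

resp = acm.request_certificate(
    DomainName="example.com",                   # placeholder domain you own
    SubjectAlternativeNames=["*.example.com"],  # wildcard SAN
    ValidationMethod="DNS",
    KeyAlgorithm="RSA_2048",
)
print("Certificate ARN:", resp["CertificateArn"])

# The CNAME name/value to add at your DNS provider (may take a moment to appear).
details = acm.describe_certificate(CertificateArn=resp["CertificateArn"])
for option in details["Certificate"]["DomainValidationOptions"]:
    record = option.get("ResourceRecord", {})
    print(record.get("Name"), record.get("Value"))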
Now, this has created a public certificate for us, but it is pending validation. Why is it pending validation? Because, um, you need to update the CNAME. Okay? So you need
to go into your um, service provider or if
not, your Route 53. So, um, here itself there's an option over here, Create records in Route 53. Basically, it will create the DNS record in Route 53 if you have registered that domain with Amazon: I would come over here, select the domain, and click Create Records, and automatically it will create the records without your intervention, so you don't really have to do anything. But if you are registered with some other vendor, you need to go there, go to your zone records, and add the CNAME with the matching name and value, and that will then show up over here as validated. Okay? So you can see that: CNAME name, then CNAME value. So this should be the CNAME and the value of the CNAME. This way, um, you kind of authenticate to Amazon that this
domain belongs to you, and what Amazon is going to, you know, uh, create the certificate for
is for your domain. Okay? And once you've done that, once you have completed it, Um, then you have to implement this
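By the way, if you ever want to do this same request from code instead of the console, here is a minimal sketch using Python and boto3, just as an illustration; the domain names are the demo placeholders from this video, so replace them with your own:

    import boto3

    acm = boto3.client("acm", region_name="us-east-1")

    # Request a public certificate with a SAN entry and DNS validation.
    # RSA 2048 is the default key algorithm, so we don't need to specify it.
    response = acm.request_certificate(
        DomainName="belowcloud.com",                   # demo placeholder domain
        SubjectAlternativeNames=["*.belowcloud.com"],  # wildcard SAN covers the subdomains
        ValidationMethod="DNS",
    )
    cert_arn = response["CertificateArn"]

    # The CNAME you must add to your DNS zone shows up in the validation options
    # (it may take a moment after the request before the record is populated).
    details = acm.describe_certificate(CertificateArn=cert_arn)
    for option in details["Certificate"]["DomainValidationOptions"]:
        record = option.get("ResourceRecord", {})
        print(option["DomainName"], record.get("Name"), record.get("Value"))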
Once you've done that and the certificate is validated, you then have to implement it somewhere, right? You have to implement this certificate, the SSL part, somewhere. So let's go to the load balancer; that's the place where we're going to do it. Just type Load Balancer in the search. It's a feature, not a service, a feature under EC2, so you can open it directly; it's going to be under EC2, under Load Balancing over here. Now, create your load balancer. The Application Load Balancer is the simpler choice for this. There's an even simpler one hiding over here, the Classic Load Balancer, but we don't use that much because it's previous generation and pretty old. The Application Load Balancer works perfectly, and the Network Load Balancer is more niche; it's for ultra-high-performance use cases. So click on Application Load Balancer. You can also see a diagram of how the application load balancer works; we'll talk about application load balancers at a later point of time. Here I'm just doing this for belowcloud, so I'm going to use that as the name; belowcloud is the name of this one. I want it to be internet-facing because we have created our public certificate, and here is the VPC. You have to select at least two Availability Zones, as you can see, and one subnet per zone; by selecting the AZ you automatically get a subnet per zone. Why two? Because you need load-balancing capability on your load balancer, so you have to select at least two AZs. Now, in the security group, we're going to go with the default SG and not modify it. In terms of the listener, or where it needs to route, you would select the target group over here. We don't have one because we haven't created anything, but we're here for SSL, so don't worry about where it's going to target. Change the listener from HTTP to HTTPS. HTTPS is the method of communication for us, so we're going to use HTTPS over here with port number 443.
When you select HTTPS, there's going to be a new option over here: the security policy for SSL, the SSL policy. Below that you choose where you're going to get the certificate from: ACM, which is your AWS Certificate Manager, or from IAM. You can also have a certificate loaded in IAM, Identity and Access Management, as an imported certificate. So if you want to upload a PEM-based configuration, you can paste the certificate body and the certificate chain there as well. I'm going to load it from ACM. ACM will have your certificate here, so when you refresh this, you will see it. Once you have successfully validated your certificate, that validated certificate will show up over here; select it and then click on Create load balancer. This way, your load balancer will be created using SSL, using HTTPS, and it will secure the communication using the public certificate we just created.
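If you later want to script that last step, this is roughly what attaching the ACM certificate to an HTTPS listener looks like with Python and boto3. The load balancer ARN and certificate ARN below are placeholders, and since we have no target group in this demo I'm using a fixed response as the default action, just as a sketch:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/belowcloud/1234567890abcdef",  # placeholder
        Protocol="HTTPS",
        Port=443,
        SslPolicy="ELBSecurityPolicy-2016-08",  # one of the predefined SSL security policies
        Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/abcd-1234"}],  # placeholder
        DefaultActions=[{
            "Type": "fixed-response",  # no target group in this demo, so just answer directly
            "FixedResponseConfig": {
                "StatusCode": "200",
                "ContentType": "text/plain",
                "MessageBody": "hello over HTTPS",
            },
        }],
    )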
Now, when you create a public certificate, you might ask me: where is my private certificate? What actually gets created is a key pair. The private key stays on the server side, in ACM, and is used for decryption, while what is exported and sent across to the browser is the public certificate. So we don't say we are creating a private certificate; we call it a public certificate, emphasizing the word public because it will be exposed to the public network. What we don't emphasize, but what happens in the background, is that a private key is created too, and that is kept on the server side for decryption. That's the reason the listener screen just says 'certificate', select the certificate which you have created, while when you create it, ACM calls it a public certificate. What they're trying to emphasize is that it's going to be served to the public, but it's still just the certificate you created. That's all they're emphasizing, nothing else.
Alright, you have now seen the hands-on, and I don't have anything else to show you today. I hope you had fun in this hands-on session. If you have any questions, please do leave them in the questions section; I'll be happy to help you. I'm just going to delete what I've created right now: I go to the Delete option over here, click on Delete, and it has disappeared. Perfect. Thank you again. I'll see you in the next video.
23. Labs AWS Audit Manager: Hey, guys. Welcome back
to the next video. In this video, we are going to look at Audit Manager. To get an idea about it, search for AWS Audit Manager and middle-click it, and you'll see Audit Manager open over here. It continuously audits your AWS usage to simplify how you assess risk and compliance. Do remember that this is a feature which you need to enable; the first 35,000 assessments are completely free, which means you can go ahead and click on Create a new assessment, or Set up AWS Audit Manager. You can set up an IAM service-linked role or permissions for this; you can skip that for now. You can also delegate this access to some other ID. These options are popping up because they are part of your organization setup; if you have not created an organization, you wouldn't see them. Then you can see AWS Config: you can allow Audit Manager to collect data from your AWS Config, that is, to collect AWS Config information to generate evidence for your Config rules. This matters if you have custom requirements for your company that the audit should capture; those custom requirements are tracked using AWS Config, and then they can be captured by AWS Audit Manager and made part of the audit. You can also collect data from Security Hub, which we have seen already: it collects security information from different sources across your accounts so that you have one console with all the security-related information, and Audit Manager can pull data from there as well.
Now, when you click on Complete setup, you can see that Audit Manager has been set up. The basic function of Audit Manager is to gather reports from the different frameworks and services we spoke about earlier; it collects these assessments, and you can go to the dashboard for each of these items and verify what has been done so far. You can also create an assessment over here: you just specify the name and the destination where the assessment report is going to be stored, so that you can keep those assessments. Then you select one of the standardized frameworks which are available and create an assessment from it. These are default templates, for example for the International Organization for Standardization (ISO). Likewise, you have assessments which go and capture all this information, most of it AWS information, saying whether your application meets the ISO standard, and while creating this assessment you can also point the report at an S3 bucket, which you can export at a later point of time to anyone you would like.
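Just so you know, the same things can be listed from code as well. Here is a small, non-authoritative sketch with Python and boto3 that simply lists the standard frameworks and your existing assessments; I'm using .get() lookups because the exact response shape may differ slightly by SDK version:

    import boto3

    audit = boto3.client("auditmanager", region_name="us-east-1")

    # List the prebuilt (standard) frameworks you could base an assessment on
    frameworks = audit.list_assessment_frameworks(frameworkType="Standard")
    for fw in frameworks.get("frameworkMetadataList", []):
        print("framework:", fw.get("name"))

    # List assessments that already exist in the account
    assessments = audit.list_assessments()
    for item in assessments.get("assessmentMetadata", []):
        print("assessment:", item.get("name"), "-", item.get("status"))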
Thank you again for your time and patience. This is how Audit Manager works, hands-on.
24. Labs AWS CloudWatch: Hey, guys, welcome back to the next video. In this video, we are going to talk about Amazon CloudWatch, hands-on. Let's go to the EC2 instance which you just created. If you go over here, you're going to see the running instances. When you click on it, it shows them with the filter Instance state = running, which means you'll only see those instances which are in the running state. If you want to see all the instances, just close or clear the filter; either way, you will see all the remaining instances you have in your region. So, what are we going to do? We are going to look at CloudWatch. By default, CloudWatch monitoring is built into your EC2 instance, so just click on the instance and it will be highlighted and selected. Under it, you will see the details; you can expand this and see the instance summary. We'll talk about this in detail at a later point of time, but for now we will check one of its features which is related to CloudWatch, which is Monitoring. As the name CloudWatch implies, it is the cloud watching your instances, your applications, all those things which you're doing as part of your Amazon Web Services. It's basically a monitoring system, so you're going to find it under Monitoring over here. What is it watching? The CloudWatch agent collects metrics; it's actually collecting the data. You can see more details over here by clicking on Manage detailed monitoring, where you can enable detailed monitoring. After you enable detailed monitoring for an instance, monitoring data will be available in 1-minute periods. That's the 'detail' it is talking about: currently, if you look at this information over here, it is received every 5 minutes, but if you want 1-minute granularity, you can enable this. Additional charges apply, though, so do remember that before making a change.
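Just as a side note, the same toggle exists in the API. Here's a tiny sketch in Python with boto3, assuming a placeholder instance ID; enabling it is what switches that instance to 1-minute metrics, and disabling it puts you back on the free 5-minute basic monitoring:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"   # placeholder instance ID

    # Turn on detailed (1-minute) monitoring -- remember this is chargeable
    ec2.monitor_instances(InstanceIds=[instance_id])

    # ...and this is how you would switch back to basic 5-minute monitoring
    ec2.unmonitor_instances(InstanceIds=[instance_id])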
When you click on that, you will be pointed to CloudWatch pricing; you can go to the price calculator, or you can simply see it from here. It talks about the paid tier in terms of logs: at the standard tier you pay roughly $0.50 per GB collected, and there is a lower rate for infrequent access, which means data you are not accessing frequently. Likewise, you have the other charges over here. Let's not worry about that at this moment, because it's not part of your certification, nor do we need to look as closely as every 1 minute; let's go with the default of every 5 minutes. You can see the CPU utilization, the network utilization in terms of input and output, packets in and packets out, and the CPU credit usage over here, which talks about the credit usage of the CPU, plus the CPU credit balance. This gives you fairly complete information. If you want some more information, you can go to CloudWatch itself: in the search bar type CloudWatch, you will see the service over here, just middle-click it and it will open in a new tab.
Now you can see that the dashboard is empty; there is no dashboard over here yet. What you can do is go for an automatic dashboard, for example the EC2 dashboard, and load it over here. What it's going to show you is all your existing EC2 instances, whatever is running; in my case it shows that two instances are running in EC2, so it shows those graphs. The other ones are shut down, so they're simply not shown as part of this. So you get more information here: previously, at the EC2 level, we were only able to find the CPU information, the network information, and the CPU credit information, but here you get average CPU utilization, disk read information, disk write information, network information, and you can also see the status check information over here. These are some of the items you will see. You can also customize this dashboard, for example with live data, and then add it as your default dashboard. You can also select some other automatic dashboard, for example for S3; I have a few S3 buckets, so I just switch to S3 over here. Likewise, you can create or use the automatic dashboards. You can also build a dashboard yourself: you're seeing this data here, right? You can click Add to dashboard over here, which goes to the dashboard and adds that widget to a dashboard in your CloudWatch.
is important to you. You can actually configure a dashboard by going over here, clicking on Dashboard
and create a dashboard. Now while creating a dashboard, it needs a name of your
dashboard, which I'm giving test. And then you can input
each item in here. So you can actually
create a datasocia which is using prometes
or S three bucket, or you can use CloudWatch
itself for you to do that. You can log in
metrics, logs, alarms. So if you go by this, you can add one by one
parameter over here and you can select what
items you want to add as part of this dashboard. So I'm selecting EC two. There are 317 metrics coming
in from EC two instance. So if you click on that,
you will see more of this is pre instance
metric over here with just 256 across all
instance you have. Okay? So if I say about pre instance metric,
let's look at it. Now, you see that there
are so many instances name over here with the
instance ID, right? So there's no specific
names over here, and you can see that
there is input packet. So what is important for you, you can select that instance ID. I'm copying the instance ID. I'm over here, I'm searching
for this instance name. So only those instance related
items coming over here. Now, selecting that
instance name, and I'm creating a wizard. Now you can see that pretty much all these items are coming
into one wizard over here, which may be a bit
confusing because it has a lot of items
inside one item. So you can also create
another dashboard. So you can select one
item as separately. So by clicking on the next
and doing the same thing over here and you can
select that instance name. And then you can select rather than selecting all the item, you can select one item.
And create a wizard. Now here, it only
talks about CPU, disk not CPU, disc
wide bytes over here. Likewise, you can create for each one and create
your own dashboard. So when you come to Dashboard, you will see the dashboard name. When you click on the dashboard, you will actually see
the performance of that particular, um,
you know, instance. So you can delete the
dashboard over here. And the dashboard is removed. So this is pretty much how
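For reference, a dashboard like that can also be created from code. Here's a rough sketch in Python with boto3 that builds a one-widget dashboard named 'test' for a placeholder instance ID; the widget layout keys follow the CloudWatch dashboard body JSON:

    import json
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    dashboard_body = {
        "widgets": [{
            "type": "metric",
            "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                # one metric per widget keeps the graph readable
                "metrics": [["AWS/EC2", "DiskWriteBytes", "InstanceId", "i-0123456789abcdef0"]],
                "period": 300,
                "stat": "Average",
                "region": "us-east-1",
                "title": "Disk write bytes",
            },
        }],
    }

    cloudwatch.put_dashboard(DashboardName="test", DashboardBody=json.dumps(dashboard_body))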
Next, let's look at alarms. You can also create an alarm; right now there are no alarms over here. There's also a billing alarm option, which alerts on billing. Since you don't have any alarms yet, you can click Create alarm. This is not actually part of your certification; I'm just showing you because I have experience with it. You can pick this EC2 instance, search by the server name, and take the CPU utilization, so CPUUtilization is the metric here; click Select metric. Now you can set the alarm for this particular metric. The graph over here is very much a live graph; it shows you a dynamic view of what you are setting. Here you're saying that you want this alarm to be triggered when the CPU utilization is greater than a value, which you give as a percentage, for example 80 percent, and you can see it set to 80 percent. Right now the utilization is very low, so if you instead say 20 percent, you can actually see some difference on the graph, though not much; it's around 0.1. Likewise, you set an alarm threshold and can see it in that graph itself, then click on Next. As for what you want to do when this alarm triggers, you can create an SNS topic over here; SNS is nothing but Simple Notification Service, and that is what will be triggered. I have already done it for another application, so you can ignore that one. You can also trigger a Lambda function, or trigger auto scaling by selecting the Auto Scaling group over here, or select an EC2 action, which could be a restart of the server, a shutdown of the server, or something like that. Click on Next, and then you'll be able to create this, but it does need an SNS topic. Then add a name and description, preview, and create the alarm. That's pretty much it; creating an alarm is straightforward.
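If you'd rather define that alarm in code, here's a minimal sketch with Python and boto3; the instance ID and SNS topic ARN are placeholders, and the threshold mirrors the 80 percent example from the console:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="test-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,                 # 5-minute basic monitoring period
        EvaluationPeriods=1,
        Threshold=80.0,             # trigger when average CPU goes above 80%
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:my-alerts"],        # placeholder SNS topic
    )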
These are the two things we wanted to cover in this video. Thank you again for watching. Now you know how to monitor your application with CloudWatch. There are other things which we will cover at a later point of time, so don't worry about them.
25. Labs AWS CloudTrail: Hey, guys, welcome back
to the next video. In this video, we are going to look at CloudTrail. For CloudTrail, we need to go over here and type in the service name. Typing just 'Trail'... okay, that's not finding it; 'CloudTrail', yeah, there you go. So middle-click it, and it should open in a new window, and within that window you should see CloudTrail. Perfect. Now, do remember that CloudTrail has pricing attached to it when you create your own trail. In the previous video, in the architecture, we found out that you can create your own trail, point it at a specific service, and track that service. Alternatively, for demo purposes, you have something called Event history. What we are on right now is the CloudTrail introduction page, where you can create your own trail; that's equivalent to going to the Dashboard and clicking Create trail. While creating a trail, you'll give a trail name, and you can enable it for all of my organization, which means you can enable it for all the accounts in your organization. Then you create a new S3 bucket; this S3 bucket will hold the logs of your CloudTrail, the audit logs. Then there is log file encryption, enabling encryption on these logs, and here we create a new KMS key for that encryption. And then here we can enable CloudWatch Logs as part of the trail; CloudWatch will look at the logs over here, but that's optional, and charges apply because it will be reading the logs. Then, when you go to the next screen, you choose the log events, which is very important; I'll show you that. I'm just naming the trail 'test' because I just want to get to the next page.
So here, what are we looking for? What audit logs are we looking for? You can capture management events, that is, operations performed on AWS resources, like creating or modifying them; data events, like the transfer of data within a resource; and Insights events, which are about unusual activity, errors, and behaviors of your services. All of that together will be a lot of data, do remember that. Once you capture management events, what do you want them to include? It shows the operation types, read and write, and lets you exclude AWS KMS events and exclude Amazon RDS Data API events, because those can generate a huge number of log entries. You can also ignore reads, because you may not want to log every read activity; you may want to log only write activity, where someone is writing something towards management changes, which could be very important. You can enable this here because there are no additional charges for management events on this trail, since it is your first copy of management events; even though there are no charges for that, you will still be charged for additional copies. So it's always recommended to go with whatever your current organization's requirement is. Then, if you have data event needs, you can specify what you're capturing them from, for example the application name or the service name. And then you can enable Insights events; these flag unusual activity relative to a baseline of your normal API call rates and error rates. In this way, you can create your own trail in CloudTrail. Now, this does incur cost, because it's going to capture a lot of data, and CloudWatch is going to monitor these logs as well if you choose that feature, in which case CloudWatch will analyze them and you can have a graphical representation in CloudWatch. But do remember it is not part of your program or your certification; they're not going to ask you what the options are while creating it. They want you to know what CloudWatch is, and what CloudTrail is, so don't worry about all these details.
When you go to Event history, now, this is something free which comes by default. If you remember the last lab, we created a dashboard and deleted the dashboard; we were working in CloudWatch. I can see exactly that here: we deleted a dashboard, there was some background activity going on, we put a dashboard, we deleted a dashboard, so we have done all that. And previously we created an EC2 instance, right? So I see a console login, a stop instance; I did that, and then I did a terminate instance as well. Likewise, we did a lot of activities over here. Some of these I don't recognize immediately; maybe I did them some time before, because they're all from about two hours ago. But let's do an activity right now and prove that this works. Let's go to S3, scalable storage in the cloud. Here I'm going to create a bucket, test-bucket-<my account number>. The name has to be globally unique, so I don't have much of a choice; I put my account number everywhere so that it is unique, and obviously no one else would have used my account number because my account ID is unique. Then there's the ACL; I'm going to go with the default settings over here and create the bucket. It creates the bucket without any kind of exception or error, because this name is unique enough. Now, when you go back to your Event history and do a refresh, you should see PutBucketEncryption, so you can see there's activity over here. When you click on the activity, you can see the overview of that activity: the username was root, and you get the access key and everything else which is recorded. You can also see the S3 PutBucketEncryption event and its details over here. So this basically tells you that there was an event, and it was about creating a bucket. Likewise, if I delete this bucket, that's also going to get logged over there; I'm just going to copy-paste the name, which is much easier than typing it. Now refresh this; it can take a while, so let me pause and unpause the video. There you go: you have CreateBucket, PutBucketEncryption, and DeleteBucket, so all three items have come in. I don't know why one of them didn't show up earlier, but when I refreshed the entire page, it came in. DeleteBucket over here gives you all the information about it: which user deleted it and what the event ID for it is. So, likewise, it has given us all the information here.
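The same Event history can be queried from code too. Here's a small sketch with Python and boto3 that looks up recent CreateBucket events, just to illustrate the lookup API:

    import boto3

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "CreateBucket"}],
        MaxResults=10,   # just the most recent few events
    )

    for event in events["Events"]:
        # EventTime, Username and EventName mirror what the console shows
        print(event["EventTime"], event.get("Username"), event["EventName"])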
This is pretty much the usage of CloudTrail. CloudTrail enables you to look at all your audit information across AWS, to see whatever activities you're doing with AWS services; by default it records that for you. But do remember that the default Event history doesn't hold that much data: it retains events for about 90 days. Beyond that, if you need the data to be retained longer, you will have to create your own trail, in which case the data is actually stored in your S3 bucket, and that bucket holds it for you. That is the major difference between the Event history, which comes by default, and the trail you create from the CloudTrail dashboard, and it is something you should remember when you're working with AWS CloudTrail. Thank you again for watching this video. I'll see you in the next one.
26. Labs AWS Config: Hey, guys. Welcome back
to the next video. In this video, we are going to do the hands-on session for AWS Config. Do you remember that for AWS Config, in the theory, we talked about monitoring your existing infrastructure against a set of rules? Now let's actually do that: let's go ahead and search for AWS Config. It tracks resource inventories and changes; that's the use of Config. Over here you have the dashboard, and here are the Config metrics, which track the recorded resources. Here is the compliance status; it says zero noncompliant rules and zero noncompliant resources, because I don't think we have set anything up yet. In terms of resource inventory, we have 110 resources over here, and these are some of the resources we have. Now, a conformance pack is basically a kind of template which you can use for best practices, for example the best practices pack for Amazon S3. You can use this pack, give it a name, and add parameters over here, multiple parameters for whatever you're checking for, and then deploy the pack. So a conformance pack is for when you have a whole series of requirements which need to be enforced together. Or else you can go for rules. Rules are much simpler and are very much enough for our demo, because the exam is not going to ask questions about how you configure this, or about the different ways of configuring AWS Config; they're going to ask about AWS Config itself. When you click on Add rule, you have the AWS managed rule, the custom Lambda rule, and then the custom rule using Guard.
So let's go with the AWS managed rule and type 'S3 versioning'. If I only type part of it I get too many results, but when I type 'versioning' you can see I get one option over here, s3-bucket-versioning-enabled. Now let's open the two existing buckets separately; I just middle-clicked them, and I opened the properties of this particular CloudTrail logs bucket. You can see that bucket versioning is suspended, and for the other one it's suspended as well. So let's go and create this particular rule over here, which says it's going to detect whether versioning is enabled on your bucket. Click on Next. That's your rule, and that's something you cannot change. Here is the name of this particular rule, and then you have the scope of changes over here: all changes, resources, or tags. I'm going to go with resources, where you can select it because you want the resource to match a specific type, which is S3; we want to match the existing resource types, so we have selected the S3 bucket over here as the resource type. Here the parameter can be isMfaDeleteEnabled, and you can give it a value over here; we're not going to use that, so just go with Next, review and verify what this is going to do, and then click on the Save button.
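As a side note, the same managed rule can be deployed from code. Here's a rough sketch with Python and boto3 that creates the rule scoped to S3 buckets and then reads back the noncompliant resources; the rule name is just the one from this demo:

    import boto3

    config = boto3.client("config", region_name="us-east-1")

    # Deploy the AWS managed rule that checks bucket versioning
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "s3-bucket-versioning-enabled",
            "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
            "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_VERSIONING_ENABLED"},
        }
    )

    # Later, list which buckets the rule marked as NON_COMPLIANT
    details = config.get_compliance_details_by_config_rule(
        ConfigRuleName="s3-bucket-versioning-enabled",
        ComplianceTypes=["NON_COMPLIANT"],
    )
    for result in details.get("EvaluationResults", []):
        qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
        print(qualifier.get("ResourceId"), result.get("ComplianceType"))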
Now click on it. When you get inside this particular rule under AWS Config and click on Refresh, give it a minute, and you will see that the two buckets which exist are noncompliant, and that message is going to come up over here. 'MFA delete is enabled for your S3 bucket' is that other parameter over here; we're not actually using it, and what we are using right now is just this versioning rule. But that is how you would pass a parameter, for example in a conformance pack: you identify the key and then give the string value, true or false. So that parameter is MFA delete enabled. If we go into the bucket, there is a multifactor authentication setting; I thought it should be around security or permissions, but it's not there, and likewise that setting would need to be enabled for that parameter to pass. That's what it would be looking for. Let's go back and check whether our rule has executed successfully. Okay, you can see that both are noncompliant, so both these buckets are noncompliant. If I create another bucket the same way, it's going to show up as well, so I'm going to create a test bucket over here, leave the versioning disabled, and just click on Create. Now that I've created this bucket, I'm going to the AWS CloudTrail logs bucket, and I'm going to the permissions, sorry, properties. From there, okay, here is the multifactor authentication setting; I was searching under permissions, but it was right here. Click on Edit, and here I have enabled the versioning. For MFA delete, you need to modify the settings elsewhere, so we will talk about that later. So yes, I've enabled versioning over here.
So if I just refresh this in a few minutes, maybe one minute to a minute and a half, you should see the new one coming in. Also check the filter: sometimes the compliant resources will not be shown over here, because the view was looking only for noncompliant resources. We have made this particular bucket compliant, so we're expecting it to show up any time now; I'm going to pause this video and unpause it once it shows up. Alright, now you can see that this particular bucket, the one on which we turned versioning on, is compliant, and the rest of them, including the newly created one, are noncompliant. So that is pretty much the demo I wanted to show you. Now you can understand that AWS Config is for understanding and analyzing your configuration and setting your own rules, whatever is required for your own compliance needs on AWS. Thank you again for your time and patience. I hope you understood what AWS Config is in terms of hands-on. I will see you in the next video.
27. Labs AWS X-Ray: Hey, guys. Welcome back to the next video. In this video, we are going to talk about AWS X-Ray. We have already seen the theoretical explanation for X-Ray, but just to give you a quick recap: X-Ray looks at the internal workings of your application and your request flow, and it helps with debugging, optimization, and performance insights; all of that you can see as part of AWS X-Ray. So if you search for X-Ray, it is now a feature which sits inside another service, which is CloudWatch; previously X-Ray used to be a separate service, but then it became part of CloudWatch. I can see Traces over here. By default, you won't have any application to trace, so what you can do is use the option over here to create a sample application. If you go to the Trace Map, you will see 'Set up a demo application'. It's not charged much, just about one dollar per day; I've been running it a complete day and it was charged just one dollar. You can turn it on, look at the application, and then turn it off. Now you can see the rate of what it is tracking over here. As this is a simple application we are tracing, we have the client, and then the Scorekeeper, which is basically an ECS container; ECS is the Elastic Container Service. And you can see there is a 100 percent error rate, because the client cannot actually reach the application, since it's only the demo application.
the arrow over here, you can actually see the response time
distribution filter. Here you can also click on view trace and you
can run a query. This tells you about
what is the speed of the node and HDDP methods is also there as
part of the trace. Likewise, you can actually look at the performance
of your application. You can um, look at the request number of
requests coming in the falls, the seconds it is
handling the latency. Likewise, you can see a complete trace of
your application. If you have a application, you can actually use it via trace and you can actually
trace the application, how fast application is, how frequent is getting updated. Likewise, you can see
all those things. You can also click on
view dashboard over here, this gives you
complete information on the score keeper application, which is part of the trace, which gives you the latency reports, errors, the throttles. All the information, it
actually comes through. You can also check for
a period of 6 hours. I can see that. There's
been a few spikes over here as this is not a live application or
something like that. You're not skiing much detail. But then this is the trace. It's not mandatory or it's
really required for you to know how to configure trace as part
of the certification. The certification only
gives you an idea. What is trace and some of the features is what you
need to remember and what a function of it is very important because questions
would come like, Oh, a customer or client is
looking for a way to, you know, fine
tune or figure out the performance of the application
with its architecture. Which service would be
appropriate to choose. They will ask you
so many services which is so closely
related like CloudTrail, Cloud watch or X ray. You need to understand
and what they're asking. They're looking for
performance inside and they're looking for the architecture and the flow of the application. Xray is the right one. Likewise, they will have this complicated
services attached to your existing question and they would be asking
you in a tricky way. You should understand,
read the question twice and thrice and then
only reply to that question. Thank you again for
your time and patience. I hope it was helpful
for you to understand how to configure Xray. We will talk in detail how to do that later point of time
in different training, but that requires a
lot of AWS knowledge. We will come back
to that. I will give you all the
knowledge I have. Thank you again. I'll
see you on the next one.
28. Labs First EC2 Instance Creation: Hey, guys, welcome back
to the next video. In this video, we are going to create an instance. Let me close the VPC console and go to EC2. I'm filtering by running instances, and then I clear it so you can see all the instances I have in the stopped state as well. I normally stop an instance after its purpose is completed, because the charges going towards the CPU and RAM then halt; there will be no compute charges for me until I start the instance back again, because the CPU and RAM I've been using can be given to someone else if someone else requests them. And no, if you're thinking that by giving up that CPU and RAM I'm somehow renting it out: I won't be paid for it, I just also won't be paying for it. That resource will either be allocated to someone else or sit idle; that's the simple reason. To create an instance, you just go to Launch instance, or you can go to the dashboard and launch an instance from there as well. But do remember the dashboard gives you the overview of the current region only. So if you have a server running in the Mumbai region or some other region, you'll not see it over here, because this only shows North Virginia, us-east-1. You can click on the Global view over here; I just middle-clicked it, and it gives me the instances running in all regions, so I can see, for example, five instances in one region. Likewise, it gives me all those numbers: VPCs, 18 in 17 regions. That's not something I have to worry about because a default VPC exists everywhere, so 17 in 17 would be expected; why 18 in 17? Because I have two VPCs in this region, the default one plus one I created, and that's why it says 18 in 17. Likewise, you can see the security groups and all those things from a much higher-level view. Let's close this and launch an instance.
This is going to be a test instance, so I'm just going to name it test, and I'm going to run the default operating system. I'm not even going to log into this machine, through PuTTY or MobaXterm or any other client; I'm just going to use it as is. The username is actually shown over here: ec2-user. If I choose the Red Hat operating system, you can see the username shown here is still ec2-user. Let's talk about Ubuntu; Ubuntu has a different username, let's see. Okay, it's taking a while to switch; changing the operating system changes the volume as well, so that's why it's asking you to confirm. There it is: you can see the username is ubuntu. So if you are ever trying to figure out what the username is, this is where you see it. Choose the operating system of your choice, and make sure you stay within the free-tier-eligible options; only one instance type is eligible right now, which is t2.micro, so that's the one I've selected. Now, the key pair is very important: without a key pair you will not be able to log in. But in this situation, as we are doing this for demo purposes, you don't really have to worry; just create a new key pair called test-keypair. I've already created it, so you would just click Create key pair, and it will download a PEM file. Using the PEM file, you'll be able to log in with the username which is shown over here. Can you log in with just a password? No, you cannot; password authentication is disabled by default, so you have to use the private key to log in to the server. That's the only way of connecting. But do not worry about it; we're not going to log in here, and when the time comes, I will show you how to do that. In terms of network settings, you can edit and change your VPC, change your subnet, and auto-assign a public IP address. You can enable some other features, like an Elastic IP address, but again, charges apply, remember that, so that's something you should be careful with. Then you have Allow SSH traffic from anywhere; this is what gives you permission to SSH to the server using your PEM key. If you disable this, then you cannot actually connect over SSH, so do remember that; but anyway, we're not going to use SSH here, so it doesn't matter. The 8 GB of storage is fine with me, so just click on Launch instance.
Yeah, proceed with the existing test-keypair; when I tried to create it, it switched back because that key pair already exists, so we just use the existing one. Go back to Instances; the instance has been successfully initiated. If you don't see it right away, just click on Refresh and it will come up. You can see that it's in the pending state right now, which means your server is getting created at this moment. It's still pending, so let's just refresh; it should be running now. Yeah, there you go, it's running, and now you have the public IP address, so you could SSH to it via this public IP address.
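By the way, the whole launch we just clicked through can be expressed in code too. Here's a minimal sketch with Python and boto3; the AMI ID is a placeholder, since image IDs differ per region, and it assumes the test-keypair from this demo already exists:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID -- look up the one for your region
        InstanceType="t2.micro",           # the free-tier eligible type from the demo
        KeyName="test-keypair",            # the key pair we created in the console
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "test"}],
        }],
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print("launched:", instance_id)

    # Remember to stop it when you're done, so the CPU/RAM charges halt
    # ec2.stop_instances(InstanceIds=[instance_id])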
policy later, you can click CoTA Security
I'm sorry, C Networking, and then you can
actually change some of the really bad at this Hyundai. So you don't have it
in the networking. You actually go to
networking over here and you can actually see the um, network interface over here, and then there is the service
group is a security group. Okay, so this is
a security group, you can actually change
the security group and allow items over here. It's under security.
Sorry about that. So in the security group, you'll be actually adding
more inbound requests. So right now 22 put number, which is SSSch is
getting added over here. So likewise, you'll
be adding that. You can also access
this by going to security from here and change
security group, right? So that's basically goes to your security group over here. So likewise, you can do that. But we will talk
about in detail. I'm not really interested in talking about
all these things, but then let's go ahead and, uh, um, you know, in this video, this is pretty much
what we wanted to do. We have a test in
sense right now. You may not have these things,
so don't worry about it. This is greater for another
project for my, um, you know, for my ubunitis work and open shift work
and stuff like that. So do not bothered
with these. Okay. So anyway, wise, thank you again for your
patience and time. Sorry about the extra
information if it's, like, giving you a lot of information, so
sorry about that. I see the next video with more, more, more information.
Thank you.
29. Labs VPC Introduction: Hey, guys. Welcome back
to the next video. In this video, we are going to talk about Virtual Private Cloud. VPC, as we call it for short, is a set of networking resources which exists in every region. So if you have a region where you plan to run your EC2 instance, you first select the region where you want to run, and then you create the EC2 instance in that region inside a VPC; this is not an optional parameter, it is a mandatory one. Right now we won't be bothering with all the VPC configuration when we create the EC2 instance, but at a later point of time in the training itself we will be, because that's a topic we're going to cover for sure. So this is a heads-up for you. Why are we looking at it right now? Because for the example we're going to use, we need some EC2 instances running in our account. That's the reason we are looking at and understanding VPC, what an EC2 instance is, and how we create them. Even though we are not going to access the EC2 instance we create, we still just want to look at it as an introduction.
Now, over here in the architecture diagram, you have the VPC sitting under a region, which means every region is going to have a VPC configured for you. If we just type VPC over here, we get the VPC service; it says 'isolated cloud resources', and that's the category it comes under, so I'm just going to pin it over there. As you can see over here, I have two VPCs in the US East region. If you want to see the VPCs across all regions, there will be at least one each; there is, of course, one default VPC per region. But not all regions show up: some regions are disabled for me. As you can see, I'm in North Virginia; I have 17 regions enabled and around 13 regions disabled, because I'm not using them and there's no purpose, so by default Amazon leaves them disabled, and we can enable them at a later point of time if we need them. I'm not worried about that. I am happy with North Virginia because it's the cheapest region, and the resources you create there are much cheaper than in the other areas around it, because it has a lot of Availability Zones. Well, that's my secret; that's why I create my servers here. So over here I have two VPCs, which shows you can have more than one VPC. One VPC is the default one, because it enables you to quickly configure an EC2 instance without any kind of configuration prerequisites and things like that. That's the reason every region has a default VPC; it makes things much easier. I can switch to any region right now and just spin up an EC2 instance with the default networking settings, so I don't have to do much; we'll see that in the next video. That is the reason there is one VPC everywhere. Another reason is that a VPC is really required for the network configuration, whether that is region-to-region communication or an internet-facing application communicating over the internet. But you definitely need to have a VPC.
Now, when you open this, there are two VPCs over here, right? When you enter one VPC, you will see it has subnets assigned to it; you can see that these are the subnets. Where are these subnets coming from? Subnets are part of the default configuration of a VPC: however many Availability Zones the region has, that many default subnets will be available for that VPC. Can you have multiple subnets, can you have multiple VPCs? Those questions will be addressed at the right point of time, so do not worry about that. As you can see, the subnets connect to a route table, and the route table goes to the network connections. We're not worried about the architecture of this resource map at the moment; we now know the default layout. When you create any EC2 instance, which I will do in the next video, you will actually see the relationship between the subnet and the VPC. Take the EC2 instance which is running currently: if you just select it, you will see that this EC2 instance is hosted on some VPC, right, because you can see the Availability Zone; it is hosted in us-east-1a. It says 'a', which means it is hosted in us-east-1a using the corresponding subnet. That is pretty much how it is defined. So, can we select it? At the time of creating an EC2 instance, can we select a subnet? Yes, you can select a subnet when creating an EC2 instance.
When you go to Subnets over here, you have a lot of subnets; some of them were created by me. So yes, you can create multiple subnets, and I've created extra subnets over here, as you can see. You can see it says the network border group is us-east-1, which is basically the name of the region, so that's the region name. But in the Availability Zone column you have us-east-1a, us-east-1b, us-east-1c, and if you just sort by it, you'll see two entries for us-east-1a and two for us-east-1b. Why? Because I've created an extra subnet myself, and you can see that; it was created for an extra application, which is something we will talk about a little later in this section. Likewise, we have created multiple subnets. So yes, to answer the question, it is possible to create multiple subnets under one Availability Zone, and yes, it is possible to choose a specific subnet for your EC2 instance to be hosted on. But it's not possible to choose your backup location: for example, if I'm hosting in one Availability Zone, the backup can be anywhere; if this server fails, from the AWS standpoint it can be migrated to any of their hosts, and that's something we cannot choose, but there will obviously be a backup going on behind the scenes. You can also see the number of available IP addresses for the CIDR. Every subnet will have CIDR information; that is the base criteria for assigning IP addresses to the servers hosted under that subnet, so that's the address range of the CIDR over there. So, likewise, you have all this information here.
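If you'd like to see the same VPC and subnet details from code, here is a small sketch with Python and boto3; it just lists each VPC in the region and the subnets inside it, including the AZ, CIDR, and free IP count we just looked at:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    for vpc in ec2.describe_vpcs()["Vpcs"]:
        default = " (default)" if vpc.get("IsDefault") else ""
        print("VPC:", vpc["VpcId"], vpc["CidrBlock"], default)

        # Subnets belonging to this VPC, one or more per Availability Zone
        subnets = ec2.describe_subnets(
            Filters=[{"Name": "vpc-id", "Values": [vpc["VpcId"]]}]
        )["Subnets"]
        for subnet in subnets:
            print("  subnet:", subnet["SubnetId"], subnet["AvailabilityZone"],
                  subnet["CidrBlock"], "free IPs:", subnet["AvailableIpAddressCount"])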
In the next video, we will create an EC2 instance. I hope you understood the basic principles of VPC: what a VPC is, what it does, and what sits under the VPC, so you have the basic overview from this one. In the next video, we will go through the basic overview of creating an EC2 instance and accessing it. Thank you again for your patience and time; I will see you in the next video.
30. Labs Access Keys: Hi, guys, welcome back
to the next video. In this video, we will talk about this particular topic over here: first understanding access keys, then password policies and credential storage, as well as working with the two services used for credential storage. Credential storage is basically where you're going to keep your credentials. We're going to do that in one video, and then access keys and password policies across two other videos: we will start with access keys in this video and then go on to the password policy in the next video. Now, do remember that in the videos which follow, we will actually be creating a user as hands-on and assigning policies to that user and so on. When I'm showing you the hands-on for these topics, I'll be showing you a user that is already created, so don't worry about it; this is just an extra bit of effort from my side to show you how this actually looks, and there is no practical hands-on required for this certification. So if you feel like, 'Oh, I haven't created that user yet', that's okay; you're going to create it anyway after three or four more videos. I'm just showing it because this part doesn't really require hands-on. So disregard the fact that the user already exists on my side; you're going to create it later down the line. Now let's look at this diagram. Firstly, in this video, we will see access keys. Access keys are a pair of credentials, an access key ID and a secret access key, that are used to sign programmatic requests to the AWS APIs via the AWS CLI, the SDKs, or REST APIs.
So what are these? These are basically the different ways to access your AWS account. You can access AWS through the AWS CLI, which is software you download from AWS, or through an SDK, a software development kit, again downloaded from AWS and installed on your system. REST APIs are programs which run from a client and then go to your AWS account. For example, when you connect OpenShift from your Red Hat account to AWS, you have to create a user, give it the necessary permissions, then create this access key ID and secret access key and hand them to OpenShift. What OpenShift does is use that key ID and secret access key along with the username you specified, the one which has the policies granting the correct access for it to create resources, and then it accesses the AWS APIs using those credentials. That is called an access key. The exam question will typically be something like: you have a company, and a third party wants to access your services using programmatic access, or programmatic requests, through one of your services running on the AWS APIs; which of the options below would be the right one? They will offer you an SSH key, maybe a username and password, and an access key, so you need to know in which situation to select the access key. I'll just show you; it makes much more sense when you see it.
I'm searching for IAM and middle-clicking on IAM. Over here you have the users, and you have the dev user. Normally, for anything like programmatic access, where an API is calling in, or third-party vendors are calling using the REST protocol or something like that, you have to give the name of a user which you have created. Normally a user will have policies; don't worry about it, we'll be creating all of this at a later point of time and you'll be assigning the policies. In this case, the policy says this particular user has EC2 full access, S3 full access, and Lambda full access; so we have given full access to only these three services for this user named dev. Now, how can someone access this particular user? As I told you earlier, you can access this dev user using the console, because console access is enabled; you can see the console sign-in link over here, so you can copy that and access this user. In the section where we actually create the user, we will access it using the console; that's one way of accessing. But not everyone will be happy to access the console. For example, your programs which run from OpenShift to AWS will all run through Terraform, and when you want to run something from Terraform, you cannot do it with the GUI console; you have to run it via REST API calls, or something where one program connects to another program. That's what we normally call programmatic access. Now let's talk about another way of accessing: multifactor authentication is nothing but adding another layer of authentication on top of your existing password which you use to access your console, and it is not really related to our topic.
recreate this access key. I'm so sorry I
activated it again. So I'm going to do it this one. I have to type the name of
the access key to dele it. Now that access key is disable. Now, what is access key? Using Access Key to send
programmatical calls to AWS from AWS CLI AWS tools, Power Shell, sorry, AWS
tool as in Power Shell, SDK or directly via AWS
APA calls or SPA calls. You can have maximum
of two keys attached. One is, either you can have it active or
inactive at a time. Now what this means is
that if your program wants to connect
with the program which is running on AWS, you need a great access key. Now you have other methods of
communication like SS such, it is used for AWS code commit, but not used for programmatical
calls over here. You can see the
description over here. You can use HTTP
get credentials, but this is used for
authenticating HGTBX connection, which is code commit again
for code commit repository. You can use Amazon keypass over here for Cassandra
for Apache Cassandra. Okay. And then you can use x509 for SOP
protocols. All right. So each one of these authentication methods
is for a dedicated service or a different way
of connecting to this particular user
from third party items. But if your question says specifically
programmatical calls, then you need to think
about access key. When your question says it's all about code comment then
you have to think about either SSH or G. When your question says that you want to use through key pass or, uh, for Cassandra,
then you have to think about credential
from Amazon key pass, um, sorry, key space. And then if it says about, um, the soap protocol
and stuff like that, then X five online certificate. But don't worry, this is all won't come as part
of your question. The major major stuff
will be access key. Now, create access
key, just click on it, and you can see
different access keys, which you can create for
your command line interface; local code, when you're running a software development kit; an application running on an AWS compute service such as EC2 or Lambda, or when you want one AWS service to access another AWS service; a third party — for example, if you are using OpenShift, you would pick third-party service; and an application running outside AWS. Likewise, you have other options as well. Either way, whichever of these you create, it's going to ask for a description value, and then it's going to create a secret over here. Now, an access key is going to have two items: one is the access key ID, and the other one is the secret access key. That is going to be your
actual password. So for this user, what you will give — for example, if you're using OpenShift on the other side — OpenShift will say: use this username, Dev, for authentication; give me the access key; give me the secret access key. You have to copy and paste both of them over to OpenShift, and you have to give your account ID as well. It will use that whole combination — the account ID, the username, the access key, and the secret access key — and then connect to AWS. Now, do remember that OpenShift will have its own
requirements for the policies which need to be part of the dev user so that it can perform its tasks. For example, OpenShift will execute a lot of Terraform code on your EC2 instances and items like that. OpenShift will say: I need administrative privileges, because what it will do is run Terraform scripts to create VPCs, to create load balancers — it'll create a lot of stuff to get your OpenShift installed on AWS. So for that, it requires administrative access. It doesn't want root access. You should not give root access to anyone who asks for it. Okay? You should only create a user, give it the specific policies which it is requesting, and then create an access key and give it to that application. All right. This is
pretty much what I want to show you in
terms of hands-on. Um, so do remember that the use case for this access key is that it is used by IAM users and applications to interact with AWS services programmatically. Now, this word "programmatically" is the one to remember: when you hear the word programmatically, the answer should be access key. You also need to make sure that you rotate the access key regularly to ensure security, and avoid embedding access keys in the source code.
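As a minimal illustration of that programmatic side, here is a hedged boto3 sketch of creating a new access key and deactivating the old one, which is the rotation idea just mentioned; the user name dev-user is a hypothetical example, not something from this demo.
```python
# Minimal sketch, assuming boto3 is installed and the caller has
# iam:CreateAccessKey, iam:ListAccessKeys and iam:UpdateAccessKey permissions.
# "dev-user" is a hypothetical user name used only for illustration.
import boto3

iam = boto3.client("iam")

# Create a new key pair; the SecretAccessKey is shown only once, so store it securely.
new_key = iam.create_access_key(UserName="dev-user")["AccessKey"]
print("New AccessKeyId:", new_key["AccessKeyId"])

# Deactivate any older key so it can later be deleted (simple key rotation).
for key in iam.list_access_keys(UserName="dev-user")["AccessKeyMetadata"]:
    if key["AccessKeyId"] != new_key["AccessKeyId"]:
        iam.update_access_key(UserName="dev-user",
                              AccessKeyId=key["AccessKeyId"],
                              Status="Inactive")
```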
Using IAM roles instead, which we'll talk about later, helps with that. So, likewise, these are some of the best practices you can follow in terms of access keys. I don't think I have anything
else to add over here. So if you have any
questions, leave it on the questions section. I'll be mostly answering to all your questions
over the weekend. Thank you again for
your time and patience. I'll see you on the next video.
31. Labs Credential storage: Hey guys, welcome back to the next video. In this video, we are going to see the last stretch on this particular topic, which is your Secrets Manager and Systems Manager, and we are going to cover credential storage. Now, what do you mean
by credentials storage? A credential storage is
a location where you can actually store
your credentials. So you can store your
credentials in two ways, a secret manager and the system
Manager Parameter Store. Now, what are these two items, and why are we talking about both of them in terms of credential storage? For credential storage, there is a dedicated service called Secrets Manager. Now, if I just search over here for "secret", I get Secrets Manager over here. The other one is Systems Manager. I'm opening both of them, so it's easy for me
to show it to you. Now, what is Secrets Manager? Secrets Manager easily rotates, manages, and retrieves secrets through their life cycle. So if I create a secret — like the API token which I created for that user in the previous video — I can store the API key and access key in the secret store. But the catch is that it has a 30-day free trial, and after that you're going to pay about $0.40 per secret per month. It's not that expensive, but still something to consider. You can also use API calls and route the username and password through Secrets Manager by giving the name of the secret: rather than explicitly providing the key and the secret directly to any of the applications, you can route it through Secrets Manager with the proper privileges. Now, the secret itself will hold its own username and password, and the encryption of this secret will be handled by KMS. You have to pay for the KMS service as well, because access to this particular secret will be encrypted through your Key Management Service from AWS. Okay? So now, this is the default one which you have over
here as a secret manager, and that's created
as part of your KMS. Now you have options over here. You can hold the credential information for your RDS database, you can also store Amazon DocumentDB credentials, Redshift data warehouse credentials, or credentials of other databases, where you can select MySQL, PostgreSQL or SQL Server and then give the server address, database name, and port number it's running on. The service will then directly contact Secrets Manager to retrieve the secret, which is going to be configured on the next step, step two. Or you can store other types of API keys, like the one which you're creating for a user; that one you can also put over here as a key-value pair, and then you can go to the next step, configure the secret, and configure rotation. So let's just go with a test value, and then
go to the next one. And here you actually give
the secret name over here. So whatever the secret name is going to be coming over here. And then when you
go to the next one, you can actually see whether
you want to rotate this. Now you can set the rotation schedule over here, for example every 23 hours or once every two months. Now, the beautiful thing about this is that every
time you do that, you can also call a Lambda
function over here, and then you can just review the item and you can finish it. Now, they're not going to
ask you how to do this with hands on or the options
when you do the hands on. So I'm just giving
you extra information on how to work with
a secret manager. Now, what is Secrets Manager? Secrets Manager stores your secrets securely, manages and retrieves them, and it also rotates them. That's the primary thing about it. Now, it is majorly used for holding database credentials — you saw a lot of database options coming into the picture. So if a question comes asking which of the following items is preferred for storing in AWS Secrets Manager, they may list a lot of options, but the one that stands out will be database credentials; look for that information. API keys can also be stored, as well as sensitive information, whichever you consider sensitive.
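To make the "route it through Secrets Manager rather than hard-coding the credential" idea concrete, here is a minimal boto3 sketch; the secret name prod/db/credentials is just an assumed example.
```python
# Minimal sketch, assuming boto3 and permission to call secretsmanager:GetSecretValue.
# "prod/db/credentials" is a hypothetical secret holding a JSON username/password pair.
import json
import boto3

secrets = boto3.client("secretsmanager")

response = secrets.get_secret_value(SecretId="prod/db/credentials")
credentials = json.loads(response["SecretString"])

# The application uses the values at runtime instead of embedding them in code or config.
db_user = credentials["username"]
db_password = credentials["password"]
```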
Or else they will ask a question like this: I have a database credential and my customer is asking me to store it securely — which service would you like to use? One of those services would be Secrets Manager. So do remember this name. All right. Now, let's talk
about the next one, which is about the features of Secrets Manager. It rotates your secrets. It uses the KMS service — sorry, not for rotation, but for holding the encryption key securely. And then it has fine-grained access via IAM. So answering questions on it is very straightforward. Now, in terms of Systems Manager, Systems Manager is used
for a lot of things. For example, you can see OpsCenter over here, CloudWatch dashboards, and you can configure Application Manager over here. You can also see change management as part of Systems Manager, and patching is also a major part of Systems Manager. But the one item related to storing your credentials is the Parameter Store. Now, do remember
parameter store is part of your system manager, and what is parameter store? Now, parameter store is a
centralized storage and management of secrets and configuration data
such as passwords, database strings,
and license codes. You can encrypt values or
store it in plain text and access secret at entry level, how you would like to do. Now, there's one more feature of the parameter store is that you can create a new
parameter store, specify the parameter
type and value. And refer the parameter
in your code or commands; you can easily reference it. Now, click Create parameter, and you can give the name of the parameter over here, a description, and whether you want the standard tier, which stores up to 10,000 standard parameters with a value size of up to 4 KB, or the advanced tier, where you can have up to 100,000 advanced parameters of about 8 KB each, with parameter policies and sharing with the other AWS accounts which you have; the advanced tier is going to be an extra charge for you. Now, in terms of what value
you are planning to store, you can put in your
database values, your license keys
and everything else. You can also create a SecureString using KMS — you can encrypt the value as a secure string — or you can use the plain text data type and just put the value over here. And this will create a parameter which holds that value.
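As a quick sketch of how a program would read such a parameter back, something like this works with boto3; the parameter name /myapp/config/db-endpoint is only an assumed example.
```python
# Minimal sketch, assuming boto3 and ssm:GetParameter permission.
# "/myapp/config/db-endpoint" is a hypothetical parameter name; WithDecryption
# only matters when the parameter was stored as a SecureString.
import boto3

ssm = boto3.client("ssm")

param = ssm.get_parameter(Name="/myapp/config/db-endpoint", WithDecryption=True)
print("Parameter value:", param["Parameter"]["Value"])
```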
Now, the purpose of Parameter Store is that it is another service where configuration data and secrets can be held. Its features: it supports both plain text and encrypted values, integrating with AWS KMS for encryption. I would say an ideal use case for it is storing application configuration values, such as environment variable values or database connection strings which you want to keep, or small secrets which you have; you can keep them here. So the major purpose, or the best practice, is to use Parameter Store to manage non-sensitive configuration data or secrets that do not require frequent rotation. Parameter Store is basically for storing parameters and non-critical secret values, like your licensing information and other such things. Where it stands out from
Secrets Manager: Secrets Manager will be using KMS for encryption, so there is no option for you to store a secret without KMS. We have seen multiple options over here — you can see that we have the plain text or key-value input — but anything you enter here requires KMS encryption; you can see that it is not an optional setting, it is mandatory. Everything which you store in Secrets Manager requires KMS encryption, and the value will be encrypted. In Parameter Store, on the other hand, it is optional: you can secure the value using KMS as a SecureString, but there is the option of not doing that as well, because it basically stores parameters. So there will be confusing questions coming up. They'll say: I want to store some value which doesn't require frequent rotation and is not that sensitive, it's just licensing data — which of the services would I use to store this value? One of the options could be Secrets Manager, another will be Parameter Store, and then you need to choose Parameter Store, because the value doesn't require rotation and it doesn't require encryption. In that situation, when encryption is optional, go for Parameter Store; if encryption and rotation are mandatory, go for Secrets Manager. The questions will be a little twisted, so read them twice to understand the difference of what we are actually getting into, and then answer the question; you should be fine. Thank you again for watching this video. If you have any questions, leave them in the questions section.
32. Labs Identifying authentication methods in AWS: Hey, guys. Welcome back
to the next video, and in this one we are going to talk about identifying authentication methods in AWS. One of the important questions here, as part of this particular task, identifying AWS access management capabilities, is about identifying authentication methods. We will try to do some hands-on, but if you're looking for creating the users or something like that, we're going to do that at a later point of time; in a couple of videos we will be working on the hands-on for it. So in this video, we are going to look at an architecture over here, which talks about identifying authentication methods in AWS. In this one single video we'll cover all three items, and this is something
which we have seen over and over again, so it's going to be much easier for us to go through this. Now, identifying
authentication methods in AWS, meaning that way of identifying different
methods of authentication. One is your multifactor authentication that
you cannot deny, which is very much required and it adds an extra
layer of security by requesting user to provide two forms
of authentication. One is your password, another one is a code, which generates every minute, and it rotates every minute
and you will get a new code, either using an
hardware token device or using mobile app. Now, a use case is basically enabling the MFA in IAM user, which we have
already seen enough for the previous videos, as well as you're
going to see it on the next video as well. I mean, next hands on in
terms of user creation. So this ensures that
the root user or the user which you
are enabling this to is even more secure. And even if the password
gets compromised, people won't be able
to get in, because the six-digit code keeps rotating and changing every minute. The next one is IAM Identity Center. It was formerly called AWS Single Sign-On; that's the previous name for it, but it is now called IAM Identity Center. Now, IAM Identity Center enables centralized management of access to multiple
AWS accounts and applications using existing
corporate credentials, for example from Active Directory. If your corporation is using Active Directory, you really don't have to import all those Active Directory users into AWS; you can add that Active Directory as an identity source in Identity Center. It can also use an external identity provider using SAML 2.0. The question here would be about how you
integrate your company? So it will ask the
question will be worded like this: your company already has an LDAP service running with a lot of usernames and passwords; now, how do you integrate that with AWS, and which service would you use to integrate it? That is your IAM Identity Center. Okay? So that's how the
question will be framed. Basically, what it means is like the typical use case of
this identity center. It simplifies the authentication and access management for large organization with
multiple AWS accounts, allowing users to authenticate once and access all assigned resources. Now, you can also use federation with SAML-based authentication, which enables users to log in with their existing credentials configured in their corporate account. If you go to IAM, I just want to give you a quick view: you can see the related console, IAM Identity Center. This used to be SSO previously, single sign-on, but it has changed to Identity Center. Now you can actually enable
the identity center. So when you enable it,
connect your directory or create user groups
for use across AWS. So you can see that it manages workforce access across AWS accounts and integrates with workforce applications; likewise, if you have multiple accounts, you can enable this and then use it to create and work with users across those accounts. All right. So now let's
talk about the next item, which is your cross
account IAM role. A cross-account IAM role is basically a role which allows a user or service in one AWS account to temporarily assume an IAM role in another AWS account without sharing long-term credentials. Now, this is used when a team from one AWS account needs access to resources
to another account, for example, if you have an
organization, and this organization has accounts called A and B. Like, for example, over here one of the services is Organizations, you see that. By default, you wouldn't have Organizations enabled; you can enable it, and it's free. There will be an option over here called enable organization — just do that, and it should enable very quickly. So now you have multiple accounts over here, and you want to give users access across accounts, as cross-account users. That is what we call a cross-account IAM role. In this case, for the developers in account A to access the S3 bucket in account B, they can assume a role which has the appropriate permissions for that.
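If you want to see what that temporary assumption looks like from a program's point of view, here is a small hedged boto3 sketch using STS; the role ARN, bucket name, and session name are purely hypothetical.
```python
# Minimal sketch, assuming boto3 and that the caller is allowed to assume the role.
# The role ARN and bucket below are hypothetical resources in "account B".
import boto3

sts = boto3.client("sts")

assumed = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/CrossAccountS3Writer",
    RoleSessionName="dev-from-account-a",
)
creds = assumed["Credentials"]  # temporary key, secret, and session token

# Use the temporary credentials to talk to S3 in the other account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.put_object(Bucket="example-bucket-in-account-b", Key="hello.txt", Body=b"hi")
```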
That's basically called a cross-account IAM role. The question would typically be like this: in my environment, I have a developer on account A who requires access to write to an S3 bucket in account B; which kind of service or which feature do I need to use to enable access for this developer and give him access to multiple accounts under my organization? If this kind of question comes, cross-account IAM roles.
Think about that. He needs to have access
on both the ends, so we need to talk about
cross account IAM rules. All right, guys,
thank you again for watching this video.
I'll see you on the next one. If you have any
questions, please leave it on the
questions section. I'll be able to assist you then.
33. Labs Password policies: Hey, guys, welcome back
to the next video. In this one, we are going to talk about password policies. Now, in terms of a password policy, we are going to talk about the definition of the rules or requirements that an IAM user must follow while creating a password. Now, for access keys, which we discussed earlier, the password is not set by you; it is automatically generated. But when you talk about
the passwords which is created for a user, when he or she is logging into the console,
that is set by you. Okay? So this is set by, you can go for
resetting password. You can either select autogenerator password
or custom password. Okay. So either of
which ways you will be actually using the uh password. Auto generator password
means that you will have a strict
password without you, um, you know, creating
a stricter password. But when you go for
custom password, this is where the
requirement is to be, um, specific requirements. So in a way, I cannot actually create a
password called "password". If I create something like that, you can see that the password policy is not met. What the password policy is saying is: it should be at least eight characters long, and it should have a mixture of at least three of the following: uppercase letters, lowercase letters, numbers, and symbols. So what we're going to do is type P@ssw0rd@123. Now, this has a capital letter, small letters, the @ sign, which is a symbol, and a zero, which is a number, and it is more than eight characters long. You don't really need the @123 at the end, because the rest should already satisfy the complexity rules.
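For reference, the same kind of account-level password policy can also be set programmatically; this is just a sketch of the IAM call, and the specific thresholds are example values rather than a recommendation.
```python
# Minimal sketch, assuming boto3 and iam:UpdateAccountPasswordPolicy permission.
# The numbers below are example values for illustration only.
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=8,          # at least eight characters
    RequireUppercaseCharacters=True,  # require a mix of character classes
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    AllowUsersToChangePassword=True,
)
```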
We can also enable "user must create a new password at next sign-in", and you can also enable "revoke active console sessions": if anyone had logged in as this user at this moment, those people would be logged off when you click on Reset password. Now, once you have
reset the password, you will be informing the
customer and they will be actually working
with a new password once you have securely
communicated to them. One more thing which is
recommended as part of the password policy is a
multi factor authentication. Now, just click on Assign Multi factor authentication
and here you'll be able to see the different types of multifactor
authentication over here: either a passkey or security key, which is a physical key or device; an authenticator app on your mobile, where you can install an authentication app; or a hardware TOTP token, which is, again, a hardware device. All right. So by using multifactor authentication, your user will be even more secure than before. Thank you again for watching this video. I'll see you in the next one.
34. Labs Types of identity management: Hey, guys, welcome back
to the next video. In this video, we will understand the types of identity management. Now, we have already spoken about Identity Center over here and cross-account IAM roles. What we understood about Identity Center is that not only can you create usernames and passwords in AWS, but also, if there is an LDAP or some other kind of authentication already in your company, you can integrate it through Identity Center. Now let's understand the types of identity management, for example federated identity; federation basically means building on top of your existing identity infrastructure. Let's talk about the
different types of identity which is used on AWS. Now first is a local
IAM users and groups. This basically means that I go to IAM and create a username — I'm sorry, the session has expired, just give me a second. Sorry about that; I'm back. I had to close all the browsers and reopen them. Now here, as you can see, I have the users
created over here, the groups created over here. This is basically called as
the local IAM user group where it is created at the local level within
your AWS itself. We are manually creating it. The next one is the
federated identity. Now, what do you mean by federated identity? Federated identity allows users from an external identity provider (IdP) to access AWS resources without creating a separate IAM user on the AWS side. This federation of identity can come from corporate directories like Active Directory or third-party IdPs like Google authentication; all of this is basically called federated identity. You federate using a standardized format like SAML 2.0 or OpenID Connect. Basically, how it works: AWS integrates with the external IdP to authenticate users, and the authenticated user assumes a temporary role in IAM with the permissions defined in an AWS IAM policy. So, for a typical use case, you can use it in a large enterprise organization which already has corporate credentials in its Active Directory, where you don't really have to recreate them on the AWS side. That's what we discussed
earlier as well. Now, let's talk
about the next one, which is the cross account
rules, which, again, is something which we spoke
earlier in our session. Now, cross-account roles are basically roles which can work across multiple accounts within our organization. These cross-account roles allow users or services running in one AWS account within an organization to assume roles in another. So this is basically an identity management approach that simplifies access across multiple AWS accounts without needing to duplicate users or credentials. This is what is called cross-account roles, and this is what we discussed earlier as well. Now, the next type of identity management is service-linked roles. What are service-linked roles? A service-linked role is a unique type of role that is linked directly to an AWS service. These roles allow the AWS service to perform actions on your behalf. For example, if you go into IAM and you go to Roles, right? You have so many
roles over here. If you see that, let's
take this example role, AWSServiceRoleForAutoScaling. Now, when you open this, you can see that it is the Auto Scaling service-linked role. When you click on it, you can actually see the policy, and you also have the explanation over here: the default service-linked role enables access to AWS services and resources used or managed by Auto Scaling. So everything in your AWS services that is used and managed by Auto Scaling is covered by this service-linked role. Accordingly, this role is linked to the Auto Scaling service role policy. Now, when we expand it, you
will actually get more detail of what
actually is the action, and it allows all these
actions over here, which is about autoscaling and enabling and
stuff like that. So you can see that Auto Scaling comes into the picture for things like Elastic Load Balancing and CloudWatch, all those things over here. So this is what is called a service-linked role. Now, how it works: the role automatically gets created and managed by the AWS service. These are roles which you have not created yourself, right? They are automatically created, which is why they are called service-linked roles. Whenever you create a service, that service will link a specific role to itself, and that role is associated with the specific permissions that are needed for the service to run properly. So a service-linked role for Auto Scaling will have permissions for Auto Scaling, as you can see over here. That's what a service-linked role is.
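Service-linked roles are normally created for you the first time you use the service, but for reference they can also be created explicitly; this hedged sketch uses the Auto Scaling service principal as the example.
```python
# Minimal sketch, assuming boto3 and iam:CreateServiceLinkedRole permission.
# The Auto Scaling service principal is used purely as an example; the call
# fails if the role already exists in the account.
import boto3

iam = boto3.client("iam")

response = iam.create_service_linked_role(AWSServiceName="autoscaling.amazonaws.com")
print("Created:", response["Role"]["RoleName"])  # e.g. AWSServiceRoleForAutoScaling
```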
Now, the next one we're going to talk about is the identity provider for web and mobile applications. There's a service called
Amazon Cognito. It is used for managing
authentication and user identity for
web and mobile apps. When a question comes in for identity provider for
web and mobile application, then all your mind should
go for Amazon Cognito. Now, Amazon Cognito supports
identity federation through third parties like Google and Facebook, and enterprise directories through SAML 2.0. Now, what that means is that when you log in to some websites, you will see that you have the option of creating a user for that website, or else you can directly log in with your Google ID or Facebook ID, right? That is linked up using this kind of identity provider through Amazon Cognito. So using an identity provider in Amazon Cognito, you have the option of enabling Google-based sign-in, Facebook-based sign-in, or logging in directly with a user managed by Amazon Cognito. Cognito allows a user to sign in through an external IdP or with their own username and password combination. Once authenticated, they are given temporary credentials via Cognito identity pools to access AWS services.
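Just to connect the idea to an API call, here is a hedged boto3 sketch that creates a Cognito user pool and an app client for a hypothetical web app; wiring up Google or Facebook as identity providers would be additional configuration on top of this.
```python
# Minimal sketch, assuming boto3 and permission to administer Amazon Cognito.
# Pool and client names are hypothetical; social identity providers such as
# Google would still need to be configured separately on the pool.
import boto3

cognito = boto3.client("cognito-idp")

pool = cognito.create_user_pool(PoolName="example-webapp-users")
pool_id = pool["UserPool"]["Id"]

client = cognito.create_user_pool_client(
    UserPoolId=pool_id,
    ClientName="example-webapp-client",
    GenerateSecret=False,  # typical for browser-based apps
)
print("User pool:", pool_id, "App client:", client["UserPoolClient"]["ClientId"])
```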
Now, this is getting a bit more complicated, but do not worry about it; they won't actually be asking you many questions around this. What they will ask you is something like: I'm setting up my web and mobile application and I want to offer Google-based authentication; in this situation, which kind of authentication or which kind of service would I use? Then there will be an option
like Amazon Cognito. When you see that,
that is your answer. Okay. So likewise,
questions will be asked. But in terms of
identity management, these are some of the concepts. These are something which
you should remember. These are the names which you
should remember for sure. I will attach all the
documents which I'm just talking about on a
different video, different topic so that you
can actually refer that and, you know, understand
all about that. Thank you again for
your patience and time. I will see you on
the next video.
35. Labs Users Groups & Policies: Hey, guys, welcome back
to the next video. In this video, we are going to do a hands-on session for users, groups, roles, and policies. All these action items we can now do hands-on, so let's go ahead and do the hands-on. For you to successfully create a user, you need to get to IAM. There's a service called Identity and Access Management, known as IAM. You can see that over
here, middle click that. Now you see that you have opened a new tab over here with
all the information. Now currently you are
a root user with MFA, which is enabled,
so that's a tick. But there is this
warning sign saying that deactivate or delete
access key for root user. There seems to be
some access key, so it's asking us to deactivate
or delete the access key. Let's go to the user. Users there's none, and then
there's this user group, and there's this role. There's so many roles
over here and policies. Now, as I told you in the previous introduction video, policies are pretty important; that's the end item. Policies are in a JSON format. If you click on one, you will actually see the policy permissions over here; you can see them in JSON format. The policy has certain types of actions over here; these are the permissions that are granted as part of this policy. This policy, called AccessAnalyzerServiceRolePolicy, has these many
predefined policy, which comes by default with AW, so this is not a custom one. You can also create
your own custom policy, which we will talk
later point of time, which can have your own set of permissions of your applications or services which you
have as part of AWS. So we'll talk about that later. Now going over the rules,
now, as I told you, it's for providing
temporary permissions to your services, users
or appliances. Now a role name will have
a policy attached to it. Every role name will
come with a policy. For here in this particular role called EC two Spot
fleet tagging Role, you can see that similar
name for the policy also, Amazon EC two spot tagging role. This is just to
identify what it is. More or less, most of the policies which
you're going to have, you're going to have roles
associated to that as well, and those roles will have the
policies attached to that. Now, in terms of users, you're going to
have users created, which we will do in a moment, and then you have user
groups over here. These are basically groups which is users are going
to be part of it. Click on user, click
on Create user, and you can specify the username. In this case, I'm going to create a development user and provide access to the management console; this makes sure that you provide access to the AWS Management Console. Now here, are you providing console access to a person? The recommended user type is a user in Identity Center, but I want to create an IAM user. You can also create the IAM user's password in this same step, where you can choose an autogenerated password or a custom password. So I'm going to give a custom password over here; I can see that the password is specified. Do remember that the only time the password will be shown is when you are setting it up; after that, there's no way that you can retrieve the password from the user tab. You can see "user must create a new password at next sign-in"; just disable that to stick with the same password. And then Dev@12345678 is the password. I've copied it to my
clipboard, so I can paste it. Here you can create any
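For completeness, the same user creation can be scripted; this is only a sketch, and the username and password below are the demo values, which you would normally never hard-code in real work.
```python
# Minimal sketch, assuming boto3 and iam:CreateUser / iam:CreateLoginProfile permissions.
# Hard-coding a password like this is only to mirror the console demo above.
import boto3

iam = boto3.client("iam")

iam.create_user(UserName="Dev")
iam.create_login_profile(
    UserName="Dev",
    Password="Dev@12345678",
    PasswordResetRequired=False,  # we disabled "must reset at next sign-in" in the demo
)
```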
kind of policies you can add to a group which
has policy set on it, or you can directly attach a policy to the
existing uh user. Right now he doesn't have
any policies set it up, so just going to
leave it as default, so I'm just going to go
with the next screen. Now, after I go to the last screen over here,
you can see the review. You can see the
review over here. You can review that
whatever you're doing. So as of now, we're not given any kind of
permission over here. So reset password is now, custom password is set, then let's create user. Now this has created
the user and you can actually see the
password over here. Now return to the user list. Now do remember if you skip
this step to the next step, then you will not
be able to retrieve the password anymore time. So you can see that pretty much when you click
on this, right? You go to security Credential, um there's no way you can
actually retrieve a password, so you can see that over here. It says that this
user has never logged into the console because
we never logged in. So when you click on
Managed Console Ax, you can revoke Consolex, you can reset the password, but there is no option of
getting the password from here. There's no way, not
just from here. The only way you can
do is you should have downloaded that file which
it was asking you earlier. So apart from that,
you cannot do it. But the other thing you can
do is in resend the password, you can create a custom password and then put the
password over here. That's another option
over there for you. Now, you can also set a
multifactor authentication for this user by clicking on ASI
multifactor authentication. In this way, you can use a
Authenticator app over here like Google Authenticator
app and you can set up your authentication
for your dev user. So now, that's really up to
them whether they want to have a multi factor authentication
able for that use or not. As of now, let me actually
give a try over here. Let me just open a new tab. Let me go to aws
dot amazon.com. Now, you just click on Log in. A few things are required over here for you to
successfully sign in. You cannot use the root user, so you need to give the
account ID over here, which you can take it
from this location. Put it over here, Dev. Password is Dev at rate of one, two, three, four, five,
six, seven, eight. Then just click on sign on, you should be able to
access your AWS console. But do remember,
you're not be able to access anything over here because you've not
given any kind of policies which will
let you do anything; you cannot actually successfully create any bucket or anything like that. You don't even have permission to view the account details — you see that a permissions error is shown. So you cannot do anything over here. You can look around at the console; that's the only
thing you can do. But those performance
related items, all those cost related items, whatever the EC two instances
you have out there, you'll not be able to see it, because this is just a dummy
user which you created. So you will not be able to
see anything over here, you'll get red
colours everywhere. Now, how do I give permission? There are two ways
you can give permission to this user. You can assign the permission directly under Permissions, using Add permissions over here; we'll talk about inline permissions at a later point of time. So you can attach the permission from here. You can give administrative access if you want administrative access for this particular user. It will give you Resource: "*" and Action: "*" — the asterisk means everything: the resource can be everything and the action can be everything. So an administrator will have permission over every aspect of your account. But there are some things which are reserved for the root user that even an administrator cannot access; remember that. So it's not like you will have total control as with the root user: an administrative user is not equal to the root user. The root user has different privileges; we will talk in a later class and try to understand how the root user is unique compared to administrative access. Now, for the developer, you need to choose the services for the application. So you can select S3.
What do you want to do? If he wants to have full access
on S3, you can give it — say this developer is focused on S3 administration or tasks around that. If this developer needs EC2 instances as well, I've selected that option too; if I just select EC2, I can give EC2 full access as well. I can combine multiple policies for the developer and give him that. So if he needs access to Lambda as well — sorry, I got the spelling wrong — we want Lambda as well, and you can give it full
to be just a analyst, he's just looking for data. Then you can give all these
option with read only option. You have multiple
options over here. I mean, not to complicate it, because you have access to each segment of the resources over here: you have Lambda roles, replication, MSK, which is your Kafka, and then you have Kinesis. Likewise, you have the smaller parts of the Lambda service over here. But not to overcomplicate it, just use the full access and read-only access options; you have the options over
here which you can pick from. If not, you can
actually go back to the group user and you can actually create
a group over here. Here I'm going to create
a group for developers. Developers. Now, this
group is going to have this particular use
name I've selected and then I'm going to have to search
for Lambda full access and EC2 full access, so that they can create servers for their VMs, and then they can have full access on the S3 service as well. So they will have access to create buckets and things like that, because they can put their data in there. I've created the group, and I've added the user while creating the group itself.
this page now. Now, you will see that
he has limited access over I'm sorry full access
over EC two instance. Now you can launch instance.
There was no issues. Never popped up before. And when he is trying to create an S three
application over here, it should work for him as well. So previously, it was not able to view
these items, right? So now, you can view that. You can get inside some of the buckets as well
to see its content as well. You can also create buckets over here to add some items in here. Likewise, you can do all
the options which she was prohibited from earlier. So this is what the users and rules,
policies is all about. Policies is basically made out of the resources and
the actions you can perform for the resources and that actually defines
a service or, um, you know,
whoever is trying to access the AW services. So user is basically a tool. So if you want to create
a user for a service, for example, as I told you on the previous example
about open shift, right? Now, if you create such
a user for open shift, so I'm going to create open
shift, um, service access. So here, do not provide the management console because open shift doesn't require
management console. So previous versions of the same IM tool had option over here where you can
select programmatic axis. But it says over here if you are creating grammatical axis, go through access key or
specific service credentials. So we have to go
through another step. I'll tell you that. So here
I have this group over here. I'm not going to
go with the group. Now here, it was telling me it needs administrative access and needs access to
all the services. It doesn't require
root user access. None of the services would require you to have root access, so do remember: do not give your root access to anyone, and do not create root access keys. I will tell you how to do this properly in just a moment. Click on Create, and then
you see that user created. Now this user is basically an administrative
user because it has administrative
policy attached to it. Now I'm not making it part of a group because
I want to have this separately from
my existing um, you know, groups which
I'm creating for people. So now, this particular user called OpenShift service access is for that tool's access. Now, how do I hand over this username? I know the username, but what is the password? I need to give this information to OpenShift on the other side. This example applies to any third-party application; it doesn't have to be just OpenShift. Now, multifactor
authentication, it's not for open shift, because that's a second level of authentication for your user. So that's not it. Access key, very important. This is the one. So you can see that use
this access key to send programmatic calls to AWS from the AWS CLI"; that is the use of it. So when another tool is connecting to AWS, it will actually use the access key over here. So click on Create access key. And from here: is it for the command line interface? No. Is it for local code? No. Is it for running an application on an AWS compute service? No, and this is not service-to-service either — note that a user can also be created for service-to-service access, for instance between an ECS service and a Lambda service. Third-party service? Yes. This is OpenShift, which is a third-party service, so I'm selecting third party. There are other options, like an application running outside AWS, for example in your own infrastructure or your own data center, or "other" if it's not listed. So: third-party service, I understand. It says it is best practice to create temporary access, like an IAM role, rather than long-term access keys and so on, but the access key is what is needed for OpenShift here. So set a tag value and create the key. Now, this is the only time
it's going to show you the access key and the
secret access key over here. These are the two items which you need. When you click on Done, this is going to be listed over here. You will see, for this OpenShift access user, that you've created this key, and the secret will not be shown again. You have to deactivate, delete, and then re-create it if you forgot the secret; there is no other option whatsoever. All right. Now I'm going to delete this because I don't need it. Here I have to copy and paste the key ID to confirm. So: first deactivate it, paste it, and delete it. That's how you delete it. And here, I'm going to
remove this user as well, and there's no charges on
the number of users you're going to have in AWS. So you can have any
number of users. This is pretty much how you can use this. In another video, we will talk about identity providers, because this video is going long. Thank you again. I hope that you understood the hands-on section on how to create users and roles, and that you were able to relate and reconnect it to the previous topic, which is very important for us towards our exam, because there will be at least three to four questions coming from this particular group of activities. Thank you again. I'll see you in the next one.
36. Labs AWS Knowledge Center: Hi, welcome back to the next section. In this section, we are going to talk about identifying where you actually get a lot of support in terms of AWS security information, and we are going to look at the multiple ways of getting this information. So first, we're
going to talk about AWS Knowledge Center. Now, just go back
to document over here in terms of our cloud
practitioner or exam guide. So here it is very
important for us to explore these services
or features of AWS. Some of it talks about understanding where you can gather more information about AWS security, like logs, the Security Center, and the Knowledge Center. We're going to look at each of them in separate videos, and then we're going to cover all of this as part of this training. So now we are going to start
with the Knowledge Center. Now, the Knowledge Center is not within your AWS console; you need to go to Google and type "AWS Knowledge Center". It is part of AWS re:Post, and one of the sections there is the Knowledge Center. You can also get to this information quickly from the console. For example, say you want to know more about security: when you type this — sorry, there was a typo here — "I want to know about security", one of the results would be about documentation and knowledge articles. This basically goes to the Knowledge Center, which actually redirects to AWS re:Post and lands on the Knowledge Center. Any kind of question you ask in that chat will be redirected either to the documentation, which goes to docs.aws.amazon.com, or to the re:Post Knowledge Center. The Knowledge Center is going to answer most of the common questions, so frequently asked questions are right at your disposal. This is going to make your life easy in terms of finding things. So you can directly access the Knowledge Center and search it exactly like this, or you can do it from
your console as well. I've shown you both options. Let's get some understanding of the Knowledge Center: it is a comprehensive collection of articles and FAQs that covers a wide range of AWS services and topics, including security best practices, common issues, and troubleshooting tips. Now, how is it helpful? It provides you detailed answers to frequently asked questions about security configuration, troubleshooting, and compliance. Now, do remember that
the Knowledge Center is not just about
security stuff, but it involves overall, all those things which
you do normally on AWS. And as you can see over here, this is all part of what
you normally do on AWS. So it gives you the
complete step by step information on how you can actually work with an
issue and resolve it, as well as some easier ways for you to approach it, for example by writing a CLI command. So it's definitely worth giving it a shot if you want to know about something. For example, how to secure an EC2 instance: when you type a
question like this, you kind of get the
related articles which was given earlier. So, for example, securing
my EC2 instance. Now, you can see that there is one answer and there are 444 views, and you can see that this was asked by this person — that ID. Now, it says: Hi, I'm
pretty new to AWS. I have full admin
access to Console. I have four EC two
instance DB Proxy, UI, and API, and it is asking you, how can I secure it. So now you can see the people
are answering over here and you can pretty much
see the ways of doing it. So the questions are over here. Now, if I'm going to do the same thing on the Knowledge Center, you can ask this as part of the Knowledge Center. When I search it again over here, and filter by Knowledge Center rather than all content, you can see some options which are available from the Knowledge Center, and one of them would be about securing connections to EC2. So you can see that one. So you have a questions
section where it's answered by people like us. There's Knowledge
Center, which AWS writes article about it and
different ways of doing that. And you can actually write a comment about this — whatever comment you want to write about the post — and AWS, whenever they are available, will be answering it, or fellow community members will also come over here. You can see people have commented over here as well. So, pretty much, this is
the knowledge Center, and this is how you can use it. And yeah, so you
can actually get a real life thing also
because there are people coming in and
asking questions like, you know, some kind of a forum. So that's what the question
section is all about. Thank you again, guys,
for watching this video. In the next video, we will
talk about the second one, which is AW Security Center.
37. Labs AWS Security Center: Hello, guys. Welcome
back to the next video. In this video, we are going to talk about the AWS Security Center. What we can say about the Security Center is that it is a central hub for AWS security-related information, including compliance reports, data protection guides, and security bulletins. How it helps is that it basically provides you resources and documentation on data protection, incident reports, and achieving compliance with industry standards such as PCI, HIPAA, and GDPR. Now let's get to the hands-on part for this particular center. This is, again, not part of your existing console services; it's going to be outside of the console, which is, again, something you can Google, or you can type aws.amazon.com/security. Now, the name: the Security Center has
changed to AWS Cloud Security, so that's basically what it is. Now, all the features which we spoke about are all the same, so you can see the strategic security over here, or identify, prevent, detect, respond, remediate. These are some of the options which you have over here in terms of working with them. You can also see — I'm sorry about the background noise — that you have multiple security services over here, which talk about the different free AWS Cloud security trials, which you can click on to get to the trial version, but the URL is not working for some reason, so ignore that. So you can see other options over here in terms of identifying
security compliance. I'm just trying to
open the URL again, but it seems like
it's not working. So you can see that you have Identity and Access Management as part of it, Amazon Cognito, Identity Center, Directory Service, and other applications which are part of your security, identity, and compliance services, all over here. Now, you can also see the
featured solutions on AWS, and you can see some applications which we have already seen, like WAF, and how AWS automates responses for your security and things like that. You can also see
the customers over here who has been
using AWS security. You can also see the
compliance over here, which talks about the
compliance audits over here. You can actually take
a course to understand about these compliance
requirements and you can also go about data protection over here to understand about
the data protection. You have also got a blog
over here which tells you how to work with some of the more complicated items; as you can see, the latest blog is about implementing relationship-based access control with Amazon Verified Permissions and Neptune. So likewise, you have a
lot of articles which will actually help you step by step towards working
towards your goal. So now, when you
click on the article, you can actually see the
complete information, how it's going to help you
some architecture diagrams, as well as some of the
action item over here, which is going to be part
of that particular blog. So likewise, you have blogs, and you can actually
see the partners over here and you can also see
additional resources. So this is your security center. Now, I want to
cover one more item which is going to be the
security blog over here, which is the same
thing which we just saw just now about the
AWS security blog, which is a real world
situation where AWS employees would be writing the blogs towards solutions and the
architecture for them. You can also see the related products, the learning level of these blogs, and the post type; there is a segregation over here. So we have covered two items over here: one is the AWS Security Center overview, and we also covered the AWS Security Blog as well. The questions would come in over here along the lines of: how do you collect the latest information about AWS security? This could be one of the options which you will see. The Security Center and the security blog are where you collect the latest information about security in terms of compliance and data protection, and then you have other resources as well, like security best practices and security bulletins. Likewise, you have all those options over here on AWS Cloud Security, which is called the Security Center. Thank you again for watching this video. I'll see you in the next one.
38. Labs AWS WAF: Hello, guys. Welcome
back to the next video. In this video, we are going to talk about WAF. There's a service called AWS WAF, which is a web application firewall. Now, let's talk about the SQL injection attack before we understand the WAF firewall. So here, our Mr. Hacker is trying to inject a SQL query into our website, trying to manipulate the server into giving him an answer. Now, as you know, SQL is based on something called a
relational database. What is a relational database is that one thing is
related to another, and that's how data has been captured and data has
been released as well. Now, if you see this
particular group of table records over here, you can see that when a
customer register with a bank, he has to give his first name, last name, email address,
password, address, postcode, full address, then the bank account
number, right? So you're going to get a
temporary bank account number, which is again related to
your bank account number, your savings account, what's
the current amount you have? What are the pending
transaction? What's your credit card number? Then through the
credit card number, you have credit card number, um, what's available, credit use and any loan
on your credit card. Likewise, each transaction
over here will go to another branch which will relate to that
particular information. If someone wants to know about the details
of the first name, what is the last
name, email address, password and all, it will all be stored in the
database like this. So the password has to be stored somewhere, and it shouldn't sit there unencrypted, right? So, for example, an encrypted password will be stored at the end, and that's what is compared against the password you enter to see whether you are authenticated or not. So what hackers try to do is inject into the POST method: they will try to send a crafted query towards the database, because the request ends up at the database anyway. You have application servers, web servers, load balancers, and all the firewalls, but the end request will actually go to the database. So they will inject a SQL statement through a form field and try to push it through your website. Now, if your
website is vulnerable, it's going to come back with the actual result — the end user's password or some other kind of personal information — and that is a win for them, because they will get to know personal information about the person who has an account with the bank. And it's not just SQL injection: there is XSS, and there are some other attack formats as well that are going to be used over here.
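Before we get to WAF itself, here is a tiny application-level illustration of why the injection works and how parameterized queries defend against it; this is generic Python using the built-in sqlite3 module, not an AWS API, and the table and values are made up.
```python
# Minimal sketch using Python's built-in sqlite3, purely to illustrate the attack surface.
# The table and credentials are made up for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('a@bank.com', 'secret')")

user_input = "anything' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload becomes part of the SQL text and matches every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE email = '%s'" % user_input
).fetchall()

# Safer: a parameterized query treats the payload as plain data, so nothing matches.
safe = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_input,)
).fetchall()

print(len(vulnerable), len(safe))  # 1 row leaked vs 0 rows
```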
Now, WAF is going to protect your system. I have a detailed architecture diagram on WAF, so this is going to explain it much better, and you're going to see the hands-on demo on AWS in this video itself. What I'm going to do is first explain this architecture and how your WAF is going to deal with this. Now, you would have
already understood AWS Shield Standard so far, and WAF is going to give you additional functionality over here. AWS WAF is a web application firewall that helps protect your web application from common web exploits and vulnerabilities which could affect the availability of your application, and which could compromise your security — more or less by these hackers getting hold of personal information and then trying to bargain with you for a ransom. That's the reason your web application should be secure. So AWS WAF gives you control over the traffic that reaches your application by allowing you to configure rules that filter or block specific types of traffic. Now, in terms of
typical use cases, where you can actually use WAF: it protects against common attacks — it can block common web threats like SQL injection, cross-site scripting (XSS), and DDoS attacks. For DDoS attacks you have additional protection called AWS Shield Standard, so that will also protect you against DDoS, but WAF can help with that as well. Now it can also give you
additional protection as well. As you know, AWS Shield Standard is not charged and it's enabled by default; it is there to protect the AWS infrastructure, and that's why Shield Standard comes as standard. We have anyway seen
the explanation for it. WAF also does something called rate limiting. When you use AWS WAF, it can limit the number of requests that a single IP address can make to an application. This is very important, because sometimes a DDoS attack, or any kind of SQL injection attempt, can come in from the same IP address, so WAF is going to monitor that and make sure it sends a CAPTCHA or something like that to stop people from hammering the same thing over and over again. So if you go to any
website nowadays — if you go to google.com, for example, and you try to search for a specific topic ten or fifteen times in a row — it will immediately show you a CAPTCHA and ask you to complete it. What's happening is that when people are accessing a website very frequently within the course of ten minutes or so, the website is aware that, okay, I've been under constant pressure from this IP address, so I'm going to go through the firewall this time: I'm going to block this and send it to a CAPTCHA page rather than actually serving it. That kind of redirection happens when you are working with this kind of a firewall. This is called a Web
Application Firewall. We're going to talk about a lot of security-related aspects; it's very much vital. You should understand the naming of it and understand what this particular item is going to be about. I'm going to give
you a clue on this. So when you hear the word WAF, W-A-F: W mostly stands for web in terms of all your computer related stuff. So when you hear WAF, the first thing that should
come to mind, could it be web? Next one, A majorly
stands for application. So wherever you see
A as a acronym, mostly it stands
for application. And then when you're talking about security and there
is a word called F, then it must be firewall. So try to relate it and
try to understand and try to get familiar with these
acronyms because um, you know, when AWS created these acronyms, they had this kind of mentality when they
created it because they wanted to make sure
that these acronyms are pretty much usable and reusable and
people can think about these things when it
comes to the short forms. So do you remember that's a
tip I'm going to give you for every single security
session so that you can understand and how
you can immediately, um, relate to this
particular segment. So over here, you have another use case over here I want to talk about
is the custom rules. As you can see over here, we have managed rules,
white list, blacklist. So what is the use of
custom rule is that you can create your own custom
rule and allow you to block specific pattern
of web traffic, because there are other tools which we're going to talk about later, and those tools can look at your logs, see the pattern, and see which IP address range requests are frequently coming from. When you get to know the IP address range, then by the looks of the public IP addresses you will know which region it's coming from. These days, it's
becoming really, really hard to identify because of all the VPN
services are getting so good. So, um, yeah, there is
a lot of complications and it's going to be more
challenging going forward. And, yeah, there is always a way to identify and, you know, sniff out the error or any kind of issue before
anything happens. But still, the technology
is improving so much. So there is custom
rules over here which you can allow or deny, from a specific
geographical location for, you know, by looking at your HTTP header
information also. So you can look at the HTTP header information
and say, okay, this kind of header information is not part of my application, so you can deny requests with those headers as well. This will stop people from executing any kind of SQL injection, likewise. In terms of integration, there is this beautiful
integration with, um, you know, other
services like CloudFront. You can also integrate it with the Application Load Balancer; that's something which I've already shown you over here. So you can put it in front of CloudFront. CloudFront is going to be a cache service for us: it's going to cache our information, and it's just going to sit in front of us. The easiest way to remember this is basically by the name of it, CloudFront. So it's going to be at the front side of your cloud, and it's going to cache all the information and send the cached
information across. So Route 53 is where your DNS is and CloudFront is where it's going to
cache the information, and AWS WAF is going to sit between the routing and the caching, so that even the cached content will not be served to the abusive, frequently repeating requests. That's the reason why it sits right after Route 53, so that even CloudFront is not utilized when there is an attack.
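As a small aside on how that attachment is done in practice, here is a minimal sketch of associating a web ACL with an Application Load Balancer from the CLI. Both ARNs below are placeholders I've made up for illustration:

aws wafv2 associate-web-acl \
  --web-acl-arn arn:aws:wafv2:us-east-1:123456789012:regional/webacl/demo-web-acl/11111111-2222-3333-4444-555555555555 \
  --resource-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/50dc6c495c0c9188

For CloudFront the flow is slightly different: the web ACL is created with --scope CLOUDFRONT and attached on the distribution itself rather than associated this way.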
All right. So now let's go to a real-time example. One real-time example I faced myself was when I was working for an e-commerce giant back in India. I used to host services behind Amazon CloudFront; I used to have these
services hosted on that, and it was kind of vulnerable to SQL injection and cross-site scripting. So what I did was enable this WAF service, the web application firewall. You can look at the logs and see which IPs are frequently accessing the website, and we can block them as part of the blacklisting. I did that, and it kept that series of IP addresses away from accessing our website. And we got to know, by the
pages they access, right? We got to know whether
they are going to actually buy that IP address
actually goes to checkout, or it just wants to be in the
home page browsing around. You know how we do window shopping, right? Likewise, some IP addresses go around the products, randomly select products, and then just keep refreshing multiple products, where every time you refresh a product it has to fetch information from the database, because product pricing, product availability, all of this needs to be checked, right? So it means extra pressure on the database to get you that information. By this measure of blacklisting those IP addresses, I was able to save a lot of resources on my database, which actually resulted in a lower bill, because the more
the resource you spend, the higher the amount you're
going to pay for AWS. So that's the reason why
because there are transfer fees, there are Internet fees, and all those things come into the picture. Likewise, you can save on the billing if you have less wasted load on your application; meaningful load is all I'm asking for. So here, let's talk
about the key benefits. It's easy to deploy, it's cost effective, and it's highly customizable. When I talk about highly
customizable, I can create my own
rules and I can actually have my white list
and black list of IP address. Cost effective. You
pay only for what you use based on the number of
web application requests, rules applied, so that's
going to cost you that much. And then easy to deploy as you can just deploy
it very easily. I'll show you on the hands on. So let's get to the hands
on of this particular item. Sorry, I just opened something else over here. So let's just type WAF; the service comes up as WAF & Shield, so WAF comes together with Shield. We have seen this on the topic of Shield as well: Shield and WAF share the same service page, so you will have WAF at the top and Shield at the bottom. So yeah, you can see that, getting started with WAF. A new AWS WAF dashboard is available, so that's a new one. And you also have the web ACL access
control list page. Okay, so that's
pretty much it. Now you can customize your CAPTCHA behavior by implementing the CAPTCHA APIs; a JavaScript API is used for CAPTCHA now, which means that, as I told you, if a large number of requests keep coming from the same IP address, you can redirect them to a CAPTCHA. That is also part of it. So you can see that in WAF you have the web ACLs and the bot control dashboard. These are some of the features of it; I think we missed the bad bots. Yeah, so here you have HTTP floods, scanners and probes, IP reputation lists, and then bad bots. These are some of the features, and that's what you are actually getting
displayed over here. So there is the bot control dashboard, then application integration, and IP sets for you to create rules around specific IP addresses. You can create a list of IP sets over here; it doesn't matter whether you intend to block them or allow them, you just create the set with a name like blacklist or whitelist, and then you reference it in a rule. In a rule you can create a rule group and specify what needs to be done for that group: you put in the group details, then set the rules, the capacity and rule priority, and then review and create the rule.
because this training or this certification is not
about you going inside each component and doing stuff that comes in the
professional level, so don't worry about it. So this is a foundation level, so they won't be asking
you how to create a rule. What are the options when it
comes to creating a rule? They wouldn't be asking
you those things. Okay, so, AWS Marketplace managed rules. These are some of the marketplace rules which are frequently used. You can see this anonymous IP protection one; this is from Cloudbric, and it is a managed rule for AWS WAF. It basically gives you anonymous IP protection, providing integrated security against anonymous IPs originating from various sources: VPNs, data centers, DNS proxies, Tor network relays and P2P networks. So if you just click on it, it's going to go to
the marketplace, and it's going to show you
the product detail over here. So what's this um you know, marketplace rule
about and can I use this rule for myself to make
my organization better? Because this is applied to
the organization level, which means that
your entire VPCs will actually get the
using it, the usage of it. But this involves pricing; I can see the charges over here. The usage cost, charged per month for each region and prorated by hours, is $35 per region. Here you can see that the charge per million requests in each availability zone is $0.20. Okay, so that's pretty much it; it is non-refundable, and it's from the vendor. So what
this is all about is, it gives you, um, the list of threats
which is already detected and already
been flagged. So you don't really
have to start from scratch looking for the IP addresses that are penetrating your network. If you're not a big company, you may not really need this; you can do it yourself by monitoring and looking at the logs, and you don't have to pay this much per month if your company is not that big and you don't face that much of a threat. But if you are working in a bigger organization, you can suggest something like this; $35 is nothing for a big organization, and it gives you protection over there. So you have API protection, API security rule sets, and bot protection rule sets. These are something
which you can buy and, you know, pay monthly
with your bill. What this is going to do is add additional security rules on your WAF, and it's going to make sure there is some kind of control in terms of handling your bad bots, your unknown IP addresses, anonymous IP addresses, and stuff like that. So that's completely up to
you if you want to use it. All right? So now
this is the new console. The old classic one is going away; you can still switch to it, but I would prefer you start using the new one, because it's going to be around for a while; a new console usually hangs around for at least two or three years, so you might as well get practiced on it. You have so many categories over here: web access control lists, IP sets, rule groups, and marketplace rules for faster and easier access to existing rules that can be imported in your case. And these are all, you know, approved and vetted
by AWS itself. So that's the ones
on the marketplace. Thank you again for
watching this video; you now understand what WAF is and you understand the architecture. If you have any questions, leave them in the question section. Your review really matters. If you have a suggestion, you can always make
it on the review. You can reach out
to me on messages. You can join my
community and reach out through me from the
community as well. There's so many ways
to reach out to me and make this course
better. It's all up to you. Thank you again for your
patience. I'll see you on the next one.
39. Labs Network ACL: Hi guys, welcome back to the next video. In this video, we talk about network
access control list. Now, ACLs are one of
the important topics for your AWS certification. What we are going to cover is a little bit of theory, as well as a little bit
of hands on on ACL. Now, to understand
network ACL first, you need to understand
the acronym for the same. ACL is called access
control list. So this is even in a higher level than what we have discussed
on the security group. Now, as you can see in
this architecture diagram, please ignore any typos in it, because this is an AI-generated image; even if there are a few, just ignore them. So as you can see here, the security group in this hierarchy sits very close to the EC2 instance, not at a higher level. But when you talk about
the ACL, the network ACL, it is at a higher level
as in it's towards the VPC virtual private
cloud where in which it is much at a higher
level near the subnet. We have already understood what basically a VPC is previously, but do not worry, we will
go in depth later point of time when we're talking about
VPC in much more detail. But as of now, what you
need to understand is that the VPC is the entry point for any kind of communication, and the VPC sits on top of all the AWS services; you can see that all the AWS services are surrounded, with the VPC at the top. Now, what you will be doing at the VPC level is configuring
the network ACLs. Now, do remember that ACL
is a stateless firewall. Like when you compare this
with your security group. Now, security group is
a stateful firewall, where we have already seen some of the behavior: when you allow traffic inside, the corresponding return traffic is automatically allowed back out as well, because it's stateful and it tracks the entire traffic flow. But when you compare the same thing with the network ACL, it's a stateless kind of firewall which doesn't track the connection. So it is vital for you to
specify the inbound connection. And also you need to specify the outbound
connection as well. So if you don't specify
the outbound connection, as this is not monitoring the enter transaction
or the session, it will not allow the
connections to go outside. So if you are allowing
the connections through the security group and you are not allowing it through ACL, your network would
eventually not work. You will not get a
response from the server, even though you are able to send the request to the server, you will not be getting
response because the network ACL doesn't
allow you to um, exit the information
from the server because it is configured
at the top level, okay? Because it is configured
under the VPC. Okay? So you can access
that using network ACL, and you will see that it
goes to the VPC and it's one of the features of
VPC is your network ACL. So when you go into
your VPC dashboard, you will see on the
security level, you will see security group
and you will see network ACL. Again, security group comes
into this level as well, but then it's directly
getting assigned to your EC two instances, which again, reports
to VPC only, but it is much more of a stateful session and
network ACLs are stateless. Right now, you have two network ACLs
configured over here. One is for the X ray
sample application, which you can ignore. The other one is
the default one, which comes by
by default. Now this one has six subnets attached. Previously we have seen that subnets are basically the networks configured for each of the availability zones that are part of a region. I'm in Northern Virginia, and I have six availability zones in the Northern Virginia region, so I have six subnets assigned by default. Now, like that X-Ray sample application, you can also specify
the subnet where you want to host your
content, and likewise, you can be very
specific on that, but we wouldn't recommend that because you can host any of the EC two instance
across any of the six available t zones. So all your items should be part of one network ACL,
that should be part of it. This is a very
specific application and please do not take
any examples from this. Please use the example which is configured
and this would be definitely there for you as well when you look
at your network ACL. Because without this,
if you delete this, your network traffic
will not work. Even your SS such you
will not be able to use. Why? Because this actually
allows those communication, even though it is allowed at
the security group level, network ACL should be enabled for your
communication to work. Now when you look at this
particular network ACL, you will have the
rule over here, which is allow and deny. As this is stateless, you need to specify
what you need to allow, you need to specify
what you need to deny. The network ACL is a stateless firewall that
operates at the subnet level, controlling inbound
and outbound traffic to and from the
subnets in a VPC. Now remember, when you
apply a traffic when you apply inbound rule
or outbound rule, and then you you actually say that I want to associate with all the subnets
which is available, which means all the
able availability zones under that region. So which means that all ciders
are coming in over here, just assigned for each of
this availability zone, which means that these
inbound and outbound rules will be applicable
for all those things. But by default, inbound rule will be the first one
which is over here, which is a and second one which is mentioned
with a star (*). What this means is that the rules are evaluated in ascending order of rule number, and the star rule is always evaluated last, so rule 100 is matched first: it takes this particular rule and allows all traffic inside from all sources. In terms of the outbound rules it's the same thing over here; rule 100 comes before the star catch-all in the ranking system, so it will allow all
the traffic inside. So if you edit this
and you change it to this one to star
and this one to 100, then it will deny all the
request which is outgoing. And it will have this allow, but then the deny will go first. Now you can see that I
made two rules over here. I've given 100 as in to deny
and 101 as in to allow. Now if you see that by default, you can see which
one it picks first. I will obviously deny
all the request, and it will allow
all these requests, but the problem is like
it's coming right next. So it will basically
deny everything which is coming in because it will take in a ranking order. If you have another
rank over here, which actually add another
rule and you just put us one and you say all
traffic and you say allow. You can see that when
you say and quit, if you go outbound rule, you can see that rule number one goes there and it will
allow all the requests. Which means that it
is a ranked base. So you're going to put
another rank in front of it. There's no other
rank before one; one is the start of the ranking, which means that rule is the first one that's going to be evaluated, and it will allow the traffic. Here we are talking about all traffic, but you can also customize the rules according to your requirement and configure them accordingly.
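For reference, this is roughly what adding such entries looks like from the AWS CLI. The ACL ID and CIDR below are placeholders, and the rule numbers are just example values:

# Rule 100: allow inbound HTTPS from anywhere on this network ACL.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=443,To=443 \
  --cidr-block 0.0.0.0/0 \
  --rule-action allow

# Because network ACLs are stateless, the responses need their own outbound
# rule on the ephemeral port range.
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --egress \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=1024,To=65535 \
  --cidr-block 0.0.0.0/0 \
  --rule-action allow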
Now let's just roll this back: you just have to remove those two extra rules, make rule 100 an allow again, and save the changes; that should be your rollback in the outbound rules. Perfect. Now this works
as the default one. This star is not something
which you would configure, it would come in as default. So if you don't have any
of this rule mentioned, it will go with the default
one, which is deny. But remember, we need to specify deny when you are denying connections
to your server. That's what it's
implying right now: you need to explicitly specify what to deny and what to allow. It's not like our security group, where anything you don't allow is denied by default with no visible rule; on the ACL that's why you have this unremovable entry over here which says deny with a star. This is a kind of helper: if you remove the numbered allow rule, this catch-all will deny all the
connections coming in. Which means that none of your services like
Lambda service, whatever service which you're hosting as part of this region, you will not be able to
access it from the Internet. Because you're denying
everything at the VPC level. Okay. Unless you
have another VPC configured and you have that other VPC pointing
to another ACL. So you can see that this
is another VPC name. You can see that. So that's a completely
different VPC. So that applies on
a different rule, and that really is advanced right now, so don't
worry about it. Thank you again for watching this video. I'll see
you in the next one.
40. Labs Security Group Part 1: Hey, guys. Welcome back
to the next video. In this video, we need to talk about the topic
which is going to be describing AWS security
features and services. For example, we can look
at three of the um, you know, services or
features in your AWS. Firstly we're going to concentrate on the security group, and then we're going to do ACL and WAF. So first, we are going to understand the security
group over here. Now what is security group? We are actually configuring this security group as a firewall feature
for EC two instance, which actually allows
inbound traffic from the Internet
and outbound traffic from the server to the Internet. So there are two ways traffic, but there's always
two way traffic which is inbound traffic
and outbound traffic. So if you take a
road as an example: a road is normally designed as a two-way road, but sometimes a road can be one way as well. We don't normally call it a 'two-way road'; we don't have the habit of saying that, because a road is designed for two ways by default, with one lane for the outgoing traffic and another lane for the incoming traffic. So when you decide that a
road should be one way, then we specifically say
that this is a one-way road, meaning there will be either outgoing traffic or incoming traffic only. Okay? So likewise,
when you create EC two instances you kind of have this security group
configured by default, allowing your SSH connection, you have the secured shell. So that connection to be
allowed from anywhere. So this is something
we have seen when we created an ECT
instance, right? So this is an inbound traffic. So anyone around the world can actually connect with
your server using SSH. That's what we are saying.
So let's just look at, you know, the hands on session. I'm not sure if it's
got logged off. Yeah. Just give me a second. All right, I've
logged into a server. So if I go to EC two
Instance right there, so I should be able
to see this network and security inside that you have this security
group over here. Now, when you click
on security group, you kind of see multiple
security groups over here. So we have launched EC
two instances, right? So if I look at the running ones, I should see the t2.micro. This instance would have
a security group on it. So if I just go to security, I'm going to have a
security group over here, so you can see that this is the security group
name over here and you can see that the
security group is assigned over here for
inbound and outbound traffic. Now, I don't think this is the one which we use to create. Not sure. I think it could be any of the
instance doesn't matter. So I'm going to show you
default one which is over here, which is created as default. And this allows all traffic
as an inbound rule, and this is outbound
all traffic. This is a classical example
of a default security group, and a security group ID will always start with 'sg-' followed by the unique group ID of your security group. I want to show you a complex one as well in terms
of security group. By default, the security
group denies all traffic. So do remember that. You have to enable traffic based
on the requirement. So when you talk
about security group, we talk about one
of the complex ones which we configure for
our ubernty service. Again, this is not
part of your exam. This is just to ensure that when you
go to the next level, it's much easier for because rather than just learning
for the certification, it is always good to
learn for the future as well because I
really like that. So we have something
called control plane and Cuberts and
compute plane uberts. I actually can see that I have instances configured
for the Kubernetes, and I want communication
between the ubernts cluster. Also, the Cubont
axis from my system, as well as the console axis for the Cubernt from my system. I need to enable all
these stuff, right? So as I told you, by default, everything
is disabled. You need to open up
the port numbers. It's like security groups is more or less like a firewall, which happens within
your instances. So if you just look at the security group configuration for the compute plane and the control plane, you will see the specific configuration which I have done. Let's go one by one. Let's talk about the compute plane
because that's sorry, let's talk about
the control plane, which actually is
the master node. When you look at this, there are eight permission
entries over here, which is configured as part
of this control plane. Now when you talk
about inbound rules, now inbound rules talks
about the incoming request to the server which this control plane security
group is assigned to. If I have assigned
this instance, let me open it on another tab. I'm assigning this
particular control plane security group, and if you go to Security, you can see that I have assigned this control plane group to my Kubernetes controller, which is the master server.
I'm allowing or accepting requests
coming into the server. One is the port number 22. This is mandatory, or else I
will not be able to SSH from my Windows system to the AWS server and access that server. So here you can see
the IP range here. So this tells you that anyone, this means all IP ranges as a source is accepting connection to this
particular security group, which is listed over here, just port number 22. So very much important that um, it is not just about
the IP address, but it is also the
port range over here. Which is pretty much
you need to put across. Sometimes when you choose default services
like SSH, okay, you don't have to give a port number, because by default SSH maps to port 22. There is a dropdown which has a protocol mapped to a port number, so we will call this the protocol and this the port number. We assign those and we put an IP range across, saying this is the IP address range I want to accept.
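As a rough CLI equivalent of clicking 'Add rule' in the console, here is a minimal sketch; the group ID and both CIDR ranges are placeholders I've made up for illustration:

# Allow SSH (port 22) only from a single admin address.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 203.0.113.10/32

# Allow the Kubernetes API port (6443) only from a private network range.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 6443 \
  --cidr 172.31.0.0/16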
Okay. Now, let's talk about these entries over here. First, let's finish all the 'anywhere' ones, and then
we'll come there. Now I have two more items
over here: port 6443 and a port range. 6443 is your Kubernetes cluster communication port, and that is also listening to everywhere. 30000 to 32767 is the node port range. When Kubernetes runs on a system and you want to browse an application which is running on a Kubernetes node, we expose it to the Internet using this range of port numbers, 30000 to 32767; this is called the NodePort range. When we use this kind of port, I want to see or browse the application running on that particular Kubernetes node, and for me to browse it from my
desktop from my laptop, I need to have this port
number exposed so that I can actually browse the service which is running on Kubernetes. I will have to expose this
on the control plane node as well as on the compute plane, because my worker nodes will actually be running on the compute plane; the worker nodes which actually run my Kubernetes pods will be running there, so I need to expose it there too. But sometimes,
the whole world, right? Like for example, over here. So what this means is
that this IP address is tightly framed
within the AWS network. Anything within the AWS
network because AWS has this IP address range assigned to all the
internal servers. So here what I'm doing is I'm giving some custom
port numbers over here, like ICMP which will ping other servers
between each other. And again, you need to
enable ICMP, or else you will not be able to ping it; the server would be reachable, but the ping will not work because ICMP is disabled. So likewise, you have to enable such protocols and port numbers. For example, here I'm enabling UDP
port numbers over here, and this I'm saying, it should be within
the AWS network by giving the local IP address, the private IP
address for the AWS. So what this means is that you can configure it
for the whole world. You can configure it
for the local network, which is happening
with your subnet. You can also configure
it for any kind of range or source
of IP address. You can actually
configure it for your laptop IP address
because when you click on the edit option and try to add a rule over here, I will show you: if you just select SSH, there is an option called 'My IP', and that is the IP address which I'm currently using on this system; you can see the IP address is filled in over here. But unfortunately I have a dynamic IP address, which means that every time I restart my modem, restart the system, or connect to a VPN, my IP address changes. So it's not really practical to pin it to that IP address, because it's a public IP address and it can change any time, and I would just have to keep on modifying the rule. That's the reason why we use 0.0.0.0/0, which means anywhere. So you will have an option like this, and you select it, and it basically means
anywhere IP version four or anywhere IP version six. So, likewise, you choose this. I hope this extra
information was helpful for you to understand a little more in
detail about this. If you have any
questions, leave it on the questions section. I'll
see you on the next one.
41. Labs Security Group Part 2: Hi, guys, welcome back
to the next video. So here we are continuing on the security group over here. So what we have done previously
is to allow a connection. Now, naturally the
question would be about if you want to
deny a connection, what are you going
to do about it? So if you are planning
to deny connection, you just need to make your
allow very much a strict rule. So for example, here I
am adding a custom rule, and here I'm just saying that I need a protocol, a custom protocol as well; let's say the TCP protocol. And here I'm giving a port number, say port 2525. So I want to deny connections
people and I want to enable connection to
specific group of people. So as I told you earlier, you can actually do that as
part of your security group. There is no explicit
way of denying people. There is no option of you explicitly denying people
from accessing your service. But what you can do is you
can specify who can access it rest of them will
be denied by default. So you don't really
have to mention a denying list over
here because if those people are not part of your particular IP
address list over here, they will be denied by default. So there may be a
tricky question. So is it possible
for you to deny certain users as part
of your security group? Even though there is
no specific option like deny in security group, but by default, it
is deny, so yes. You can actually only specify
whoever is likely to access your this particular
port number to your instances which has the security group
applied on top of it. So the rest of the people
will be naturally denied. So this is the answer. Okay? So this is how they may ask you in
terms of question. So you define a rule which
allows or denies specific IP address protocols
port ranges, okay? Security groups are stateful, meaning the traffic
is allowed in, the corresponding
outbound traffic is automatically allowed. So that's what
stateful is basically. So what this means, let me rephrase
this one more time. If you are allowing
a traffic inside, that will actually affect
your outbound as well, and it will basically tune your outbound to allow
the outbound naturally to allow that traffic outside to the specific IP
address or series of IP address which
you have specified. By default, outbound rules
are going to be anywhere. The server can
communicate to anywhere. They can see that the
default outbound rule is allow connections anywhere. The connection
going outside from your server will be
by default anywhere. But even if you remove this, responses to allowed inbound traffic will still be let back out, so you don't really have to specify it, because this is stateful. Okay. So in a typical use case,
you can use use this kind of security
group is if you are configuring a
security group to restrict only to a
certain IP address, like example, your
company VPN address, it allows only traffic
like HTTP and HTTPS, and you can be also
specific about that. So you can only allow
HTTP and HTTPS. That is also something
which you can do over here. So when you do add
rule over here, you can also has a search
for HTTP. And HTTPS. You have to give two
rules over here: one for HTTP and one for HTTPS. By default, you can see the port numbers are chosen over here, so you cannot modify these port numbers; what you choose is the source. Rather than allowing from anywhere, you can put your company's VPN IP address range; your network administrator would have the IP address range of your company, and you can put that range over here. So only people from your company can actually reach the instances behind this security group.
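Here is a minimal CLI sketch of that idea; the group ID and the company VPN range are placeholders for illustration:

# Remove the wide-open HTTP rule if it exists, then allow web traffic only
# from the company VPN range.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 198.51.100.0/24

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 198.51.100.0/24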
That's pretty much what I wanted to cover in this video. Thank you again
for watching this. If you have any questions, leave it on the question section.
42. Labs Third Party Security Products: Hi guys, welcome back to the next video. In this video, we are going to talk about understanding the third
party security products, which are available as
part of AWS marketplace. Now we're going to understand the different types
of availability or the third party security
products which can be used along with your AWS security,
the inbuilt security. Now, why would we use third party products? A third party product enhances the security of your existing AWS setup, which is already in place. You can use it for extra monitoring and prevention across network, application and data security, as well as identity and compliance, and for third party threat detection and response. This comes with years
of experience where the third party provider
has actually worked with different kind products
and their products will be showcased as part of the third party security
on AWS marketplace. We will be talking
about the demo in the final part of this video, just a quick demo on where to find the third party
security products in AWS marketplace. I'll show that around to you. But then more or
less, we're going to talk about this each concept, and we are going to talk
about the products, more or less like some examples, also, I will be giving you that. Let's talk about the
network security. So as you know that we have several security concepts
in terms of your AWS, which comes by default. On top of that, you can also buy AW security products in Amazon marketplace like
products like firewalls, intrusion detection systems, intrusion prevention systems to monitor and control
your network traffic. This will give you products from experienced manufacturers like Check Point and Palo Alto Networks; likewise, there is a lot of software which will help you with IDS and IPS, that is, intrusion detection systems and intrusion prevention systems. So likewise, you have other
network security products. Also, we will check
that on the demo, which you're going to
be part of this video. The next one is about
application security: tools that protect applications from vulnerabilities, such as a web application firewall (WAF), which is already part of AWS, DDoS protection, and runtime application security. Most of it is already there as part of AWS, but there is still the option of enabling third party security products as well, like for example Trend Micro Deep Security, Signal Sciences WAF, or F5 BIG-IP Advanced WAF. Each combines the years of experience which the product owner has, and they put together a product for you to use. Again, the licensing and
and you may need to pay a little extra for that
because you'll be directly buying this
product from the provider. And for activating
these products, you need to have the root
user because whatever the activation of a product in a marketplace or
subscription enabling, it needs to be done
from a root user. Now let's talk about data
protection products. Now, the solutions for
encryption, data loss prevention, key management, and
security storage is already part of
your AW services. We have discussed some of
those in our training. And you can also go for an external third party
security provider as well in this situation where you can configure that as part of AWS. And these are some examples
I can give you, such as Symantec Data Loss Prevention, Thales CipherTrust (that's another company), and McAfee Data Loss Prevention. Likewise, you have companies which can give you data protection on top of what AWS already offers. In terms of identity
and access management, we have single sign-on, multi-factor authentication, privileged access management, and so on. You can also use third party providers like Okta or Ping Identity; you can use their software and products in association with your existing identity and access management in AWS. In terms of compliance
and governance over here: tools that help manage compliance requirements like GDPR, HIPAA, SOC 2 and more, which help you manage that information. You can also work with products such as Splunk Cloud, AuditBoard, or other compliance suites. Likewise, these third party products are also available, and you can pick them
up from AWS market. Now how this is going to work, that's going to be
up to the products. So each product will have
a different way of working and different way of going around it with their
options and stuff, which changes for each product; we'll also quickly touch on that in the demo session. The last one which we're looking at as part of the third party
security product on a whole scale is your threat
detection and response. Now, security products
for detecting and responding to potential threats
like anti virus software, endpoint detection and
response and SIEM solutions, which enables more
advanced security in terms of detecting a threat and
responding to a threat, both at the server level as well as the service level on AWS. Now you can use third party products like CrowdStrike Falcon, or you can use Splunk
Enterprise Security. Likewise, you have multiple products which you
can actually use with and direct the threat and have a response
to the threat. Now, in terms of the benefits of using third party security products: you get a quick deployment solution which is preconfigured and optimized for AWS, because these products on the Marketplace are designed for AWS, and they are trusted and approved by AWS; that's the reason why they're there. Then you have seamless integration with your existing services on AWS, like CloudTrail, CloudWatch, GuardDuty and Security Hub, so that you can have an overview of all the third party products from Security Hub, which will give you an overall picture of them. Then you have the pay-as-you-go model, which means there are some products which come with a 30-day free trial period as well (I'll show you that), and then there are products where you pay on a monthly or hourly plan, or you can even go
you're selecting. You will have a vendor
support as well. So for this third
party products, so you can reach
out to them through email or their
support chat channel. So basically, they offer
a comprehensive support for uh the products they are selling it to you because you
are paying for the product, which includes their
support as well. This is pretty much I
want to talk about it. Let's just do some
marketplace over here. So AWS marketplace is where you can find
buy Deploy software. So now over here, I have
managed subscription over here. I don't have any
subscription as you can see. Now I can go for Discover
products and I can discover products over here and then look for different types of
products over here. So you have the
catalog over here, which shows the different types of products available, along with the publishers and the delivery methods. You have the pricing models: some software is free, which means it is given for a demo purpose or something like that, so you start free and eventually it becomes paid. Then some are usage based, and you can see the pricing unit based on users, hosts, custom units or tiers, and then you have the vendor insights and security
profiles over here. Then in terms of certificates, you have the certificates
over here where it is a compliance certificate or it could be about the
specialities on AWS, for example, security
speciality over here, which actually shows you the security related stuff
which we just spoke about. You can see here Trend Cloud One, which is basically a CNAPP capable of protecting throughout the entire cloud environment; this is a security product which is integrated with DevOps tools as well. You have another type of
security platform over here called Cloud
Infrastructure Security, which correlates a vast number of security signals to trace real infiltration in terms of attackers coming in or
breaking into your system. You have Datadog Pro as well, a SaaS-based unified observability and security platform with full visibility and health monitoring. You have Aqua Security as well. Likewise, you have so
many security features available over here. Palo Alto, so this is something which we
discussed earlier. So you can see
that it comes with 30 days free trial to Paige, and you can see that this is a cloud next
generation firewall, which is best in class
network security. Which is using
artificial intelligence and machine learning to stopping zero data exploit
and all those things. It has got one review
from AW new user, I guess, not happy
customer, I guess. Likewise, you have security
stuff, which talks about it. This is how you can check
it out and you can also see some managed security service
specializations over here. There are four of this one
over here, the Deep Watch. So likewise, you have so
many interesting security, third party applications. But if you would ask me, should I really go for a third party? It completely depends
on your application. If you're not happy with default security which
has been given by AWS, you can always go
for a third party one which will additionally give you more security aspects and benefits when you
are going for them. Thank you again for
watching this video. I hope that it was helpful
for you to understand. So if you have any questions from this, you know
how to answer them. Mostly the questions will be along these lines: they'll ask you about third party security products, such as 'I want this extra bit of security; is it possible for AWS to integrate a third party vendor?' They may ask for an overview of it. They wouldn't go in depth about each of these, and they will not ask you about the specific product examples, but they may describe a scenario and say, 'I want a custom third party security product; can it be enabled?' Likewise, you will get questions on this particular topic. Thank you again for
watching this video. I'll see you in the next.
43. Labs Trusted Advisor: Hey, guys, welcome back
to the next video. In this video, we
are going to talk about AWS Trusted Advisor. It's important that you understand a little bit about it. Trusted Advisor is a real-time service that helps you optimize your AWS environment by providing security, cost and performance recommendations. Trusted Advisor is going to give you these kinds of real-time recommendations
towards your services which you're using currently. Now, if you understand
how it works: it monitors your AWS environment and provides guidance on improving security, including recommendations for enabling multi-factor
authentication, security groups,
and IAM policies. It's like a tool which
is offered as a service. You can see in this architecture itself that Trusted Advisor will access all your services and give you an overall picture of your
services and gives you um, the suggestion for that
or recommendation for it. So it is a service, so you just have to go and
search for trusted advisor. Now this should come in as a service as you
can see over there. So I've already opened
it for your convenience. Now, in terms of recommendation, you can see that trusted
recommendation is that it is recommending
zero actions for you. It has investigated zero, and it is basically
excluded, not, you know, included
excluded I mean, I've not even done any checks. So now you can actually go
ahead and download all checks, which will bring down all the
checks which is required. Now, you can see that
upgrade your AW support plan to get all the trusted
advisor check. So currently, I'm
on the free plan, so currently there is nothing
of a check to be done. And to do that, I need
to upgrade the plan to a level where there
is a trusted advisor. So now, this is for
Trusted Advisor. You can see that on the basic plan I'm on, the Trusted Advisor recommendations only include the service quota and core security checks; when you go for Developer, Business or Enterprise support, you get the full set of checks (Basic and Developer are more or less the same here).
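As a side note, once an account is on a Business or Enterprise support plan, the same checks can be pulled programmatically through the AWS Support API. A minimal sketch, with a placeholder check ID, would look like this (the Support API lives in us-east-1):

# List all Trusted Advisor checks and their IDs.
aws support describe-trusted-advisor-checks \
  --language en \
  --region us-east-1

# Fetch the latest result for one check, e.g. the security group check.
aws support describe-trusted-advisor-check-result \
  --check-id exampleCheckId123 \
  --language en \
  --region us-east-1

On the Basic or Developer plans these calls are not available, which matches what we see in the console here.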
Now that I've downloaded all the checks, when I refresh the checks, let's see if any
cost performance. Now here, cost performance is one of those recommendations. Now, again, you see that
this requires upgrade, so it's not available for us. So what it does is it checks
your using capability of your cost for each of the services and
advises you based on that. So trusted advisor might
recommend you to delete unused idle resources or reserve capacity in case it detects that it's not
being used effectively. In terms of performance
check, again, same thing. It's going to check
the performance of your application like
provision throughput, monitor for overutilized
EC two instances, underutilized instances as far and it's just going
to give you that I think cost optimizer will give you the unutilized resources, and performance will give you the overutilized
resources saying you to increase the memory, CPU or the hard disk space. Um, in terms of
security over here, so now this is something
which is available or us. Now you can see that this
is something which is free. So now here, security group, it is coming up with an
exception over here. So it is saying like, 12 of 27 security group allows unrestricted actors to
specific port numbers. So that's a security risk, and it actually gives
you that information. So for example, 80, 25, 443 and 465. So this is something which
is green, which is okay. And here's the red one which says security
group is attached to the resources provided
unrestricted access for ports 2021 22, which is your SSH port
number, I guess, this one is. And there are other port
numbers which we have enabled. It has addressed this
and it has given us, that this is a red
security alert that this may be a problem. Now it is also giving you the a list of regions over here where these port numbers are enabled now. This is weird. I've
never used AP South. Now you can see that
it has given me this particular security group has this 22 open on this one. This is a security
risk over here for me because I've never
used this particular region. Now you can see that other
security groups over here, which comes up and it says
you should you can give 22, but then it wants you to
be specific on the IPRs, but then it becomes
chaotic for me to get the IPRs from my system because I'm not using from
an organization. If I'm using from
organization that would have a specific IPRs because I'll be joining to the organization. Using a VPN or something. Then from there, I'll be
accessing the AWS server. But then I don't do that, so it becomes chaotic
for me if I had to give my hard coded IP address
every single time. So that's something it's
not a great thing for me. Now again, there are other things it also
checks like EBS, RDS, S three buckets, MFA, which is multifactor
authentication on root user. It's there. There are other
things which really requires an upgrade over here for you to verify these action
items over here, there's a lot of checks it does. In terms of fault
tolerance, again, I don't think that's available
for us so you have that. There. Now there are the service limits over here like auto scaling
launch configuration, we're not done any auto scaling. We're not done any
dynamo DV over here. So this is all checking
your service limits, whether you are reaching the quota which is assigned for it. Now, quota is basically a specific amount
of threshold you would be assigning for
each of the services. By default, it is 80 percentage. So are you using more than that, that it is checking. And one thing is out of it is
Lambda code storage usage. So that's kind of not
included in our plan. In terms of operational
excellence, this checks sees to
recommendation to improve the operational
readiness of your AWS resources. So that basically gives you that these are
one of the resource. Again, it's not part of
our particular plan. All right. So this
is pretty much your trusted advisor over here. Now you can see that we have
enabled the trusted advisor. You can also configure
notification over here I think it's using
SNS for notification. So you can add
contacts over here, and then basically the
recipient will get alert of any of the
trusted advisor. That's pretty much what I
want to cover on this video. Thank you, guys, for
your time and patience. Now, you can see that the
recommendation has come into picture when we just randomly
click around each of this, and you can see this
one recommendation about our security
group over here. Thank you again
for your patience and time. I will see
you on the next.
44. Labs AWS CLI Operation with multiple commands: Hey, guys, welcome back to the next video. In this video, we are going to talk about user creation: to access AWS from CLI mode, you need a user to do that. I will tell you about the commands which you need to use to access AWS as that user, but firstly, we need
to create a user. And give him the access key. Now, Access Key is how
you're going to access from your CLI to your AWS service
which is running over here. So for that, you need
to create IAM user, so just open IAM over here. I already have a user
named Dev over here, which I will not be using. Now, there are two options here: you can create a user and then create an access key for that user, and you can provide that to the AWS CLI. But if there is a requirement
that you need to give um the access key
of your root user. So in that situation, when you are doing some kind of management oriented script where you need the root
users access key, you can also do it, but
it is not recommended in highly advised not to be
using root access key, but still for occasions like
when you want to do a lot of command line
scripting and you want to do organization
additions of accounts, and you want to do some kind of user related activities
or something beyond what root
user only can do. In those situations,
you may have to create access key
for your root account. In that situation,
you can go to your username menu and click on Security credentials. So once your security
credential loads, you're going to have
an option there. Now using this option, you can actually create the access key by
clicking over here. Now, this option would come right inside your
root user account. That's why you are
going over here and clicking on
security credentials. Now, do remember,
I've logged into my root account and that's when I have to use this option. If you log in to
a normal account, then you will not be able to generate the access
key for root user. For that, firstly, you need to log in to the root user account, and then you need to go to your account name and then go to security
credentials from there. So once you click on this, the process is the same, which I'm going to do
for a normal user. Now here I'm going to
create a user over here called AWS Iphone CLI. So this is for exclusively
accessing the CLI interface. Here, you don't
have to give Admin, um management console access, so you can just leave
it without enabling it. You click on next, and you
can actually add it part of an existing group or you can add policies
directly to it. Okay. Like I need administration
access because I want to do administrative
access over that. So I would select that and
go to the next screen. Now, just click on Creates. This should end up
creating a user. It has not created any sort of access to this user so far. So now click on that user name and then we will create
access to the user. Now, access to the user can
be through Admin Console. By enabling Admin Console, you can enable the password
for Admin Console. That is something we will not be using for this user because this user is generated
for command line access. So for that, you have to create an access key. You can see that an access key is for programmatic calls to AWS, from the AWS CLI, the SDKs, PowerShell, or direct API calls. If you're using any of these methods to communicate with AWS, you have to create an access key. When you're creating
an access key, there'll be an option over here asking what the usage is for. I'm going to use the CLI, so I've selected that particular use case in this scenario; accept the recommendation, then go to the next step and click on Create access key, and then you will have the access key.
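Just as a reference, the same user-plus-key setup can be scripted from an already configured CLI. This is only a sketch; the user name is an example and the policy shown is the broad AdministratorAccess policy used in this demo:

aws iam create-user --user-name aws-cli

aws iam attach-user-policy \
  --user-name aws-cli \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Prints the AccessKeyId and SecretAccessKey exactly once -- store them safely.
aws iam create-access-key --user-name aws-cli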
Now, you have multiple ways of using it. The first is aws configure; configure is the option that points the AWS CLI at
a specific access key. Now when you hit Enter, you'll be asking for
a Access Key ID. So now I copy paste
Access Key ID over here, and then access
password over here. And then region is
the US East one. So if you're not sure which
region you're working, go to your home page. You will see North
Virginia over here, and you will see US
hyphen East hyphen one. So the same thing is what
I'm giving over here, US hyphen, East hyphen one. And the default output
format is json, so you can type json or just hit Enter. If the CLI has been configured before, these prompts will show the existing values (that's why they're appearing for me; on a fresh setup they show None). So here you give us-east-1, here you give json, and then you're done; the CLI is now pointed at this account.
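For reference, the whole exchange looks roughly like this; the key values below are the standard placeholder examples from the AWS documentation, not real credentials:

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json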
You don't have to give any kind of account ID or anything like that, because your access key ID is unique enough; it carries the authentication for this particular account itself, so you don't really have to worry about it. So now, you have actually
logged in to this user, which you have created just now. Using IM, you have if
you click on done, this password will be gone. Okay? So leave it as it is. Just go over to this username
click on that username. Then you will see what kind of permission you have
for this user. That is administrator is the
policy we have selected. So it's loading the policy. Give it a access it
again. Okay, let it load. Okay, you can see the
administrative access. So now that you have
administrative access, we will do some action
in S three bucket. So first, let's open
S three bucket. That's the easiest thing
we can do at this level. Okay? So, we will upload a file, we create a file and upload
it to one of the buckets. So you have all three buckets
over here, test bucket. Now, if you don't have a
bucket, just create a bucket. Just create a bucket called
test bucket with your ID. And once you have
the test bucket, you will see there's
no file over here. So you can use commands
like simple commands. For example, to list the buckets on your S three, since you have three buckets over here, you can do aws s3 ls, where s3 is the service name and ls is the action. This will simply list the buckets which are part
of your Amazon S three. Now you have three
buckets over here. Now, out of these three buckets, we'll take the test bucket for this example. We will create a file over here called demo, so: cat > demo. This will write content into the file. This is a demo file. After hitting Enter, just give Ctrl+D. Or you can also do vi demo, write this content, and save the file; that's up to you. Now you can do a simple aws s3 cp of this particular demo file to s3:// followed by the test bucket name. This basically will copy the file from your system location to your S three bucket. If you then do a refresh over here, you'll see the demo file. Okay, so the demo file has been transferred over to this particular test bucket. If you want to delete it, you can use the demo file name, then do rm and select that particular file. Now, it should be removed. Okay. These are very simple commands to ensure that your functionality is working from your CLI. The sequence, roughly as run here, is sketched below.
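Putting that together, a minimal sketch of the S three commands used in this demo, assuming a bucket simply called test-bucket (your bucket name will differ):

aws s3 ls                          # list all buckets in your account
cat > demo                         # type the file content, then press Ctrl+D
aws s3 cp demo s3://test-bucket/   # copy the local file into the bucket
aws s3 rm s3://test-bucket/demo    # remove the file from the bucket again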
You can also do all of this from the Windows command prompt, so I'm opening a command prompt on Windows. So this is my Windows command prompt over here. So now in this Windows
command prompt also, I can run aws configure. So now I run configure and it will ask the same prompts, so you go back to the console, copy the access key and paste it, then the secret key and paste it, and then here type us-east-1. Let's confirm: us-east-1, that's the one. Then json. Now I've logged in with that key. Now I will run aws s3 ls, so I get that list. So now you can work with AWS from here as well. Now, why I logged
into this system is because I'm going to turn
off this virtual machine. You see that easy to virtual machine is
running over here. So let's go where's easy to. So yeah, so you see
this server is running, and that's where
we are right now. We are actually on that server. So let's turn this server off. Okay? So that's what
we are going to do. So firstly, we're
going to look at the instance description. So: aws ec2 describe-instances. This is going to give you the output of, you know, the instance details over here. Just keep hitting the space bar and reach the bottom of this detail. Now, once you reach
to the bottom of it, the bottom has very
important information. Your public IP address, let's compare if this is
the public IP address. If you just go over
here, edit the session, you can see 52, 55, 22, 203, and that's the same thing over here, internal IP address. What is important is
this instance ID? This is what is important.
Now, what is this? This is basically
your Instance ID. So this is the one. So you see this instance ID,
so it is the one. So if you want to control
stopping the instance, so you have to use the
Instance ID for it. Let's see what happens
when I run AWS EC two, stop Iphon instance
with the instance ID. I need to give the instance ID, which is Iphone
Iphonstceiphon IDs. And then I paste the ID
over here and hit Enter. Now, Um, sorry. Let's instance S. So now you should see
this turning off. You can see that this
system will power off. And you were able to shut
down the instance from here. So there are so many options you can run as part
of your EC two. Like, you can create instance. You can create a lot
of other things, options, you know, which part
of your EC two instances. So, likewise, you can do
so many other options. And if you want to know
about specific service help, you can always type EC two help. So this will give you
all the options you can type for ECT. Okay. Likewise, you can just give space and it gives you all
the options over here. Likewise, you can control it. You can also start up the instance which you
have shut down, so you just give that over here. Now, this will not use
the same IP address. Once you have started, so
you can go to describe instance and you can get the description of
that public IP address. So you can see that the public
IP address has changed. So copy this public IP address, edit it, and paste it over here. Again, public IP address is basically an IP address
which is given, uh, by, you know, automatically autonomously without your
intervention because public IP address keeps
changing because you don't have a static IP address or we call
it as Elastic IP address. Elastic IPS is a
cost binding stuff. So which means that if you
take Elastic IP address, you have to pay some money for you to have a static IP
address on your system. Every time your system restarts, you will get a new
public IP address but your private IP
address will be the same. Unfortunately, we cannot connect it to a private IP address. Okay? So this is
extra information, which is not so much required for you
guys at this moment. But pretty much you were able to do some activities
on your CLI. There are so many
commands out there. I've just given you not even a percentage of what
I've shown you was not even a percentage
because there's so many things you
can do using CLI. Thank you again for
watching this video. I hope this demo was helpful
for you to, you know, relax a bit between
theory sessions. Thank you again. Take care.
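For quick reference, here is a minimal sketch of the EC two calls used in this demo; the instance ID is a placeholder:

aws ec2 describe-instances                                       # shows instance details, including instance ID and public IP
aws ec2 stop-instances --instance-ids i-0123456789abcdef0       # stop the instance (placeholder ID)
aws ec2 start-instances --instance-ids i-0123456789abcdef0      # start it again; the public IP will change
aws ec2 describe-instances --instance-ids i-0123456789abcdef0   # check the new public IP
aws ec2 help                                                    # list everything else you can do with EC2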
45. Labs AWS Cloud Formation IAC: Hi, guys, welcome back
to the next video. Now do remember that
in your Git page, you will have the configuration
details over here, what are the commands
I've executed so far. I've put it across over here for you guys, for
your reference. Now, the content over here is also given for your
previous video. So do remember to refer this on the GitHub page and get this. You can copy paste
it much easier. Do remember WQ is basically
for saving and quitting. So this I put it normally
whenever, you know, you see a file over
here like a Pifive where I open this to
edit some information. Now let's talk about
cloud formation. Now in this situation, we are going to use
CloudFormation template for connecting and working
with our application. Now we would use
the same username because it has
administrative access. So in the same configuration
file over here, the credential is
already loaded. So I don't really have to work with any
configuration separately. Now, what I have to do is design a Yama file which has cloud
formation information. Cloud underscore
formation is three. Please ignore the
spelling mistakes here, and then I'm going to create
this as a Yamal file, which is going to contain
instructions which should create a S three bucket. Now remember to make
this ST bucket unique. I'm going to give my account
ID so that it becomes unique because I'm pretty sure that's something
which already exists. This name would already
exist for sure. So I'm just going to
go ahead and give the account number so that it basically creates this
particular S three bucket. Now, flow formation is an automation tool
where in which it is a default
service from AWS. Using cloud formation, I can create infrastructure as simple as creating a S three
bucket or creating EC two instance or you can
do anything with this, there's no limit to it. Now I'm going to run a command over here which
enables me to create a stack. So it's called
CloudFormation stack. Each stacks are created
and within the stack, you actually have
certain things running. For example, this particular file content will be
actually running. Before that, you can see that the template body is picked up from S three
bucket over here. So to have this file, you should correct the
final name over here to the file name which we have
created in this situation. So we are going to
create a bucket, ST bucket, and this final
name needs to be changed. What are the final name we
have created right now. All right, the region
is USEast one, that's a preferred
region for us, and the stack name is MS three. Now, let me quickly give you the console view for
cloud formation. So when you type cloud
formation over there, you're going to get it
to a different service. And currently, I have executed, I think one CloudFormation
template early point of time. So you may see some references towards it, please ignore it. Okay, now you can see
that currently there's X ray sample stack which was used to create
certain application. And that creation is completed. Now, when I execute
this command, you will see this MS
three stack would appear over here and
it will start working. Now, as this objective
of this particular, you know, file is so simple, it either can come as successful
or failure in terms of, like, there is some issue with the file name or some
kind of other issues, then it may come as a failure. And mostly, I think it
would end up in success. Now to deploy this,
you just have to press Enter at the end. This will make sure
that it deploys the Cloud stack by using this CLI command which is going to deploy
the cloud formation. Now using this
method also you can influence changes on your AWS. We're just looking at different
ways of influencing AWS, not just by your
management console, but also using command
line interface, also using cloud formation. If I want to create the S
three option and command line, there is an option to create it. AWS S three and then you give the create option
and what you want. That is a command line
method of creating the same thing we did it on the previous video
which is SDK using SDK, we created it as B Now lastly, we're going to use
cloud formation by writing Yama file definition which cloud formation
understands and executing through the
CloudFormation service. Now you can actually
execute this. Now you can see that it has
created an ID over here. Can use this command to
check, information on it. I can see that over here, it is in progress, it seems. So in progress. Now you can actually
see this also here. Now, you should see two of this. You can see create complete. So it has successfully
created, it seems. Now you can see create complete. You can see some information about this in output over here. There's no output. You can also check the events over here. Okay, so in the event is in progress progress
progress complete. Now if I go to S
three over here, I should see the new bucket. Okay, you can also get this information
from here as well. S three Ls so you see that
my example is three bucket, you can see that here as well. My example is three bucket. Now that bucket we have
mentioned over here on this file which
is cloud formation. You can see my example
is three bucket, that has been created
successfully. Now you have understood the
fourth method of accessing. Now in the next video, I will show you the
fifth method of it. Thank you again
for watching this. I'll see you on the next.
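For reference, here is a minimal sketch of the kind of template and CLI calls used in this demo; the bucket and stack names are placeholders, and the template body here is read from the local YAML file:

cat > cloud_formation_s3.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-s3-bucket-123456789012   # placeholder; append your own account ID to keep it unique
EOF

aws cloudformation create-stack --stack-name ms3 --template-body file://cloud_formation_s3.yaml --region us-east-1
aws cloudformation describe-stacks --stack-name ms3 --region us-east-1   # shows CREATE_IN_PROGRESS, then CREATE_COMPLETE
aws s3 ls                                                                # the new bucket should now appear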
46. Labs AWS Cloud Shell: Hi, guys. Welcome
back to the next video. In this video, we are going
to look at Cloud Shell. This is the fifth method of
accessing your AWS service. Now for Cloud Shell, you need to make sure
that you are inside the AWS management console. But rather than having
a shell open from either installing
the command utility on your Windows server, making your command prompt look, capable of serving
your AW services, or you create a Linux server and installing the
same command utility. Yeah you have another option of dealing with this
using Cloud Shell. Now, Cloud Shell is this
icon over here on the top, and this icon will actually create environment
for you to work with. The feature, I
would say is it has the pre installed tool
on this Cloud Shell. This Cloud Shell itself is
a Linux system, and it comes with pre-installed tools like the AWS CLI, Python, Git, Node.js, and more. So you can install other tools as you need. Now do remember that it runs with persistent storage: one GB of persistent storage is given per region. This storage remains across sessions, meaning that the files and scripts you save will still be there when you return. This is what you get when you actually have Cloud Shell. You will have a
faster access over here rather than creating a server and then
running it like we did, you can see that
this already has it. Now if I do aws s3 ls, right now you can
see it is displaying all my S three buckets
which I have over here. Now, how is it even displaying? I'm not logged in
using AWS configure. Well, you don't have
to because you already logged into this
management console. So through Management Console, you're invoking a Cloud Shell, which means that you are
going to have this file. So let's just create a
file over here. I'm sorry. So creating a file over
here called file A, right. So now that I close this
one and open it back up. So if you just, you know, if you basically remove
this Cloud Shell, you can actually restore
it back from here. So you will still see the file even after you log
off and login. So this is going to be
one GB given for you. Now, Cloud Shell is
already integrated with your AWS credential
and environment, so you don't need to run any kind of a special
configuration for it. So pretty much this is what I
wanted to tell you in terms of another way of accessing your AWS resources is
using Cloud Shell. But unfortunately, Cloud
Shell for you to enable it, you need to first access
the management console. And through Management Console, you'll be invoking the Cloud
Shell from within that. Thank you again for watching this video. I'll see
you on the next one.
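A tiny sketch of what was done inside CloudShell; no aws configure is needed because CloudShell inherits your console credentials:

aws s3 ls              # works immediately with your console identity
echo "hello" > fileA   # files in your home directory persist between CloudShell sessions (up to 1 GB per region)
ls fileA               # still present after closing and reopening CloudShell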
47. Labs AWS Direct Connect: Hey guys, welcome back to this video. In this video, we
are going to talk about AWS Direct Connect. Now in this direct connect, we are going to understand
how it's different from VPN and how it's different
from the public Internet. Let's get on with this video. Now we have these three items which we are comparing;
from the last video, we understood what is VPN. Now we can understand what is direct connect and then
how it's different from VPN and how it's different
from the public Internet. Let's first understand how it's different from the public Internet, because this is essentially an intranet setup. When you're accessing it from your company and you want to restrict your VPC from being accessed over the Internet, then the way of accessing it is basically through an intranet-style link. On that side you have one more item over here called Direct Connect. Now, Direct Connect provides a dedicated private network connection between your on-premises data center, or the office where you work from, and AWS. Unlike a VPN, it does not traverse the public Internet. A VPN requires, as I said on the previous video, the public Internet for you to connect to the VPN services
or endpoints, right? But in terms of a
direct connect, does not work with public Internet offering
a high performance, low latency and more
reliable connection. That's the main sell
point of direct connect. As the word direct connect
means it's going to directly connect to your
Onpromises data center. So you can see dedicated
network coming from AWS to your data center and
then goes back to AWS again. This means that your data
center is going to be connected directly
using private network, and it is low latency. Something which
you can understand from this picture itself. Now, what are the key features? Key features includes a
private physical connection to AWS bypassing
public Internet. So direct Connects
offers low latency and more consistent
performance compared to Internet based connection. It is also you can choose from various bandwidth
option, for example, 50 MBBS to 100 GBBs of a data transfer you
can have between uh, your data center and your AWS. Now, what is the use case here? W situations you can use this? You can use it for companies
that frequently transfer large amount of data
between on promises to AWS. So this is majorly
suggested or used for a hybrid Cloud solution
or hybrid Cloud model. And the performance here. The application requires a low latency high performance
network connection. So this is the best
solution for that. Enterprises which uses both on premises and Cloud resources, which needs a stable
connection between them. Either they are connecting
database on premises which has the customer data and their application
is host on on AWS, which really requires
the data to be transmitted in encrypted format and served to the customer. So in that situation
or in that process, you need a highly um, you know, stable
connection between them. This is one of the methods you can create the
stable connection, and this connection
doesn't require Internet, which means that your
data is not going to get compromised at
any point of time. So that is something
you should remember. All right, pretty much I've
covered this in terms of the theoretical stuff and you know the difference between a VPN and Direct
Connect as well. So let's go and do the hands on. So in terms of Direct
Connect, it is coming as, you know, separate service itself on AWS called
Direct Connect. Now, when you go
to Direct Connect, you can actually see
that documentation. So it is to make it easy
for you to establish dedicated network
connection from your own promises to AWS. So which means that your office, your data center or, you know, co location enroment can have a direct
connection with AWS, and it can be a
private connection. It is sorry, it can be. It is a private
connection between them. So you can see the
features over here, like reduced bandwidth cost, consistent network performance, private connection to AWS, and it is elastic as
well because it can spin up from 50 MBS to 100 MBS. Okay, that's the max connection, but here you can see
that the connection provider one GBBs to ten GBBs. Likewise, you can scale this. As you can see over here, to create a connection, just click on Create a connection. Here you say what kind of
resilience level you need. Maximum resilience, you
have high resilience, you have development
and testing. This is the lowest
non critical workload with lower resilience over here. So then give next and then you can select
the speed over here, which is from one GBBs
to 400 GBBs over here, and the location
you want it to be. So here is the location of
your data centers over here. So this is your AW site, the location which you want to connect connection is located. So that's a AW site over here. So you can see that it is in this particular location
and the service provider. So the service
provider helps you to give the connection
to the data center, and there is additional
setting as well and towards some additional
settings over here. Then there are this
connection summary so you can see the
eight provider. There are two connections
coming in from your location and review and
create will create this. But you see this this
cost about 0.6/hour, about 4:39 monthly usage for port usage and
additional data transfer, you can click here. For data transfer, again, just getting charged and it is going to create
two connections. So here, this will provide the resilience against failure. If it's going to be
a larger connection, you can actually see the
maximum resilience over here. We actually have a
larger information. First location, you
have this provider, the second location is a
backup location over here. And then you have
another provider. AT and TF, we have
already selected, so let's select Verizon. Here I have four connections giving over here, $90 per hour, about 65,880 monthly and data transfer charges
are separate, which means that you can see that billing begins
once the connection between AWS router
and the route and your router is established
or 90 days from the order. Whichever comes
first. So likewise, you have the option
of creating this, and this will enable
the direct connection. Direct connection is
much more expensive than VPN itself because it uses the private network
to connect with it. So this is pretty
much I want to show you on the demo side of things so that you will get the gist of what we are actually
doing over here. Thank you again for watching this video. I'll see
you on the next.
48. Labs AWS SDK with Python: Hi guys, welcome back to the next
video in this video, we are going to talk about SDK. One of the topics we have, which is over here as part of your program is
using SDK as well. Now, we are going to quickly
see a demo here towards SDK. So this is API is SDK CLI. So we've completed the CLI, so it's time for
us to explore SDK. Again, this is not going to
be part of your examination, so you can very well skip this. I'm just going to show
you one simple, um, SDK oriented stuff over here, which you can use
it on your AWS. Now do remember that, let
me log in again once again. I've just logged
into the system. So here we're going
to talk about SDK. I'm going to use one
language over here, which is frequently
used, which is Python. I'm going to do the hands
on session using Python. For that, you need
to make sure that you have Python installed
on your system. Do a pseudo um sorry, there's a lag in the system. Pseudo yum install
Python three Iphone Y, see if it picks it up. Python three is
already installed. So let's install
Python P PIP IphonY. Python PIP is required so that you can install
the Boto three. Boto three is your Python
driver for your SDK kit, which is basically for your
AWS SDK configuration. Now you can use the PIP
three command over here, and then you can
install Boto three. This is basically your
Python configuration for, um, you know, your AWS. This will help you connect to
AWS and run SDK from Boto. So you can see that over here, it is installing that 40 clo Boto three root
one. Uh, version. I'm sorry, boto
three, 11235 version, Boto core, and also it is
installing a three transfer. Now this is going to ensure that you have the STK over here. Now when you run any Python, um, you know, commands
or Python scripts, which is going to
have AWS call on it, this is going to actually
get the items from, you know, AWS from the back end. Now now previously, we
have configured AWS, so you can just
check if it is still there, just dial three. It's still connected
with IS service. Now, you have
alternative way as well, so you can just open this file, which is in your home folder. Call AWS folder. And in this, you have a file
called credentials. So you can actually save your credentials over here. Now, you can look at that. The credential is
actually saved over here. That's how you log in. So even before you do this, you can put this
credentials right there, and then you can access it. So you don't have to run AWS config with the
credential details there. Now that you put the credentials there and it is
there by default, so you don't really
have to worry about it. Now I will create
a file over here, which is a Python file, and this Python file will
list the S three buckets. So now this Python file, we will copy paste the code, which I'll put it
out there for you. This is a Python configuration where you are importing b23, which is your SD case of a development kit which
uses AWS on the background. Here, create an S three client which is basically
your S three bucket. And here, this response is S
three list bucket response. Then here we are printing value saying existing S three bucket, and it is printing one by one buckets which is part
of your response over here. Let's just say one
quick this file. And then let's run Python. And then you can actually
run, uh, the Python three, which is one which
is in store with the PUI and it has actually shown you
the output over here. So existing buckets,
you have three buckets. So this is basically, um, you know, running this through an SDKs software
development kit. So this is another method
of accessing your existing, um, you know,
infrastructure in your AWS. So we have spoken
about, you know, management console, sorry,
management console, we have spoken about CLI. We have spoken about the uh, using Python through
SDK as well. Now, do remember that you
don't have to run AWS Config. So if you're connecting
with a new session, you really don't have to
run AWS Config over here in this scenario
because you will be actually loading the
USM and password, I'm sorry, key ID and key access code directly on the file which I
mentioned earlier. So this file will
actually pick it up, so you don't really have to run the AWS config
in this scenario. And this Python file will execute because it
will go to Boto Boto, go to this particular
credential files and then connect to whatever
the account it is mapped to. Then it will execute the logic
within the Python script. This is how it basically works. Thank you again for
watching this video. I'll see you on the next one.
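To recap this demo, here is a minimal sketch of the setup and the script, assuming Amazon Linux with yum and a file name of list_buckets.py; the script mirrors what was described in the video, but treat it as an illustrative sketch rather than the exact file shown:

sudo yum install -y python3 python3-pip   # make sure Python 3 and pip are available
pip3 install boto3                        # boto3 is the AWS SDK for Python

# Alternatively to aws configure, credentials can live in ~/.aws/credentials:
# [default]
# aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
# aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxx

cat > list_buckets.py <<'EOF'
import boto3

# Create an S3 client; credentials come from ~/.aws/credentials or aws configure
s3 = boto3.client("s3")

# Call the S3 ListBuckets API and print each bucket name
response = s3.list_buckets()
print("Existing S3 buckets:")
for bucket in response["Buckets"]:
    print(f'  {bucket["Name"]}')
EOF

python3 list_buckets.py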
49. Labs CLI Introduction: Hey, guys. Welcome back to the next session. We are going to
talk about AWS command line. Now on our training, we are going to discuss about
one of the features here, which is basically for
your programmatic access. So we need to create a
user for it to access, so we will do that in some time. But as of now, what I'm
going to do is I'm going to have the command line
interface started for us. And for that, we need to have it downloaded for different
operating system. To access Command
Line Interface, just Google AWS download CLI, and you will be able to access the Command Line Interface page, which is aws.amazon.com slash CLI. Now, this should get you to this page, in which you have documentation for running this tool, as well as other tools which are part of it. So here you have steps for the version which you're using, AWS CLI v2, which offers new features like an improved installer, new configuration options, and IAM Identity Center support, which is the successor of AWS SSO. Various features are there. There is also aws-shell, which is a developer preview; that's another command line shell program which provides you, you know, convenience and productivity features. Then some of the usage is shown over here: how you can run some basic commands using the command line interface. So these are the things over here. You can also see auto scaling help and create auto scaling group. Likewise, you have so many options over
here which you can use. There is a complete references over here in terms
of working with, um, you know, more references towards the command
line options, which is also given
as part of this. Well, we will explore some
of the basic aspects, as you know, that
this is, I mean, we need to know about the command line interface, but not operate it completely, as that's not part of
your certification. But just for your understanding, we will be actually working
with the installation of it. We'll be working
with um, you know, some of the default
commands we are going to look at also in terms of CLI. Okay? So that is something I'm going to do
as part of this video. Thank you again
for watching this. In the next video, we will
see the Windows installation, macOS installation,
Linux installation, and also tell you about Amazon Linux installation, as well. Thank you again. I'll see
you on the next video.
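As a small taste of what those reference pages cover, the built-in help can be called like this for any service or operation:

aws help                                          # top-level help and the list of services
aws autoscaling help                              # help for a specific service
aws autoscaling create-auto-scaling-group help    # help for a specific operation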
50. Labs Linux CLI Installation AWS Linux Upgrade: Hey, guys, welcome back
to the next video. In this video, we are
going to, you know, shut down this server,
the previous server, which we used earlier and
then terminate it actually. It's better than shut down. And then create the
Amazon Linux one. So Amazon Linux one, we have already done it, so you can follow that video again. It's just a change in the
operating system selection now. So you can see that system is powering off, so that's okay. So open this, edit
this existing one, and when you edit it, just
change the IP address. Everything else is the same
except for the username, which is ec2-user. That's the username. So just click on Okay,
then double click it. So if you get it connected. It's taking a bit of
time. There you go. Now we are connected
to EC two instance. Perfect. Now once you're connected to the
Easy two instance, all we have to do is um use the curl command
to download it. The same stuff. Um remember, in this situation, you will have AWS CLI already installed. Let's look at that. Let's just
first download this with curl. Okay, this file is downloaded. Okay. Now let's just run aws --version to see if it is already installed. Now, I can see that an old version is installed, which is 2.15. Now, we need to upgrade
it to the latest version. Okay? So this video, you can consider it as running that upgrade which we
have planned to do. Unzip this file. Unzip is also available in Amazon Linux, so this is a good thing; you don't have to install that package separately. Now that you've unzipped it, go inside the aws folder and run sudo ./install. It's better if I copy and paste the command from here; I'm not typing any kind of different command, I'm just typing the same command, dot slash install, with the
default location. If you're not sure
about the location, you can always type which
AWS. That will tell you. Okay, so it is in a
different location. You can see that user bin AWS. So you just have to type the
home location of the AWS. Um user I'm just looking
for the AWS home folder. I don't see it under here, so let's see it's
a local has it. No, it doesn't have
the home folder there. All right, so I did search
for some documentation, but there's nothing like that. So we proceed with
the same command. We are just going to copy this
command for upgrading it. So there is the home folder is in a different
location, right? So now, if you type the Bin folder is going
to be in this location, use a Bin folder. The installation
location, what I found so far was not in this location. So the one which I have
found is on the OPT AWS. So location it's so I'll just try with this directory,
see if it accepts. So you can see that.
Now you can run this. Let's see if it works right now with the latest
version. There you go. So it is updated to 2.18 right now. I've given /opt/aws because I found this folder under /opt/aws. It's a bit unusual to find it in this directory, but as this operating system image is a little old, that's the reason why it is there. So now you have the v2 version as part of /opt/aws. So this is how you
need to do it. I will copy all those
custom command which I executed on the uh, on your GitHub page called AWS, I will put the Github
page link over there. So this directly
gets committed to Github the Notepad
plus plus one. So I will copy and paste it over there, and I'll call it something like upgrading your AWS CLI. So what we have covered so far is basically the installation of the AWS CLI. That is something we have done, and we now know the command, so I'll just copy the command over here. And we have also seen how to successfully upgrade
your AWS CLI, and mostly this
command should work, but in case if your AWS is
located in a different folder, then you may have to
change it to that folder. So I've shown you
how to do that as well in terms of a different directory,
I've shown you as well. So that's pretty much
what we want to cover on the CLI part of things.
Thank you again. In the next video, we will see some commands which we can
execute as part of CLI, and I will see you
on the next video.
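To recap, here is a minimal sketch of the upgrade sequence walked through here; the exact --bin-dir and --install-dir values are assumptions based on where this particular image keeps the CLI (/usr/bin and /opt/aws), so adjust them to whatever which aws reports on your machine:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
aws --version                 # shows the old, pre-installed version
which aws                     # shows where the current binary lives, e.g. /usr/bin/aws
sudo ./aws/install --bin-dir /usr/bin --install-dir /opt/aws --update
aws --version                 # should now report the new v2 release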
51. Labs Linux CLI Installation Ec2 Instance Creation: Hey, guys welcome back
to the next video. In this video, we
are going to see how to do Linux installation. But I'm going to split this into multiple parts so
that, you know, it becomes easy for you to do it because Linux installation
is much more longer, and as we are planning to do
it on the EC two machine. Now, I've not given you information on how to create an EC two machine, so I'm just going to do that as well as
part of this video. Well, we have created
one EC two instance, but we have never launched
it or worked with it, so I'm just going to recreate
that completely again. So for that, we have to
firstly understand the CLI, there are certain prerequisites for it for you to install. There's a whole
installation page actually, if you're looking for,
what is this again? Um, here, you will have the complete installation steps in terms of working with
Windows installation, MacOS installation, as you can see over here and
the Linux installation. From the previous page, we have seen the easy installers: the EXE file and the macOS package, which works just like the EXE file. If you're looking at the macOS package, the Windows package is handled in exactly the same way as the macOS package, so there's not much difference there. Now, for the Linux installation, the instruction
is quite lengthy, as you can see, just keeps going down because there's a
lot of components to it. Now, let's first understand
the prerequisites here. This is saying that you
need to have Zip as part of your Linux package to
have this install. You need to have, GLPsy, GRO as well as s as part of your Linux package if you want to install
it with your CLI. And the CLI is supported
for sento Fedora, Ubantu and Amazon Linux, likewise, and also
Linux ARM based. So because AWS does not support third party
repository other than SNAP, so SNAP is a tool which is installed in
LinxOperting system, which is repository
manager just like YAM. AWS only supports
SNAP over here. So you will find instructions
towards that as well. Now do remember that if
you are working with Amazon Linux and you have the AWS CLI, which
is pre installed. So if you can see over here, Amazon Linux image comes
with AWS CLI pre installed. Now if you're looking
to upgrade your AW CLI, you first have to remove it. So just have the
command to remove it. They have given you the
command for removing it. And once you have
removed it successfully, then you will be
able to, you know, install using the three
different methods. The first method is basically
downloading the file from the Amazon AWCali
amazon aws.com. So this is the oficial website where you will be downloading your package from the zip file for you to install
this on Linux. So again, ZIP is required because you will be downloading a Zip file
format for Linux. So unzip is a separate
package which you need to install if it's
not already installed. And you have AW CLI. We are actually
renaming this file name to AW CLI V two dot zip file, and we are zipping it
using the NZP command. And then we are
installing it using sudo the unzip folder. You will have a AWS folder
and then we're going to run the install
command within that, and that's going to install your AWS CLI on that
operating system. Advantage of this is
basically you'll be downloading the latest version because it doesn't
have any version, so it would have linked
to the latest version. So unlike this, it will not have the default version when the
putting system got ready. So you have Amazon
the next 2023, but there's no 2024, but the CLI would have the
latest version over here, which means that you won't have that latest
version over here. For you guys, who is installing
this as part of that, you first need to remove it and then follow
the instruction. I'll tell you how to
do it step by step. So we will be creating
two separate videos, one for the defol
Linux, bond flavor. And we will also
create one for your, you know, Amazon Linux
as well. All right. Sounds good. So let's go
ahead and do this hands on. What we have to do firstly before we start our hands on is that we need to get to the EC two instances over here
and create a new instance. So if you have a instance
created already, just remove it and
take Clunch instance. Now, I'm going to use the
instance with the name of AWS CLI as we are
trying CLI over here. So one instance, you can
create it with Amazon Linux; for the other one, we will create an Ubuntu instance.
free tire enabled, and you should be able
to see that it is choosing 64 bit X
86 architecture. And then T two Micro,
that's enough for us. Then in terms of Kepare, I'm going to use the
existing keypad over here. If you don't have
existing keypad, it's good time to
create it actually. I would recommend you to highly create a PPK
file out of this. Click on Create and then
click given name over here, which is something which you can repeat and save that file. Select PPK over here
because we'll be using Put as in through mobile system. So make sure you give PPK because when you edit the
session over here, right? I've also given PPK here. You can see that PPK. So this is basically using Putty for us to connect
it from the mobiles term. So you use PPK and then
click on Create a pair. So once you create a pair, it will download to
your download bar so that you can actually provide it when you are
connecting to the instance, later point of time using mobile Mobile exterm,
you can download it. It's much easier and you have
the unpaid version as well. So this is a trial version
or a free version or trial, in which you can use it
for non official purposes, and you can see the
network settings: allow SSH from anywhere,
this is enabled. Don't have to enable
anything else. And then you have the EGB configured and then click
on Launch Instance. This should take about
a couple of minutes and this should sort out the, you know, Amazon machine. I'm sorry, you open Do
machine for your AW CLI, and then you can launch another instance
over here or you can wait for that to be over and then recreate
this or however you like, I will do it in one stretch. Aw CLI, Amazon Dinux so do remember that you can have
only one instance as free, so you do it one by one. So I don't mind if
it charge for me, so I'm just doing it anyway. I'm choosing Amazon
Linux over here as the name is also implies
that it is Amazon Linux. I'm selecting the same instance. Giving the information about my keypare the one which
I'm using by default. And then here I'm giving information about SSA not
changing any information, going to be the default
information on launch instance. This will launch an AMI machine. Amazon machine image, so it will use Amazon operating system
as well. Part of it. You have two
instances running one is a Arab CI for you guys, it will be only one instance. You can do this one later. The only change between these two is the
operating system. And when you change
the operating system, do remember the
username also changes. For Ubundu it's going to be
U Bundu as the username. For Amazon machine image, it's going to be
easy to Iphen user. That's going to be the username. You can also check
that when you're creating on that
creation process. I know it, so that's why
I have not checked it. You can also validate it
by right clicking Connect, and you can actually see
the username over here. That's another way
of checking it. So, um, so far we have
completed the item. Did I Did I close it? I guess I have closed it. Sorry about that. Yeah,
so I'm sorry about that. I think I closed it. Well, so now that you've
completed the first stretch, we have completed the
instance creation over here on the CLI. On the next method, we are going to access
these two servers using MobaXterm, and then we are going to install the AWS CLI on them. Thank you again for watching this. I'll see you on the next one.
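The launch above was done entirely in the console. Purely as an illustrative sketch, and not what the video does, the same kind of instance could also be launched from an already-configured CLI roughly like this; the AMI ID and key pair name are placeholders you would substitute with your own:

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-keypair \
    --count 1
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"   # confirm it is running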
52. Labs Linux CLI Installation Offline Method: Hey, guys, welcome back on
the next video in this one. We will see the different method of downloading the package
and installing it, and it is also called
as offline method. So for example, you have servers within an infrastructure which is not accessible
to the Internet. How would you use Snap to install there? Well, you cannot, actually. So you will be actually downloading this file, sending it across, and then installing it through the command line method. This is basically an Internet-free approach, where you have Internet on your jump server: you do this operation on the jump server, and once you complete the operation, you ship this package to the actual server and then get it installed there. Now, I'm just going
to copy this command. Previously, you're
going to install, I think 218 version of AWS CLI. Let's see which version
this one gets installed. This is a curl command. By default, every
operating system will have curl command. Curl command says that output file should be in
a customized file name, you will see that
file downloaded. Now I'm going to use
the unzip command. Okay, unzip is not there. So I can try installing it with Snap itself: snap install unzip. Okay, it's not able to get it. Let me try APT; APT should download it. Sorry, if I said APK before, I meant APT. So let's just try sudo apt install unzip. So it's downloading the unzip package. So now let's just use unzip to extract this particular file. The system is a bit
laggy because of, you know, the lower amount of configuration we
have in the system, so it's a bit laggy. The network is in grade
as well, I guess. I can see that it is unzipping
this particular file. Now, this file, you can
basically unzip it on one system and you can
create this as a tarball. You can just use tar -cvf and then create the tarball like this. And then this should go
with any of your system. You may have a question,
if I'm not able to install or download this package, then how
I'm going to send it. Now you have a tarball. Now you can send it across
to multiple systems and then have it installed there. Look at the file size over here: it's a 223 MB tarball. You can also gzip this tarball, which will reduce it even more, so that you can send it across to systems which don't have Internet on them. So because this jump box will be designed to connect to
those servers, right? So you can see that. It's even less than the Zip file itself. So what is the next step? The next step is you
can either access the aws folder and run the ./install command. Now, for this install command, I think you need a root user, so you need to do sudo and then ./install. This should get all the AWS pieces placed. And since this goes into /usr/local, you can then type aws --version. Okay, so it is saying it is still pointing to the Snap one. Now, you can see that which
AWS will give you the full path where
the AWS is installed. If you want a latest
version of it, also, you can upgrade it. That's something. Now how do you know
which position or which location
it is installed, you can use which AWS command will tell you the
location of it. If we look at this command, it is telling you to update
an existing installation. You have older version of this and you have downloaded
this zip file, all you can do is run this
command with the location. Now, you can see that it is installed under /usr/local/bin. You got that output, so you put in /usr/local/bin, which is the default location, of course, unless you use Snap; Snap installs into a different location, as you can see, going to /snap and putting all those files in there. But if you're using the normal installation, then you can actually use this command to upgrade the version of your AWS CLI. The current
location is the same. We have to check
the installation directory if this is the same. So let's do the LS I and LTR. Yeah, you can see that
installation directory exists, right? So this is pretty much that, so we can execute this command. If you have a later or the latest version after 2.9, you can execute this command. And what this command does: you are already in the aws folder, right, so you just run it from there. It's just lagging a lot. So: sudo ./install. Now you can see that it is skipping the install because it has detected the same version
you're trying to upgrade. This kind of upgrade
also you can do if you have a older version install and if you're planning
to upgrade it. On the next video,
we will see about Amazon Linux and we will update it to the
latest version there. Thank you again for
watching this video. See you in the next.
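To recap, a minimal sketch of this offline flow; the target host and paths are placeholders, and the copy step can be whatever file transfer you have available:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"   # on the jump server, which has Internet
unzip awscliv2.zip
tar -czvf awscliv2.tar.gz aws/                    # package the unzipped installer as a compressed tarball
scp awscliv2.tar.gz user@offline-server:/tmp/     # placeholder transfer to the server without Internet access

# then, on the offline server:
tar -xzvf /tmp/awscliv2.tar.gz
sudo ./aws/install
aws --version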
53. Labs Linux CLI Installation through Snap: Hey, guys, welcome back to the next video. In this video, we are going to see how to utilize this particular machine and connect to it with MobaXterm. Now, this is one thing I've not done so far in these videos, so you can install MobaXterm, which is free. And then you can actually
create folders like this. I'm sorry, folders
like this new folder, and then select the AWS CLI. That's the folder
name, and then inside that you can actually
click on your session. Then you can actually go to your AWS CLI and copy the
public IPR from there, put it over here, take the
username from the connector, right click and connect and
the use name over here. So you can specify user name and then place that
username over here and then give that AWS CLI Open too as a nickname for it so that it will disappear
it will appear here. And then don't forget to
go to Advanced settings, tick Use private key, and give the
folder file name over here. For me, I have a file name
already here as part of this. I'm just going to copy
this location for me. It's not letting me copy. Let me just edit it
and then browse it. Just click on here and
you will be able to browse the location and then you can pinpoint
the location. All right. I have pinpointed the location as you
can see over here, this should be good
enough for me. So click on Okay. And then
just double click it. I will ask you whether you want to accept the authentication. Um, Okay, that's weird. Saying don't support
authentication form. Let's just try this again. Seems like it has not
taken that location, so let's just try again. Okay, so it says you have
reached the maximum number of save section. Okay, got it. I got to tell it some sessions over here because I'm
in the trial version, right, letting me save
more sessions over here. Let's click this right now. So it will be asking you
a question over here. Do you accept, do
you accept to share the trustor and
something like that. So you just have to say yes, and then you should be
able to enter into it. Now I'm in the U Bundu
operating system, as you can see over here, and AWS command is, you know, not working. So there are multiple ways of
installing AWS DS version, and you can see that
app install AW CLI, you have SNAP install AW CLI. So this is one of the
supported items over here as we discussed
on this document. So when you're clicking on
installing multiple methods, right, so you have download and install,
which is this one. You have ARM for ARM Linux,
how you want to install it. You have just a different
architecture package over here, and you have the Snap package. So for Snap, you just
have to use this command. So you just install AWS CLI
and then get that installed. So if you just see that, um, pudo sudo, I guess. So we're just going
to type sudo. Snap install AWS CLI classic. Now, this is going to
download this package and it's going to install
not just this package, but also its dependency. Here you will see AWS
iPhone Iphone version. So now this should come
up with AWS CLI, 218. When I type AWS, now you will see that same help command which we
have seen before. This is one of the easy
method of installing it. So if you want to do the other
method of installing it, also, I'll show you
on the next video. To remove this, I think the remove command should
be removed over here, this should remove the package. Okay, you can see that
that is removed now. When I type dudes I
have Iphone version, you can see BDS not found. So pretty much simple method
of installing using Snap. So on the next video, I'll
show you the other method of installing because
sometimes Snap won't be part of your
operating system. Like, for example, if you go to sentos or any kind of
other operating system, you won't have Snap, then it becomes a
complication for you. So I'll show you the other method which is predominantly used
by major people, so I'll see you on
the next video.
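The Snap-based install and removal from this demo, as a short sketch:

sudo snap install aws-cli --classic   # installs the CLI plus its dependencies
aws --version                         # should report AWS CLI v2
sudo snap remove aws-cli              # removes it again; aws is then "command not found"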
54. Labs Windows CLI Installation: Hey, guys. Welcome back
to the next video. In this video, we are going
to install the Windows CLI. So if you just click
on it, this should get your Windows CLI
downloaded to your system. If you click on the MacOS,
it's going to download that, likewise, we will first complete the Windows
installation. So for you to know the
Windows installation, it's going to be there here. We just go to options, you're going to find download
over here that should have. So as you can see over here, I have already downloaded this multiple times,
as you can see. I'm going to open this file. Now, this comes with
the information of what you're going to set up. So this is a version two of your AWS command line interface. So just click on next. Agree to the terms and
conditions over here. Just click on next
again after agreeing. Here it is selecting
the path it's going to install your CLI version two. Click on next. And then
click on and install. Well, Mac operating system
is exactly the same. I don't have a Mac
operating system. So if you download the Mac
operating system, open it, so it's going to tell you or show you the same exact steps
which we just followed. So now we just need
to install this, and then we just need to click on Finish once this
installation is completed. Now what happens is
like it is going to be in build with
our command line, so it's going to be used as part of our Windows
command line. I'm just waiting for
the product to install. Okay, so now that
installation is completed, click on the finish button. Now go open a
command line window, so I'm going to my run prompt. I'm typing CMD on my run prompt and then
clicking on Okay. So I'm going to another
colon over here. Another command prompt is open. Now you can see that this is just an ordinary
command prompt. You can do the same
thing for Macos as well. Open your command
prompt type AWS. Previously, if you
were type AWS, it would have said
command not found. But this time after
installing the CLI, you can see that this
particular command is working. So AWS help, and you will get all the available
help topics over here. So you just give a
space bar over here, so it will display all the
available options over here. Can see whatever you can
do from the console, you can actually do it
from the AWS as well. I'm sorry, AWS CLI as well. Likewise, you have
all the commands over here which you can use. So these are the items. So basically, these
are the services which is offered by AWS, and these are the different
ways of accessing them. We will look in detail later point of time so
we will understand a separate session
on how to work with your AW services
going forward. I will give you a few
items on SS three, as well as easy to
instance control as well. So I will just teach
you about that. Just to get you
started on AW CLI, so that in the next training
or the next certification, you wouldn't find
it challenging. So this is pretty much what I want to
cover on this video. Thank you again for
your time and patience. I'll see you on the next video.
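The video uses the graphical installer; as an alternative sketch, the same MSI can also be installed directly from a command prompt, after which the same checks apply:

msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
aws --version
aws help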
55. Labs Availability Zones: Now that we understood what's a region, let's understand what availability zones are. Availability zones are basically the data centers which are available. Now, within a region, you will have at least two availability zones, and currently I see a maximum of six availability zones in the US East one region, which is Northern Virginia. So in this region you see six AZs, which means that these are six data centers. Now, these six data centers will have subnets
assigned to them, each classifying one subnet name assigned to each of
the availability zone. What is a subnet? A subnet is a group of IP
address which can be assigned for the
services which you are hosting in each of this
availability zone. Now, that's
configured under VPC, okay, Virtual Private Cloud. So VPC takes in a
lot of subnets, which is classified to
each availability zone, and those will be configured as part of all the services which
you're going to configure. So if you're configuring
a EC two instance, you'll be actually
configuring, um, you will be actually
what you'd be configuring as per
the EC two instance, you will be configuring
what is called a subnet. Likewise, you will I'm sorry, you'll be configuring VPC as part of your EC two instance. In that scenario, you will be mapping your virtual
private Cloud which has a subnet and
each subnet will have each of the
availability zones, and then you'll be
creating service on that particular
availability zone. We have seen that process also. I'll just show you that
process one more time. So as we have seen regions over here on the previous video, each region has
availability zone. Let's take, for example, this particular Northern
Virginia region, which was launched in 2006.
This is the first one. So if you see that, Ireland is the second one,
which lasted 2007. So as you can see over here, you have the six availability
zones over here. We'll talk about local zones and wavelength a little later. But availability zones is
basically a data center. So how many data
centers are there? There are six data centers. If you want to list
the view over here, you will actually see the
regions with availability zone, so you can see the number
of regions over here. There are through
recency Osaka Japan, I'm sorry, Tokyo
Japan, which has four. COL has four over here as an
availability zone over here. Now, you can see
the overall one. So it has 41 availability
zone, 13 region. So this is Asia Pacific
and China region. So here you have the region based classification and
where it is coming soon. Okay, so it gives
you a complete view on everything you have. And then this is
region by region, you know, classification
over here. So that's pretty much. So let's understand what is
availability zone. Availability zone,
as I said, you, there are 108 availability zones totally with 34 regions on it. Okay? Now, 34 regions each with multiple
availability zone. You have a total of
108 availability zones which is available for you. So this is nothing but
your data centers. Now, can I know the location
of this data centers? Now, actually, you cannot because
availability zone something, which AWS keeps
it by themselves. You will only get to
know the main address of a specific location,
specific region address. Apart from that,
you'll not know. But how else can you know a
little more detail about it? If we go to Virtual
Private Cloud, your default VPC will
have the details. So your default VPC,
when you click on it, you will see the
subnets attached to it. Basically, when you configure a subnet, you will see all this, you know, available uh, items over here in terms
of availablet zone. So if you go to subnutl can create a subnut
you select a VPC. Now you have the option of selecting the
availability zone. So now you can see
that these are the availableilt zones available
in this particular VPC. So this VPC belongs
to Northern Virginia. So if I'm saying that I want to check on
the Mumbai region, so I'm middle click
on the Mumbai region. Now, I have a VPC open
here on Mumbai region. Yes, you can actually open multiple tabs with different
region configuration. So for a latency test, I have created, you know, VPCs over here on the Mumbai region to
check the latency test, which we will come
later point of time. So when you are on the
VPC of Mumbai region, you will see that you
have a VPC there, which is configured by default. This is the default VPC. When you are configuring
subnet over here, and you concrete subnet and you select ten sorry,
the default VPC, or if you're creating
a new VPC and you want to assign
subnet to it, Again, if you come down to a
variablety zone over here, you will be seeing the
availability zone, which is available as part of this particular region,
which is Mumbai region. Mumbai region has three
availability zone. The same thing. If I'm
picking up Oregon, sorry. So here, when I choose, create a subnet, let's create a subnet over
here shows a VPC. Now you will see
the avaibt zone, which is four availability
zone on Oregon region. Now, this tells you the number of data centers which
is available to you. Uh, in this location, in this particular region. So this is how you get to know how many availability zones are there and how
to use them, okay? So right now, you don't
know nothing about the networking or your
subnet or anything, so don't worry about it. But I'm just giving you
how to have a look at the different uh,
items available. Normally it is mentioned as A, two B, two C, D, likewise. The same thing over here on
the architecture as well, you will see the
availability zone mentions the same
name as the region except that it adds a
character right next to it, called ABCDE, EF. Here it does six
availability zones, so it just starts from A, and then BCD E F, so it goes till F. Likewise, the availability
zones are named. So if I'm looking at
this particular one, so this availabt zone, um, region name is USS two. So it is called as USS two A, BCD because it has four
availability zones. All right. So this is what you are supposed to understand
from this video, and this is to
make sure that you understand what is availability
zone with its hand zone. So do remember to check
out the next video which gives you a
little more detail about availability zone. Thank you again. I'll
see you on the next one.
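As a quick reference for this lesson, here is a hedged AWS CLI sketch for listing the Availability Zones of a region from the command line; the region names are just examples, not a prescribed setup.

# List the Availability Zones visible to your account in a given region
aws ec2 describe-availability-zones --region us-east-1 \
    --query "AvailabilityZones[].ZoneName" --output table

# Compare with another region, for example Mumbai
aws ec2 describe-availability-zones --region ap-south-1 \
    --query "AvailabilityZones[].ZoneName" --output table

The zone names come back as the region name plus a letter suffix (for example us-east-1a), which matches the naming pattern described above.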
56. Labs Cloud Front setup a website: Hey, guys. Welcome back
to the next video. And this video, we are going to talk about CloudFront hands on. So here we are going
to create a website on Cloudfront and we are going to browse it through the cache. So this is pretty much a simple, hands on experience
towards CloudFront, and it's so simple that we have to create one single file. So now I've logged into the EC2 machine I've created earlier. I'm pretty sure that you still have it; I've not asked you to delete it, so you must have it running over here. This is the machine called AWS CLI, the same one which I created for the CLI lab. Now what I'm going to do is create a file over here called index.html. This is going to be my HTML index page, so I'm going to give an h1 tag over here, close that h1 tag, and in between I'm going to have the content: This is my website from S3 bucket. I'm also going to wrap it in an html tag, just to indicate that it is an HTML page, and close that html tag as well. This is basically the highlighted content over here, which says this is my website from S3 bucket. Now save and quit this file, and you should have
this file looking like this. Once you have this file, you
can upload it to S3. Just make sure that you are still connected to AWS; we connected anyway earlier, I guess. Let's run aws s3 ls. Yes, you are connected. Now, firstly, we will create a bucket, or we will just use this test bucket, my example S3 bucket, as our website source, and then I'll push this content over there with aws s3 cp index.html to this S3 bucket, just by giving s3:// and the bucket name. Sorry, it should be cp, and that gets it copied. Now my S3 bucket should have the file. I'm going to bookmark this console page because we've been using it so many times. So here, my example bucket has the index.html file. You cannot actually read the file here, because S3 is a location where you can copy and store the file, but there's no way of reading the content of the file from the console, because a file could be anything, so S3 doesn't allow
you to do that. Now that you have copied
the file across, you know, from your Linux system
to the S three bucket, we should be good to create
a website out of this. Now as we are going to access this S3 bucket through a website, we need to enable this S3 bucket to accept permissions for it to be browsed from the Internet. By default, an S3 bucket is created with block all public access turned on, so we need to disable that to allow CloudFront to serve the content from this S3 bucket to the public. To do that, you can click on Permissions over here, and within that Permissions tab you will see the block public access bucket setting. It's currently on, which means it is actually blocking all public access to this bucket. So click on Edit over here, disable it, and click on the Save button. It then asks you to confirm; type confirm (I think I got that spelling off the first time) and click on Confirm, and this should allow you to access your bucket from the Internet. Your CloudFront will be able to use this
through the Internet. So now let's go to
CloudFront over here. Let's close all
this and then open CloudFront So now that
CloudFront is open. Now you can click on Cloud
create a Cloud distribution. Now when you click on
CloudFront distribution, you can select the
origin of your website. So it can be from any of this action item
which is over here, it can be from a
load balancer if you're pointing your
application to a EC two. Load balancer will point
to your application on EC two and EC two will point to your application
which is running on it. But in this situation,
we're going to use it from S three bucket. So this is the one
and from here, the origin path over here, we can actually ignore this. Right now, we don't have to give an origin path over here. If you have a name over here, which is, you know, something which you can
use as a website name, and that's the name you
can enter over here. There are certain aspects
you cannot change. Like, for example, this is going to be a
CloudFront name, so pretty much you can
have anything you want. But your domain name would be something like this over here. Origin path is basically if you want to give a
specific path after this, R we don't need it right now. So we just go with public. So public, meaning
anyone can access it. If you have any custom
header information, you can give it over here. So now this is going to
go with Origin Shield. Origin Shield adds
additional caching layer, which helps reduce the load on the origin that helps
protect its availability. And now here we have
the cache behavior, so this is the
cache setting here. Viewer protocol
is HTTP and HTTPS or redirect HTTP to HTTPS.
I would select this. If someone accesses the HTTP website, they will be shown the HTTP site, and the same content will be displayed on the HTTPS website as well, but it is always highly recommended to have only HTTPS browsing. However, if you select HTTPS only, people who try HTTP on your URL will not be able to access it. So it's always better to redirect from HTTP to HTTPS. Allowed methods: GET and HEAD. So this is going to serve your GET requests and your HEAD
information over here. Now, it's not recommended to have all this
information because post information means it can also push information
to your service, and that's not what we want here. Restrict viewer access: this is basically for your cookie settings, and then you have some cookie settings over here as well. For the cache policy, we're going to go with the default CachingOptimized one, which is recommended for S3. We're going to have the default ones
over here as well. Now you have so
many settings here. So here you have
the WAF protection. So we are enabling WAF protection over here for any kind of
attack to our website. And then here in the setting, use all the edge location, best performance, use
North America and Europe. So this will be prioritizing
your information. So based on that the pricing
would be dependent on. If we're going to go
for all edge location irrespective of the location, every edge location is
going to pick it up. So if you click on the
pricing information, CloudFormation, sorry,
CloudFront pricing. Now, you will see that there
is a free tier over here: one TB of data transferred out to the Internet per month is free, 10 million HTTP or HTTPS requests per month, and I think it's 2 million CloudFront Functions invocations per month, and then a free SSL certificate, no limitations, all features are available. If you want to understand
the complete pricing, if you go beyond the
the free tier value over here, you can see CloudFront's one TB free tier; it's already giving you one TB free. Now here, you can actually enter in the pricing calculator how much data transfer you are looking for. It's one GB per month of data transfer, one
GB per month again. Number of request
is going to be, um, for one GB of data about 5,000 requests, and in this calculation you will see how much you're going to be charged. For the data transfer, it's going
is about 0.12 USD. This is pretty much your
United States calculation, but then you can
actually, you know, check it for different locations as well and find the calculation for different
locations over here. You also have a security
savings bundle over here, which can actually
you can use that to save by reading
through the FAQ. So you can save up to
30 percentage from, uh, you know, per month
on CloudFront. But anyway, we are going to use the basic options over here. So here is the HTTP version. If you have a custom SSL certificate, you can choose it from here, but we're going to use the default one. Here's the protocol version; we're going to go with the defaults over here. And standard logging, which records the viewer requests, who's viewing it: you can turn it on, which means you can put the logs in a bucket over here, so we could put them in the test bucket, but standard logging, enabling ACLs and so on, you don't need it right now because it gets complicated. Click on Create distribution,
and this should enable you to create a
CloudFront distribution. Now, the CloudFront distribution
is going to enable you to create a website
and have that hosted. And you can actually see that's currently in the
deploying phase over here. It says the security tab. I, you know, it's creating this share your opinion,
asking for feedback. Let's go ahead over here and see the progression of this
in the deployment state. They're still deploying, so
it may take a bit of time. I'm just looking for any
kind of events over here which you can add custom
error pages as well. You can add behaviors, origins over here, security. If there's any kind of security relator alerts
or something like that, you can actually see that. You can also check the metrics, your distribution
metrics over here. There's no data because I guess, it's still going on over here in terms of distribution
of deploying. Let me pause the video and
pause it once this completes. So guys, just started, you can see that deployment
is completed and you see the date of the deployment
which is completed. When you click on this and you will actually see the
requirements over here as in the details of general details of this
particular distribution. You can see security,
you can see origin. The origin will actually have, the S three bucket details over here and that's
basically your origin. So if you want to browse
this application, you can use this distribution domain name over here, but it still shows last modified as deploying at this moment. So if you just access this website, you can see currently it is coming up with Access Denied. Now, this pretty much sums up
that it is still deploying, so we're going to wait
for some more time. So it's already been
like ten to 15 minutes, so we may have to wait
for a few more minutes, and then I think it should
be getting deployed. Um So now do remember that you can edit this
particular action item. I think I did something
wrong, I guess here. That's why it went back to deploying because
last time when we saw that it was actually giving
the timelines over here, I don't know what I
did wrong over here, but it went back to deploying, give me some time
so let it sort out. Now, do remember that
you can actually mention index.html as your root page as well. To have index.html as your root page, here in the settings you have the default root object: you can go ahead and edit it, put index.html over here, and save the changes. Now, maybe that edit was the reason it went back to deploying? I don't know. If you just access this index.html here, let's see: you are still getting this Access Denied exception. Just give me a second. Now, this error could mean that the S3 object, which is index.html, is not accessible; there's an Access Denied coming from it. Now, how are we going to fix it? Firstly, we will
open S3 and we will try to browse the file directly and see if S3 is actually accessible, so that we can understand if this is a problem with CloudFront or with S3 itself. Let's open this S3 file over here, and you will actually get a URL over here to browse this file in a public way; remember, we have only disabled block public access so far. Let's refresh this one. When I do a normal refresh, you can see that we're getting the same result over here; the previous one was coming from cache. When you do a hard refresh, hold Control and press F5, it will bypass the cache and fetch the new result. You are still getting this Access Denied at the S3 level. So the CloudFront side seems to be fine; it is deploying the application, but the problem is that this exception is coming from the S3 end. So to fix this exception, what you can do is first understand the permissions of your bucket.
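If you prefer checking this from the AWS CLI machine instead of the console, here is a rough sketch; the bucket name my-example-bucket is only a placeholder for whatever bucket you actually used.

# See whether "block public access" is still switched on for the bucket
aws s3api get-public-access-block --bucket my-example-bucket

# Switch the four block-public-access flags off (same as the console edit)
aws s3api put-public-access-block --bucket my-example-bucket \
    --public-access-block-configuration \
    BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

# Inspect the bucket ACL to see who is actually granted access
aws s3api get-bucket-acl --bucket my-example-bucket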
Let's look at this a little more closely. So what we have done is, you know, we have disabled the block all public access setting, but we still have not actually granted public access to the bucket. So as you come
down over here, we have access control list. Now, when you read the access
control list over here, this basically grants
bucket owner access to read and sorry, list and write objects and
read and write bucket ACLs. Bucket ACLs, you know, something which you
can do as a owner, that's obviously
you cannot do that. But for public access, you can see that that's pretty much not
available over here. Now, this is where
it gets complicated. Now we just basically have to edit this and enable
the public access, but the problem is
like edit is disabled. When you see the information, this bucket has the
bucket owner enforcing setting applied for
object ownership. So basically, it applies
in the topper level. So if you want to
modify, you know, specific requirements for
this particular bucket, you have to change
the object ownership over here by clicking on Edit here and change this to
ACLs disabled to ACLs enabled. So you have to acknowledge
that you're enabling ACLs for a specific bucket and then
click on Save Changes. Now when you come down
to the same option, you will see the
edit button over here. Click on Edit button. Now you have Object
and Bucket ACL permissions. So for Everyone (public access), we can allow listing of the objects which are in the bucket, but the bucket ACL cannot be read. Okay? So we can only list
the access to objects. So just click on, I
understand. Save the changes. But it only applies for those which is going to be created
later point of time, but this one is already there, so we may need to do the setting for itself
Guru permission over here, and you have the similar
set of permission which is like whenever
this was created, it is the same
permission at that time, just click on Edit over here
and read on objects selects, so people can read the objects which is
in this public bucket, and I say, understand, and then I save changes. Now when you brows this
URL, see if it works. Now you can see that this is the same exact content
of that particular file. So that content is working. Close this and
pretty much refresh this and you can see that this content is coming
from the S3 bucket now. Now this content is basically hosted on your CloudFront. You can see the CloudFront URL, which I have taken from here, from this CloudFront URL, and then I put it across over here. And this other one is the bucket information, the S3 bucket URL; if you just put HTTP over here, you can see that too. If you want to disable this
information, very simple. There are two options for you. You can change the
bucket information by going to bucket permission, and then you can edit this and you can remove this,
save the changes. Now, this particular information will not come.
It's still coming. So that's pretty much, you know, I thought it would disable the public access towards
object, but still, I think it's a
general information showing you the content
of what it has. So that's fine, I guess. So here you have the information saying that there
is an index.html file. I told you there's another way, right? So go over here into the default root object, edit that root object, and say the root object should point to index.html. So this is also another
way you can disable this. But I guess it will deploy
these changes right now. So let's wait for the
changes to deploy. It may take a couple of minutes. Or to get deployed. Now I think the
deployment is completed, I guess, it says deployment. But if you just
type the website, you can see that you don't
have the access which you had before and you are directly getting the index to,
which is the front file. So here, in this way we
can avoid that information being printed on the home page of your CloudFront website. So now you can see how
many times I refresh it. I get this page without
even typing it. So if you're typing HTTP
also, there's no problem, it gets redirected to HTPS and you get the index
to GM of content. This is because I
have just enabled Index H GML as the
default root object. In this case, every time I access this website or this
particular domin name, I will get the content
from Index H GMO. This is how you can
host it on CloudFront. Thank you again for watching this video. It was
useful for you. Let me go ahead and delete
this particular Cloud front because we don't want
this to be running because it's going to
be taking much of, you know, charges from AWS, so let's go ahead and have
this undeployed disabled. And then once it is disabled, you will have the action to delete this particular,
you know, instance. So I think it's still
operational, I guess. Yeah. So yeah, just
give it some time and there'll be an option for you to delete this. Thank you again. I'll see you in the next video.
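As a quick reference for this lab, here is a hedged AWS CLI sketch of the same flow we clicked through in the console; the bucket name and file are placeholders, and object ownership and ACL settings still have to allow public reads, as discussed above.

# Create the one-line HTML page on the EC2 machine
cat > index.html <<'EOF'
<html><h1>This is my website from S3 bucket</h1></html>
EOF

# Copy it into the S3 bucket that will act as the CloudFront origin
aws s3 cp index.html s3://my-example-bucket/

# Create a CloudFront distribution with that bucket as the origin
# and index.html as the default root object
aws cloudfront create-distribution \
    --origin-domain-name my-example-bucket.s3.amazonaws.com \
    --default-root-object index.html

# Check the deployment status and the domain name to browse
aws cloudfront list-distributions \
    --query "DistributionList.Items[].{Id:Id,Domain:DomainName,Status:Status}"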
57. Labs Describing when to use multiple Regions: Hi, guys, welcome back
to the next video. In this video, we are going
to talk about when to use multiple AWS regions. Now this is one of the
important questions over here about it's asking you when to
use multiple AWS regions. So here we are going to
cover disaster recovery, business continuity, low latency for end user,
and data sovereignty. So these are some of the
items we're going to look as part of this
particular video. Now do remember that
for the low latency, I have created a video at
the end of the session. Well, I complete the
next action item. In the end of the
session, I've created a hands on video in
which you can try out the latency hands on to understand a bit
more about latency. That's the only thing which I can actually show you in terms of all these action items because more or less
disaster recovery, business continuity and
data sovereignty is all about theory based and there's no actual way
of implementing it. But low latency is something which I can
show you as a hands on. But not to disturb
this course overview, I put that video on the end
of this particular section. Let's go ahead into this
video and we will understand. Firstly, we'll start
with disaster recovery. Now, using multiple AWS regions can provide
significant benefits. In the previous VDO, we
have seen about working with multiple availability
zones, all right? Availability zones
actually provide you a resilience towards your application
and it gives you high availability to
your application. But what if you need
more than that? For example, if
you see the model, of the global infrastructure. So right now we are over here. We have deployed our
application over here in terms of the
Northern Virginia, I'm sorry, Northern Virginia is where we have deployed
our application. So perfect. So now, my customers are currently
located in the Americas, like in the northern part of the Americas mostly
in this location. But suddenly I am getting request from multiple
people from India location. Now, in this situation, now I need to put
a server across in this location
because every time, you know, people access it, they have to come through here and then access
it to that region. So that becomes, you know, a lot of latency
over here coming from India to the
Americas, right? So what I do is, like,
I want to implement it on region basis of my uh, you know, application which
I'm hosting over here. So what I do is
plan for a service, a load balancer which
can actually, you know, load balances the
service between the American region and the Indian region and host
my service on Mumbai region. And the first question is, like, can we get this
done? Yes, we can. Now, that's where we are
coming into this topic of multi region for AWS. Here, using multiple
AWS region can provide significant benefits in terms of availability, performance,
and compliance. Now, here are some of
the scenarios that you can work with multiple regions. Now, disaster recovery is one of the important things where
the purpose is to ensure that the infrastructure and
application can recover quickly and reliably from failures such
as single region failure. This is region based failure where it could be
a cyber attack, which has happened
in an AWS region. So that could actually
impact multiple regions, sorry, multiple availability
zones in a region. So this is strategy by replicating infrastructure
across multiple region, which can switch to a
secondary region in case of regional failure or any kind of attack on a specific region. Now, if I want to
do the same thing, I can also if I have
customers only in the US, I can have my application hosted at Northern Virginia, us-east-1, and I can also host it on Ohio, us-east-2, just to have
a redundancy over here and I can just
turn off the instances until if there is any
disaster happening over here, then I can switch
to the instances. Now, do remember that you can configure for
some of the services, there is a uh option
of doing that as well. Now in terms of
business continuity, the purpose is to maintain uninterrupted business
operation during outages or disasters and ensuring the service
remain accessible. Now, the strategy is to
use region to replicate critical infrastructure and data so that if one region goes down, operation can seamlessly
continue from another region. Now the next
important thing is about low latency for the end user. This is what I was telling you at the start of the video about low latency. Now here, the purpose is to reduce the latency between the end user and your application by deploying resources closer to the user's geographical location. Distributing applications across multiple regions, each close to a key user base, to ensure faster response times and an improved user experience
is much recommended. The next one is about data
sovereignty and compliance. Now here, data is so
important and to comply with the local data laws
and regulations that require sensitive
data to be stored in a specific
geographical boundaries. For example, um, a data, which is about the user
information of a specific region, for their name and
ethnicity and, you know, their social
security number or any kind of information which
you have towards that data, needs to be, you know, compliant by the
regulatory requirements in that specific region because
every region will have specific requirements like GDPR. So likewise, uh, you know, you need to comply with
those information. That is also something
which you can do on a region basis. So if you're working
with UAE, for example, then you can have your own UAE data center in that region, and that region will hold all the UAE-related information
in that region only. So this is just an example. So this is pretty much, um, the reasons why or
benefits or when to use multi region in
terms of working with AWS. Thank you again for
watching this video. I'll see you in the next one.
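There is no single multi-region button in AWS, but as a small illustration of the disaster-recovery idea from this video, this hedged sketch copies an AMI of your application server into a standby region so it could be launched there if the primary region fails; the AMI IDs and regions are placeholders, not values from the video.

# Copy an existing AMI from the primary region to a standby region
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --name "myapp-dr-copy" \
    --region ap-south-1

# Later, during a regional outage, launch instances from the copied AMI
aws ec2 run-instances --image-id ami-0fedcba9876543210 \
    --instance-type t2.micro --region ap-south-1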
58. Labs Edge Location Introduction : Hi, guys, welcome back
to the next video. Let's talk about edge location. As you can see in the list view, you see there are these edge
locations around the world. What are edge locations and why these many edge
locations are there. So to understand about
the edge locations, so you can see that on
each of this region, as in North America, you have these many
edge location. In Europe, you have these
many edge location, and in terms of South America, you have these many
edge locations. And in Asia Pacific, you have this many H locations. Now, we know that we
have three Availability Zones per region in India: you can see Hyderabad has three, and Mumbai has three, so a total of six Availability Zones in India. And you can see that in Brazil, right, you have three Availability Zones in South America. And alongside them, you have
other edge locations as well. Now, even though you have
three available zones, but you still have these
many edge locations. Now what is edge location and why there are so many
edge locations available? Now, edge locations
are AWS data center that serves content to the
end user with low latency. Now, they are part of AWS
content delivery network, it's called a CDN. And what they deliver
is that they cache. They cache all the things
which you guys do. And then, when you
refresh the page, doesn't come from the server, but it comes from
the cache because the cache is kept on the
content delivery network, which is these edge locations. So now as you can see that when you are hosting any kind of content on Brazil and your customer location
is at Brazil. So you will be
naturally choosing your region as Basel and
then you will be hosting, you know, your
application on Brazil. Okay. So when you do that, when you host your
application on Brazil and you will have your
EC two instance and everything created on
Brazil because you are serving to the customer
who is in this location. When you're serving
to the customer who's in this location, which is Brazil, now, all your content would come from this particular
data center, correct? But then in the south you have Bogota, another edge location in South America which, as you can see, is part of Colombia. And there are other places: when you talk about Brazil, it's not just people in Brazil, but people from Bolivia, Peru, Colombia and Venezuela who will be trying to access your website or your application, which is hosted in Brazil. Now, when you do that, you kind of need a cache location around this area. And you can see the other edge locations marked here: Buenos Aires in Argentina, some other places in Brazil and Bolivia, then Peru over here, and then Chile. Likewise, as I told you, right, people would be browsing
application from different locations like
Argentina, Bolivia, Peru. I just showed you
this is Columbia. What they have done is they
will hire a small place where you have this
couple of services, which is from AWS called Content Delivery
Network or AWS CDN, as well as Amazon CloudFront. They will use Amazon
CloudFront and they will put these two services in different location over here. So when people browse your application
from this location, so the cache would happen
in this location itself. And it doesn't have to travel, you know, your webset
request doesn't have to travel all this way. You can also control
the cache level, as in like you can control it every 24 hours where you
can clear the cache. So the first, you know, after 24 hours, the cache
gets clear automatically, and the first request
will go fetch it from the actual server
which is hosted in Brazil. And then once it brings
the content down, right, it gets
loaded in the CDN. And CDN will then start giving the content from
your local source, which is your content
delivery network over here. Now, in this method, your content will arrive much faster; it will be received much faster. And yes, it is
cached information, but still it will
be much faster. But do remember that
the cache can be turned on or turned off based on the website
you're dealing with. So for example, if
you want to check your bank application to check the amount of
money in your account. Now, cache content is useless
because it cannot show you, you know, the repeated
information for 24 hours. So it needs to go fetch the latest information
from your bank. So which means that
in that situation, there is no cache, and there is no use of your
content delivery network. But if it's a ecommerce website where for the next 24 hours, the sales are not
going to be changed. So it's going to be easy
if you hold the images of that particular product in the content delivery network and then get that product
image downloaded. Because if you take the whole
page right on a website, the the items which is taking more time to load
is your images, correct? So if your images
can load faster, the data can be updated
from the database. So that's why they normally
maintain a separate URL for images and a separate
URL for your content. So that when you cache, the cache will hold
the images and all those big videos
and stuff like that, so it will be much faster for you to download
the video on play. Because someone in
your area would have already
downloaded that video, that video will be available
on the cache already. So rather than downloading
it from a far location, it will download it from
the nearest location because it is cached already. This is the reason
of an edge location. This is the example
I want to give you and on the next video, we will see the relationship between all these
three so that you can understand the
relationship between because this question talks about the relationship as well. So we will conclude that with the next
video where we talk about the relationship between region availabT
zones and location. Thank you again. I'll see
you on the next video.
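To connect this with the hands-on tools, here is a hedged sketch of the two cache controls mentioned above: telling edge locations and browsers how long an object may be cached, and forcing the edge caches to drop it early. The bucket, file and distribution ID are placeholders, not values from the video.

# Upload an image and allow it to be cached for 24 hours
aws s3 cp product.jpg s3://my-example-bucket/images/product.jpg \
    --cache-control "max-age=86400"

# If the content changes before the 24 hours are up, invalidate it at the edge
aws cloudfront create-invalidation \
    --distribution-id E1EXAMPLE12345 \
    --paths "/images/product.jpg"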
59. Labs Peering Connection Configuration: Hey, guys. Welcome back
to the next video. In this video, I'm
going to tell you how I ping the server on
another region. This is an extra video, if you want to watch to
now how to configure peer communication to ping from one Virtual machine to
another virtual machine which is in a different region. So open two EC2 consoles; I'm just middle-clicking the EC2 console twice, and then VPC twice as well. I'm just splitting them so that on the right side I have an EC2 console and a VPC console, and then another EC2 console and VPC console again. All of them are on the same region right now, as you can see over here in the highlight; it's all in us-east-1. I'll leave this pair of EC2 and VPC on North Virginia. And then I will change
this to Mumbai region. So it doesn't have
to be Mumbai region. It could be any other
region for that matter. So now the sessions which
you're going to do over here will actually continue
for umbiRgion over here, and this VPC will also
reflect Mumbai region. So you need to do two
sets of configuration. Firstly, you need
to have instance running and it needs to have
IP address assigned to it. Now, the IP address series is basically based on the subnet of the VPC you're assigning it to. When you're creating any kind of server, you will actually be creating the server in a specific VPC, and that VPC will be
launching the instance. Now you have the
default VPC over here, which is currently running. This is the default VPC. These are the default subnets. A number of subnets would be
created automatically for each region and these are the
number of avaiablet zones. Based on the number
of available zones, you will have the
number of you can see that availability
zone over here, and you will have the basic
number of subnets here. That's the default
configuration. The default, there will
be no PR connection, so you should not
find this also. I've just edited the one
which I used for creation. So there'll be Internet gateway, there will be routing table. Routing table is
how you are able to access the Internet because you are using Internet
gateway over here. Okay, Internet gateway is
for making sure that the VPC is connected to
your local network as well as your
Internet as well. So this is the IPtrss series. If you see the subnet you will have the cider
information over here. So as you have six of them, so you're giving unique, but if you see this first
two doesn't change 17231, and then for different, availability zone,
you are giving eight, it starts from zero and then 16. 32 and then 48 and
then 64 and 80. This gives quite the number of available IP addresses
for you to assign. Do remember that this VPC and this IP address is
for your local network, which means your system, which means using this one, you can assign 491 systems
with one IP address each. That's what this means. This is the number of instances
you can create available IP address tells
you what is the number of IPaddre available
for each instance. So this is pretty
much the default one. We're not done any changes
to the default one. Though we have the default one, we will be actually
using a customized one. We will talk about that. As we are communicating from your North Virginia
to Mumbai region. Firstly, you need to
create a EC two instance. But the problem if you create a EC two instance right now, you will get exception
later point of time because when you're
doing peer connection, it will not work. If you see that you create a
peer connection over here, create a peer connection, and here, just give
a name, so US-to-IND; this is basically United States to India. And here, the requester VPC is the default VPC; I'm picking it up, and it has the CIDR information of 172.31.0.0/16. Do remember that
the VPC which gets created by default on all regions will use the
same cider information. So if you click on the cider
information over here, you will see that this
is manually added. I'll remove what
I manually added. So just remove this. So you will have
this only one cider. So if you see the cider, 17231 is matching
the cider over here. So if I try to create it with
the same cider information, so I just have to
copy this VPC ID over here and select,
it should not be the same. So here you have the From side, and here you select the To side: select another VPC to peer with. So here, select another region, choose Mumbai, and give the VPC ID. Now, this VPC ID will not work; I'll tell you, I'll
show you. Just click on Create peering connection. Once you click on that, go back to peering connections again, and you will see it's failed. When you click on it to understand the reason, it failed due to either the VPC ID being incorrect, the account ID being incorrect, or an overlapping CIDR. In our case, it's an overlapping CIDR, because the one we are trying to connect from and the one we are connecting to are on
the same network. So you can see the 172.31 here, and here also in the other VPC it is 172.31; on one side it should be different. Okay. So I'm going to use the default one on this particular Northern Virginia region, and here I'm going to create a new VPC: just click on Create VPC, choose VPC only, and the name of it is going to be latency-test-VPC, IPv4 CIDR manual input, and I've given 172.32.0.0/16. Now, if you compare this, that one is on 31 and I'm giving 32 over here. So this is going to be
an unique IP address and tenancy is default. Do not select dedicated. It will be charging you more. And do remember this before creating VPC, VPCs are charged. So if you're going to
create more than one VPC, it's going to be
charged for you, but not so much so if you see the VPC over here and
the VPC over here, there is something missing. That is your subnet.
If you see that, there are subnets over
here attached to routing table and then to your
Internet gateway. Right now, there is no there is a routing table here
which is created by Defa but there is no
subnet over here. There is no Internet
connection also here. So we need to do
some activity here, firstly, to create
subnet. So go to subnet. Now, you already have three
subnets which is already assigned to VPC here, so
you cannot do anything. So one subnt can be
assigned to one VPC. You can see that it is
assigned to 31 series IPR. So we are using a
new VPC itself. So we will be changing
this VPC name. Okay, so create a
subnet, select the VPC. So you see the latency test VPC. This is the name we
have given, right? And it also shows
17232 over here. So this is what we need.
Here, give a sub net name. Now first try to understand how the sub net names
are configured. Middle clicking on
subnet over here, and then I'm selecting this looking at the subnet
availability zone name. So I have this one B, one C, one A. I can understand
there are three, so it should be one B, one A. So first I'll
configure one subnet here and then select
one A over here. And then you can see that
the cider block over here, so it should be the
same IPdresO 32 Sorry. Dot zero dot zero slash 16. It should be the same
for the first one, but you can add another
subnet here itself. It's not needed that you
should add all three items. I don't have to add three. You can add only two also
that's completely okay. But at least two should be
there as part of a VPC. So for that to low balance and, you know, fail over
fall tolerant. You cannot actually give
this IP address again. If you're going to
give the same IP address here and here, what's happening is
that the cider block the Estin you are
creating for this one as well for it to select this particular
availability zone it will use the same
IP address series, which has some 65,000 IP addresses if you
choose this one. But the problem is like this also using the
same IP address. So it will become a problem. It will not accept it. If you click on the create, it'll say address overlap.
You have to change this. You can use this bar and you can change this to the
next available one. That's weird. It
normally changes, so I can go to 16 over here. I don't know why it's not
working for some reason. But you can change it like that. If you see the previous one also, it will say
the same thing. So 310, 311-63-1302. For each cider, you have to give extension
of this IP address so that you will have the specific
amount of IPddress available. So now I'm just giving two of
this and changed it to 16. Now Clic on create subnet
should actually work. Let's see what is
the issue here. I think I have to
give 22, I guess. Let me confirm that
just a second. Yeah, I have to give
a /20 over here, not /16. Sorry, /20 over here and /20 over here as well; that's why it was not changing. So if it goes back to the default, I think now it will change; I can see it changing from /16 to /20. Sorry about that mistake. Now you see that 4,096 IP addresses are available, because the /16 I had given didn't work: that range is so huge, you saw that it was coming to around 65,000 IP addresses we can use. That's why it was not working for me. Now it's fixed; click on Create subnet, and you have two subnets created over here. So on the whole, you have
available t zones in Mumbai region
and two of them. Now that we have completed the subnet creation and
assigned it to VPC. Now if we go to VPC,
select the VPC, you will see resource map. You have the subnet
configured to a route table, but we have not configured
the route table properly. But before you go
to route table, configure the Internet
gateway or else your service, which is hosted under this VPC will not
reach to the Internet. So you can go to
Internet Gateway, create a new Internet gateway. And then latency test IGW I'm giving as
Internet gateway name. Now in this Internet gateway, you need to do a configuration. You have to attach VPC
for Internet gateway. Select the VPC, latency
test VPC and click attach. Now, if you go to VPC and
check the resource mapping, you will see this
network configuration. But what you won't see
is the attaching of this route table and
the Internet gateway. If you see this one over here, you see that it's
attaching over here. So there's a communication. So to get that attached,
go to Route Table. I already has a route table, so you can actually
see it from here. It says route table. You can just click on this open the route table, then you tab. Now you can see that this is
a default route table which is assigned to this
particular latency Test VPC. Now, this is the
default configuration which comes with
every route table. This tells you about
the 17232 series. Now, click on Edit route table. In this route table, you need to specify what
you're routing it. Firstly, Click on. Do not modify the local
one, let it be like that. Let's click on Add
route over here and search for the
Internet facing one. What do you want to
attach it with Internet? Everything all the IP address, which is in this particular VPC, should have Internet
gateway access. Select Internet
gateway over here and give the Internet gateway
which we just created. Now what happens is
this will get tagged. So this route table will know about this Internet gateway
which we have created. Now when you go to
VPC, select the VPC. If you go to resource maps, you will see that there is a connection over
here right now. So anything any any kind of service EC two instance
you're going to create now will connect to
this Internet gateway. If you have selected this VPC as part of the EC two instance. Now you will have two options
in selection towards VPC. I need to select this VPC
if I want to use this, um, you know, pairing
connection because it will use this
series of IP address. So if you create,
I'll show you that, I'll just quickly show you, um, so right now, I'm
on the VPC here. I'm on the EC two instance here. There's no instance running
as one which is terminated. Click on Launch Instance and then just do a
test one instance. Test one instance will
use the default VPC. Default VPC is not
what we configured, so you can see the default VPC. Now when I just select this
default one over here, again, you will need to create a new key pair here for
this particular region. Keypair is completely different. You need to create
another key pair over here and you need
to download that. Remember to choose PPK if you are planning
to access the server, but I'm not going to
access the server, so I'm just going
to do it as a demo. Now when I create this server, its server is running state, but you see the IP
address, it's 17231. I cannot do the pairing when it has the same
IP address range. It needs to have a
different IP address range. For that, I just have to
terminate this instance again. This time, I need
to launch instance. Same thing, latency test. Then this time I'm
going to click on. I've selected web logic
as a key pen name. I'm going to edit this
network setting and change this to the
latency test VPC. Now this time, you
can see that it is selecting either APA or B, you can select whichever
availabil zone you want. And then here at to assign
IP address, enable it. This will basically means that it will give you a
publiciPaddress. Click on Launch Instance, it's going to create
a instance over here. Now, this instance is going
to be running very soon. So just click onda fresh. Now you can see
that it's running. Now, this instance is
running on 17232, one. This is what we
wanted to have that. Now that you're running on this IPaddress so you can
actually copy this IPaddress. Try to ping it from the
server from North Virginia, but it wouldn't ping because we have not done one configuration. That is a peering configuration. We have not done paring. So now let's go ahead and do
the pairing configuration. Now to do the pairing
configuration, you just have to go to any of the service,
both of them is fine. So I would recommend go to North Virginia VPC once you're
in the North Virginia VPC, there's a pairing
connection over here. Just click on Peering
connection and then create new
peering connection. Select the pairing
connection name, whatever you choose, and then
select this particular VPC, select another region
as the destination, select Mumbai region or whatever the region
you have configured. New VPC, copy the new VPC ID, which is hosted on 17232. Because this is using a
different IP address series. Now, click on Create pairing. Now go back to
peering connection. Now you can see that
initialized request so this output is different. Now it has changed to
pending acceptance. Now if you just come
back to your Mumbai VPC, if you go to peering
connections here. If you just refresh this, you can see that
pending connection and there's information here. You can accept or reject the pending connection request
another action menu. Or time today to do so. So go over here,
action, accept request. Now this is going to accept this request and
it's going to create the cider communication from 17231 series to 17232 series. Accept the request. Now,
let's see if it's provisions. And you can see that it
is active right now. So you have the source, request a cider, accepted
CIDR over here, right? Now, one more thing you need
to do before you access it. I don't think it's
going to work now also. The one more thing you need
to do is you need to set at the EC two level of your
um, security group. So security group needs to be updated because currently it only accept 22 port number
towards all sources, right? So you need to tell it to accept the ICMP ping also because
without the ICMP ping, it cannot ping itself
also because it doesn't allow the inbound
of ICMP anywhere. ICMP is your ping request. Now click on Edit inbound rules and then select ICMP from here, IP version four, and
then you just give all. Then click on save
rule and then this should allow ICMP rules
as inbound connection. I think I've replaced as such. Don't do that. You have to add a new rule by clicking
or adding a new rule. Sorry about that. I
think I've replaced. But it doesn't matter
for our training, uh, it's completely okay. You still don't
have access to it. Let me confirm if this
is the IP address, which I'm running
is correct or not because IP address would have changed if you
created another one. It looks like it's the same
one, give me a second. Okay. One more thing
we need to do is that. So now that you've activated your pairing
connection, right, your route table on
the pairing connection is empty. You see that? So there is no way that the
routing happens through VPC. So you need to create
a route table, and when you look at
this route table, you should be part of the
existing route table, which is part of your VPC. So if you go to your VPC, go to Resource mapping, you have the route table here, click on this, open
another window. In this window, just like you
configure Internet gateway, you need to edit
route and add a rule. In this rule, you
cannot actually repeat the same instance over here. It won't allow you to
repeat the same instance. I just try that then give
pairing connection over here. This is the pairing connection. So it is telling you
which connections to prear and you actually get
the suggestion over here. This is the one which is
currently be um, you know, used. So select that pairing
connection and save the changes. Now, it was saying that
it is already existing. So that's what I
said that you cannot actually use the same
IP address series. So you can give
0.0.0.16 as well. So just click on Save and
you can actually see that. So you can give
alternatively uh, this one instead of this, you can also give the
remote address as well. So you can give 17231
dot zero dot zero slash 16 but if you give
0.0.0.16 is also the same. It means everything on that series on that particular side. This is also you can
give and click O Save. Now, if you go back to peering connection and select
this pairing connection, go to Route Table, you will see the route information over here. This pretty much tells you that the route table is created. Now let's go ahead
and see if it pins. Okay, just give me a second. So the last thing we
are supposed to do to get that working is as we have completed the
route table addition in the destination server, which is Mumbai, now you need to do the same route table
addition for, you know, the PCX addition as in, like, per connection addition on
your source server as well. So here we have given the
source server information, which is your source
cider information and the PCX on the
source server. Now you have to give the same thing over
here on this routing table. Now go to VPC, we are
using the default VPC, go to Resource map,
and you will get the shortcut over here
for your route table. If you go in this method, you won't be wrong because sometimes there will be huge routing tables, you
don't know which one it is. If you figure out the VPC
and you see the map and you will actually get to the actual routing table,
the correct one. Then within the routing table, you are having this routes
over here, edit this route. And then add a
rule and then give the IP address of the other end. It's 32320 dot
zero dot slash 16. If you look at this, take
an example over here. Here you have inserted the
peer connection with 31, that is your source server. Now here, you have to give the destination
server over here and give peer connection again and use the peer connection ID. So you can also see that
it is saying US IND. This is the peer connection
ID on the remote end. So that is something
which you're picking up and then saving the changes. Now after saving the changes, you can just go
ahead and try to pin the IP address and you will see a response from the server. So which means that you
are getting a response from this particular
server instance. So this is how you have
to use cross region pairing connection
so that you can reach out to a server
on a remote host. Now you can do SSSarH
also to the remote host. You just have to enable it on the service security
group, sorry. So once you're in
the security group, added security group,
add rule S such, and then you can mention
that 172 u one set 231 dot zero dot zero slash 16. 31 series, anyone
can do SS such. Now, if you just save
the rule over here, now, SSSuch is allowed from 31. I will do SSSuch don't
have the key though, so I'll not be able to log in, but you can see that this accepting the connection for me. I don't have the key, so I
will not be able to log in. But if I have the key, I can put the key over here and I can log in
easily because I have opened this connection
over here to this iPre series because
I am browsing it from North Virginia region to the server which is
hosted on Mumbai region. Once the peer is connected
and when you have added the route to the
peer, um, actually, when you have added
the peer to the route, it's going to be
very easy for you to configure a security group. Now also you can also create one security group
for this purpose. And then every time you
create a EC two mission, you use that security
group rather than the default one which gets created every
time you create a EC two because every time
you launch a EC two instance, you are going to get
a new security group, you have so many security groups because every time you
launch a EC two instance, it basically creates a new one. So to avoid that, you can
create a default one, and you can call
it as a default. And every time you create
a EC two instance, you can select that
particular security group. Okay. So this is just
a video to showcase how it's able to communicate
via another VPC. Now, do remember the VPC are charged based on
the number of VPCs. One VPC is going to be free. So if you're going to
have multiple VPC, it's going to be a problem. So firstly, remove the
pairing connection by action. Delet pairing connection. Direct route table as
well, select that option. So this will dirote
table on both ends, and then you can
see it's deleting. Once you have done that, go ahead and delete the
VPC, see if it allows you. No. It still needs the um, I think resource needs
to be terminated. So that's our uh
EC two instance. So just going to shut down our
EC two instance over here. So this is gonna save
some money for us. So the sooner you get it done, sooner you take it off, right? It's not going to be
charged because it could be 1 hour time at
any point of time, so it's better to take it off. So just remove the subnets. These are the extra subnet
I've created with this name. So the subnets are deleted. I think I can try deleting the VPC now; it may take a little bit of time. Then let's remove the route table as well. The route table is not going away because it's the main route table; it may take a little bit of time for these leftover associations to get deleted. Let's try to take it down. Okay, now it's going down once you delete the subnets. The VPC is deleted now, and you should be good with
the route tables as well. So this will also get removed and your pairing connection
is also deleted. This will stop you from getting charged or paying an extra bit. So here, the peering connection,
to tell it separately. In terms of route, it will
remove the route as well. So subnet, we have not created any VPC, we
not created any. So that should undo all the work we have
done so far and it will stop you from getting
charged the next day morning. Thank you again for
watching this video. If you have any questions, leave it on the question section. I really appreciate
your feedback because by your feedback is how I'm going to make sure that I improve my work and improve
my training based on that. Thank you again.
I'll see you on the next video.
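Before the next lesson, here is a hedged CLI recap of the peering flow from this lab; all of the VPC, route table, peering and security group IDs are placeholders for whatever your own console shows, and the CIDRs simply mirror the 172.31/172.32 ranges used above.

# Create the second VPC in the remote region with a non-overlapping CIDR
aws ec2 create-vpc --cidr-block 172.32.0.0/16 --region ap-south-1

# Request the peering connection from the North Virginia side
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-1111aaaa --peer-vpc-id vpc-2222bbbb \
    --peer-region ap-south-1 --region us-east-1

# Accept it on the Mumbai side
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-3333cccc --region ap-south-1

# Add a route to the remote CIDR through the peering connection, on both sides
aws ec2 create-route --route-table-id rtb-4444dddd \
    --destination-cidr-block 172.32.0.0/16 \
    --vpc-peering-connection-id pcx-3333cccc --region us-east-1
aws ec2 create-route --route-table-id rtb-5555eeee \
    --destination-cidr-block 172.31.0.0/16 \
    --vpc-peering-connection-id pcx-3333cccc --region ap-south-1

# Allow ICMP (ping) from the other side's CIDR in the remote security group
aws ec2 authorize-security-group-ingress --group-id sg-6666ffff \
    --protocol icmp --port -1 --cidr 172.31.0.0/16 --region ap-south-1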
60. Labs Regions Introduction: Hi, guys. Welcome back
to the next video. In this video, we are
going to understand what is the relationship
between regions, available T zone
and edge location. This is the first topic
in 3.2 task statements, so we can understand each
of these components, and then we are going to understand the
relationship between them. So to proceed further, I've given you the
world map over here. Let's just say that
we will also review the world map in terms of
AWS in their website itself. But let's just try to assume
that this is your world map. In this world map, one of the region in your world
map is us-east-1. This is the name of the region
which is customized by, you know, AWS to keep
it as a region name. Firstly, you'll understand
region on this video and then we'll go to availability zones and then the edge locations. So firstly, let's start with
region. What is region? Region is basically
on a general term, talks about a specific
location in your map. For example, America
is a region, and so I mean, in America, you have South America,
you have North America, which is again, region
classifications over there. So when Arabs wanted
to mention regions, so they wanted to make sure that now regions are
separated by names over here. Now in this name over here, you can see US East one. So what this means
is that it has the scope of going East
two and East three as well so that they can actually increase the number of
regions within this US East. Now you can see that US
as it's United States and you can see the East saying that it is
on the east side. So towards the New York side is basically called
the East side. West side is basically Los
Angeles and Las Vegas. Those are considered to be the west side or
west end of the US. And then the central regions
are basically Texas o, you know, which comes
in the central region. So, likewise, what they have
done is like in the East, they have made it one,
is the first location which the AWS has started
with their services with. So this is basically a region. Called USC's one, which
has multiple data centers. So the minimum number of data centers a region
should have is two. This is the minimum
requirement for AWS to have fault tolerance
between two data centers. This is the reason why whenever a new
region is coming up, they will make sure that
region has at least two or two minimum two is maintained as the data centers as
part of the region. Data centers is called as availability zone which we
will talk in the next video. Now let's go into the
global infrastructure page: go to Google and type global infrastructure. Sorry about my throat. So someone goes to AWS Global Infrastructure and they will actually be going to this website, which is aws.amazon.com. And here you will see
global infrastructure. So once you're in the
global infrastructure here, you can see all this information like how many regions are there, Availability Zones are there, CloudFrind pops are there. Now, we will talk
in detail about each one of this, so
don't worry about it. But do remember that
um when you are talking about um, regions. The questions would not be about how many regions
are there in AWS. The questions
wouldn't be like that because regions keeps growing, so they will not be asking you, how many regions
are there in AWS. So those kinds of questions, like the number of regions or the number of availability zones, would not be asked. But they would be asking you: what is the minimum number of availability zones that would be part of a region? This is a mandatory requirement that AWS fixed for itself, that a region should have a minimum of two availability zones so that it can provide redundancy and fault tolerance in the case of a data center failure.
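By the way, if you want to check the same thing from the command line instead of the website, here is a minimal sketch using the AWS CLI; I'm assuming the CLI is already installed and configured with credentials, which we are not doing in this video.

# List the regions visible to this account (add --all-regions to include opt-in regions)
aws ec2 describe-regions --query "Regions[].RegionName" --output text

# List the availability zones inside one region, for example us-east-1
aws ec2 describe-availability-zones --region us-east-1 --query "AvailabilityZones[].[ZoneName,State]" --output table

Every region you query this way should report at least two availability zones, which matches the minimum requirement we just talked about.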
Now, if you come down on the page, you will see that North America is actually highlighted. You can see this
is South America. You can see this is this
is of at East Asia, and then you have
West Asia over here. So Asia is divided
into two as in, like, it includes Europe
and as well as, like, your African continent, as
well as your Middle East. And then in terms of the, um, you know, I'm sorry, this is West Asia and
this is East Asia. Sorry. Sorry about that. If I've just confused you. So so you can see
that over here, you have regions over here, which is popping
up in blue color, I'm sorry, green color. And these are basically regions. The regions would have a certain set of
availability zones, as you can see over here. The minimum I have seen is two, but I don't see anything with just two availability zones these days; mostly what I see is three availability zones. This particular region is in Brazil and this one
is in South Africa. And these two are in Australia, as you can see, and this one, they are building
up data, I mean, a region which has at
least two data centers, availability zones in Auckland. You have one in Jakarta, Indonesia, you
have in Singapore, you have in Malaysia, then there's one coming in Thailand, and you have one
coming in Taiwan. Taiwan is basically,
uh, you know, the Taipei I guess,
that should be the location where
it's going to come. And you have one in Hong Kong, and then you have three
in China, actually, two in China, one
in South Korea, Seoul, and you
have two in Japan. You have two in India, as well. There's nothing much
in Russia over here. This region is Russia, Mongolia is, like,
completely empty over here. Neither is these countries
like Afghanistan, Pakistan, and there's
nothing over there. Have one in Bahrain,
you have one in UAE and you have one
coming up in Saudi Arabia. So each region when
I highlighted, you can see the number
of availablety zone. You have one in Tel AV as
well, which is Israel, and you have one in Ireland, one in the United Kingdom. Now you can see
that United Kingdom also has three
availability zones. So more or less,
everything maintains three availability
zone as by default. And you can also see interesting thing called launch date. So when this availability
zone came into existence, so you can see Ireland one is much earlier than the
United Kingdom itself, because I bet it's because
of all the, you know, land acquisition and
building data center just a little expensive
task in the United States. Then maybe in Ireland. That could have been the reason why is it has come up late. Now, the only two
places where you have more availablet zone is
basically your US East one, which is your Northern Virginia, um, and this has six
availability zones. The other one is Oregon. So this one has about
four availability zones. Apart from this,
I have not seen, which has more availablety zone, but this may change
in the future. Now, um, is it better to have more than three
availability zone? No, it's not. If you have two availability zone,
that would be enough. Three is just to
break the balance. When you have two, if you have
three, breaks the balance. So that's why three is more
or less preferred everywhere, and that's the default amount. But two itself is
enough because you will have the primary
data center and you have the backup rotundancy
one in case the primary data center is
going to be having any issue. So these are the regions, and regions has a name. So when you go to
the AWS console, you will see all the
regions over here. Some are enable for
you, some are not. So if you can see that there are 13 regions is not
enable for my account, so I can just
middle click on it. Okay, so middle
click didn't work, so you just go to the bottom of this screen and then you
click on Managed regions. Okay, so now it's
going to load up this screen and you will see
the regions just disabled. Now you can actually
enable this region by selecting these regions and
clicking on Enable button. But I'm not using the region, so I'm just like
it to be disabled. And also those regions
which you're not using, you can also, you
know, disable them. But that's something you cannot
do with the default ones. So the default ones which
you have out there, it requires you to
have, you know, that needs to be enabled
that is enabled by default, which means that you cannot
actually disable them, but you can enable and disable these extra items if your
customer basis over here. So that's the reason why it
is not enabled by default, so that's something which you can enable later
point of time. But these are the regions. Each region will have a name. So you can see that East one and East two are there, and then West one and West two are there. So you have East one, which is your Northern Virginia
and then Ohio. So these are two places
in your east side. And then in west side, you have the California and Northern California
and as well as Oregon. So these are two
locations which you have. So likewise, you have
different locations over here, like Asia Pacific: you have Mumbai, which is ap-south-1, and then the Hyderabad region will be south two, I guess, over here. So yeah, ap-south-2 is the Hyderabad region. So this is for India. So likewise, they try to
use the number to keep increasing it so that they can have multiple regions
in multiple cities. So this will really
help them out to globalize and have
presence around the globe. Now, as I told you,
these are regions. So in the next video, we will understand what are
availability zones. Thank you again for
watching this video. I'll see you in the next one.
61. Labs Wavelength, Local Zones, Latency Hands On: Hey, guys, welcome back
to the next video. In this video, we are going
to talk about latency. Now, what is latency? Latency is basically
the slowness which you experience when
you browse a website or open an application or when you are typing
something on the um, you know, system and then
you get a later response, you get a slower response
from the server. When you type
Enter, for example, you're doing a big
command over here. You're not listing the files like 10,000 files
in this folder. You just pressing enter. When you press the enter itself, you will see that it gets
slower on your response. So the smoother response is
basically when I hit Enter, it should be executed
immediately. But if there is a
milliseconds delay, you will not notice. If there is seconds delay, you will start noticing the
difference because, uh, your eye and your hand
coordination will be a little different from
what you have seen previously. So that is an experience
where you can see, um, you can see the
term called latency. Now, this latency comes
when I type this, okay? But there is a way of
actually measuring latency. For example, let's see, what it looks like when
you don't have latency. So I have prepared a diagram. Okay. So I've created a diagram, so I will slowly uncover this particular document
which you see out here. So first, it's like
intra-AZ communication, which is sub one millisecond to two milliseconds, the fastest communication available within the same availability zone. Now, I can show you something even faster than that: when you're pinging localhost. Localhost means itself; it is the loopback IP address, and localhost is the default host name given to it. So when I ping localhost, you can see the time taken here. So this is the fastest you can ever see, because it takes about 0.026 milliseconds. It's so fast because it is able to reach itself. So that's perfect, right? But you will not be able
to see this kind of response in any other
network related stuff. Even if you ping this yourself. Like for example,
I go in over here, I've launched certain
EC two instances. I'm going to show you that. Let's call running instances. So I have three instances
running over here. I'm actually logged into this one via the CLI, and I copy the private IP address. Okay, I'm not using the public IP address as of now; I will use it later. When I ping the private IP address of this particular server from itself, you will see a small level of latency here, which is almost the same as what you saw over here. So it is 0.026 over here, and 0.027 and 0.028 are coming. Likewise, you see a similar level of latency because it is the same IP address on the same system.
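If you want to repeat this latency check on your own instance, here is a small sketch of the ping commands; the private IP address below is only a placeholder, so replace it with the private IP of your own server.

# Ping the loopback address first, the fastest possible round trip
ping -c 4 127.0.0.1

# Then ping the private IP of another server in the same availability zone
ping -c 4 172.31.10.25

The -c 4 flag stops after four packets, and the summary line at the end prints the min, average, and max round-trip times, which is the average value I keep referring to.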
Now, what if I ping another system which is on the same availability zone? So you can see that. This is another system which
I created for latency test, and this is actually created on the same
availability zone. Okay? So you can modify the availability zone when you create it; I've also shown you this already. So use that to put your availability zone there. It's very simple: launch an instance, and then in the instance settings, you just edit this and then select which availability zone you want. That way, it'd be very easy for you to select which availability zone you want. Copy this and ping this one. This is the next server
which we have created, but it is on the same AZ. You here in this diagram, this actual diagram
starts from here: the intra-AZ communication, availability zone to availability zone within the same AZ. I can see that it is coming in around one millisecond, so it's basically not in the 0.0 range, right, because it's not pinging itself; sometimes it gets 0.9, right, but most of the time it is about 1.5, 1.6, 1.8. So if you average it, it's going to be about 1.5 on average. You can see that the average is 1.44; it has already been averaged over here. But here, the average was 0.025. So it is pretty fast, to be very frank, okay, because it is connecting to a server right next to it. Okay? Now, that is the first one which you
have seen right now. The next one is like
cross availability zone. So I've created this server in another availability zone, 1f. Okay? So launch an instance and create it on 1f. So this is another availability zone. So let's go ahead and ping this one. Now, you can see that if it is in a different availability zone, it's in a different data center, and this takes a bit of extra time, but it is still around the average. So we have closed it at 16 packets, so we'll close it at 16, I suppose. You can see the average has increased a little bit over here, from 1.44 to 1.799. So which means that it is taking a bit of extra time, but there is not much of a difference in terms of your average time to live; I'm sorry, average time, that it is getting a response from the server. So this basically tells you that the availability zone on the other side is also fine, okay? Now let's talk about region to region. This is an extra configuration I did for
the region test. So if I middle-click the EC2 link and change the region to Asia Pacific (Mumbai): I've created a service over here, I'm sorry, an EC2 instance over here, and this is running on ap-south-1a. That's the availability zone. I have configured the VPC route table and then configured the communication between them; that's why it's working. If you're thinking it's not working for you just by creating it, that's right, it wouldn't work, because it involves a lot of work on the back end. It's called peering, peer-to-peer communication between the VPCs. So now you have to think or see what output you're actually going to get, because this particular IP address is still AWS's own private IP address, as you can see; I matched it the same way, but I've given a different series over here: this one is 172.31 and this is 172.32, or else it wouldn't connect; it will throw an error if it's on the same network range, the same CIDR. So when I ping this, you can see there are a lot of
milliseconds over here. So it's 187
milliseconds over here. So this is called
the cross-region latency; let it come to 16 packets. Cross-region on AWS, okay? So you can see that 187 milliseconds is the average. So you can actually connect one of your AWS services to another AWS service hosted in another region. The only problem is that the communication here will have a lot of lag; I'm sorry, latency, would be there. Sorry. I just go down a
little bit over here, cross A communication is one, two, uh, ten milliseconds. Intra AZ, we have checked it. Cross AZ, we have checked it. Region to region, as
you can see that it comes from 50 milliseconds
to 300 milliseconds. So we are still fine,
because we are 50-300. We didn't go to the
maximum of 300. Maybe maybe if I would have created a service on the corner, like maybe on the
Tokyo region or, um, in this region, maybe on China region or
could be in Australia. Then it may have
300 milliseconds, difference, it could be
that. So we don't know. So we have to try that.
Okay. Now the next thing is going to be edge
location, CloudFront. Now, this is where it takes a little bit of, you know, effort from you. So here you have the edge location, CloudFront. If you're going to create CloudFront in this situation, then you kind of get a response within one millisecond to 20 milliseconds. Now, this latency is reduced for the end user if that end user is near to the CloudFront edge location. So that is also conditional, because as we can see, CloudFront is not everywhere. Okay? So if I say that if
I see in India, right? So we check the edge
locations, right? So in India, there's Pune, there's Mumbai, there's New Delhi, there's Chennai, there's Bangalore, there's Hyderabad, right? So if someone is browsing from none of those locations, say they are browsing from Kerala, from the southern end, then they will be going to the Chennai CloudFront. And then, for the first time, it will go to the Mumbai server, get that resource, and then that is cached in the CloudFront edge in Chennai. But then they will still get some lag over here, or latency over here, because they will still have to go through Chennai; people browsing in Chennai itself would be a bit faster compared with those in Kerala or beyond, because of the distance over here. So likewise, if you go to the southern end of it, then you're going to have a problem; you still will have latency in that perspective as well. But, you know, it's better than coming all the way from Mumbai, right? So that latency is
what they have given over here in this
as 20 milliseconds. So one to 20 milliseconds. So it can be faster as one millisecond when you
are like when there's edge location and the end
user is closer to each other. But if they are far away, it can go to 20 milliseconds. So that's what we
have seen so far. And then in terms of local
zone and wavelength. These are two things I told you we will check
it later, right? Now, local zones are basically, I'll show you over here. Now, you will not be able to see local zones as a region name, name of the places like
how you see H location, but then you will able to
see local zones over here. Now let's understand what
is a local zone first. Now, wavelength and local
zones are both designed to extend AWS infrastructure
closer to end user and devices, but they serve slightly
different purpose and targets different use cases. First, let's understand
about the wavelength zone. Now, the wavelength if
it comes wavelength or someone says wavelength to
you or in the question, think about five G. Five
G is a wave length, so which travels through a wave. So an AWS wavelength is a specialized AWS
infrastructure that embeds AWS compute and storage service within telecommunication
provider, like the data center at the
edge of a five G network. This minimizes the
latency caused by routing traffic from five
G network to the Cloud. You use the telecommunication
five G network, to transfer the data to
cloud, which is much faster. You know the speed of
five G? It's really fast. Not a lot of places has
this wavelength zone. Now, wavelength zone is used for ultra low latency services. Wavelength is used for
ultra low latency, which means that
it's going to be very expensive because it uses five G. What kind of situations you would
use ultra low latency? Autonomous vehicle driving. If you look at Los Angeles and other
part of California, you will see there'll be car going on the road
without a driver, right? If you've seen that, it uses the five G technology to
have ultra low latency. We're not talking about
low latency over here. If you see this
diagram over here, we're talking about
ultra low latency, which is lesser than
ten milliseconds. That is like as though you
are right next to the server. So here you see that cross AZ. Cross AZ is the one
which actually gave you one to ten milliseconds. What ultra ultra lens zones is promising is less
than ten millisecond, often less than
five milliseconds. So basically, it uses
the five G applications, which is given by your
um, data provider, telecommunication provider,
like if you're talking about AT&T or if you're
talking about Verizon. So likewise, they use
those network for a faster delivery of data with ultra low latency for
autonomous vehicles, AR/VR as in augmented reality or virtual reality, and real
time gaming purpose, right? So likewise, you have different
situations where you're going to use this kind of Wavelength Zone. So if you see the global chart over here, you don't have so many Wavelength Zones in all the locations. So you have eight Wavelength Zones in Northern Virginia and you have some in Oregon. Now these Wavelength Zones are basically concentrated around the regions of California as well as New York state, as you can see over here. Now, this is because nowadays a lot of autonomous
driving cars are coming, Robo taxi is coming as well. So for those situations, it is used, and also AR VR. So here, you can see in London, you have also wave um, two wavelength zones are there. Now, this is going to be part of your availability zone as well, but it provides a critical infrastructure
for your application. Now, let's talk
about local zone. Well, when you want to
talk about local zone, local zone is not as fast as lower latency
as a wavelength zone, but it is still low when you talk about cross
AZ communication. Now, how you can make this, you know, low latency, by having a mobile
data center or having a data center which
is close to Metro cities. For example, if you
see Northern Virginia, you have 14 local
zones over here. Now, this 14 local zones will be divided into a lot of places. I'm pretty sure in the
New York, uh, you know, in New York strip, as well as in the Oregon regions Bo you can see seven local
zones over here. You don't have local
zones in terms of London. Okay, but you have
local zones over here. So what is local zone? As I told you, local zone is basically extending
your AWS services. So what are the services extended? Your compute, storage, and database. So these three services are extended, and where they are extended, it's closer to the end user, by placing them in specific geographical locations, which are very highly populated areas, okay, very highly populated areas wherever they have them. Let's see if the local zone is
in Japan region. You can see that there
is one local zone because Tokyo is very
highly populated. So they have a local
zone over there. So which can actually give faster response
to the customers, as well as the people
who's using AWS as well. Let's see if Mumbai
has a local zone, Mumbai also has two
local zones over here, which is, again, used
for um you know, faster communication,
and it will have user basis accessing
your application much faster because you are
closer to the local zone. Now, the Mumbai region doesn't have a Wavelength Zone, which must be because 5G was only recently released in India, or could be because autonomous vehicles or AR/VR are not as popular there. So once that gets popular and people start developing stuff using it, and when you have real-time gaming coming into the picture, then AWS will be releasing Wavelength Zones there as well. So right now, what we
understand so far is that the compute service storage service database service will be hosted closer to the
highly populated areas in terms of Local Zones. Now, this reduces latency for applications that require low-latency access to on-premises data centers or to users in specific locations. Local Zones are useful for workloads that need low-latency communication but don't necessarily require the ultra-low latency of Wavelength Zones.
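If you want to see which Local Zones and Wavelength Zones your account can opt into, here is a rough sketch with the AWS CLI; I'm using us-east-1 as the parent region just as an example.

# Local Zones visible from the us-east-1 parent region, with their opt-in status
aws ec2 describe-availability-zones --region us-east-1 --all-availability-zones --filters Name=zone-type,Values=local-zone --query "AvailabilityZones[].[ZoneName,OptInStatus]" --output table

# The same call with zone-type wavelength-zone lists the Wavelength Zones instead
aws ec2 describe-availability-zones --region us-east-1 --all-availability-zones --filters Name=zone-type,Values=wavelength-zone --query "AvailabilityZones[].[ZoneName,OptInStatus]" --output table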
So these are the actual differences between both of them, and this is exactly how latency looks when you measure it. Now we have done all
this latency check. Let's do a latency check
for our public IP address. Now you know this
public IP address over here, this one doesn't have a public IP address, so ignore it. So let's take the public IP address over here. So we'll take one of these; it doesn't matter which one you pick. So if you just ping the public IP address of the server, you will see that it has some latency over here, but it is very low. Take the public IP address from this one and try to ping this. It's not working
for some reason. Let's just refresh this. Let's try this one. Yeah,
1.554 milliseconds. So this is basically your
public IP address response. But what you found, which
is very good for you to understand is that when you go region to region,
you will get this. Now, you can also try ping google.com and
you can also get the, you know, time it takes for
you to respond over here. You can also check
aws.amazon.com, and you can actually check the time over here. So this is much faster than google.com, you see that, because this is, again, coming from cloudfront.net. So this is how it comes, basically. So here also, I think it is also coming from something like a CloudFront equivalent, which is Google's own service. So, likewise, this is basically how to find the latency, as it's coming from CloudFront. This is a very good example. You can see that it's much faster than actually pinging the public IP address. Here I'm pinging, I mean, sorry, pinging a public IP address over here, which was 1.25 and 1.54, so the average will be high. But here it is 1.25 and sometimes it comes in at 0.8, and that will reduce the average. You can see 11 over here, and the average over here is 13, which means CloudFront
is really working. I'm in the London
location right now, the Cloud front is coming from the London location
at this moment. Thank you again for
watching this video. I hope that it was
helpful for you to understand how latency works and how your application
works in terms of latency. We have direct
connect here as well, which we missed to explain. Now you can see that
public Internet over here is the highest
one, where you have higher, variable latency due to network congestion and routing. And Direct Connect, we discussed it already; this is basically having your AWS infrastructure directly connected to your private infrastructure. So in this situation also, you get around one millisecond to ten milliseconds. Now, the Wavelength Zone is much faster than Direct Connect itself. You can see how popular these two are getting right now. These two are recent inclusions; that's why they made them part of this certification. So these are added as part of this certification. Though there are no questions
about them in this skill, but still it's very good to know about these two new services. Thank you again for watching this video. I'll
see you in the next one.
62. Labs Auto Scaling Group With Dynamic Policy Hand's On: Hey, guys. Welcome back
to the next video. In this video, we are
going to talk about one of the important
services on AWS, which is auto scaling. Now, auto scaling is part of EC2, and it is mostly EC2 Auto Scaling that we're going to see, because there are other kinds of auto scaling out there as well, and we're not talking about those. So if you go to the EC
two console over here, you will see on the bottom
autoscaling groups over here. Now, this will give you a brief idea about auto
scaling, what it is, and you can read
the documentation, you can see the benefits
and stuff like that. But we already know
what is autoscaling. We already did configure autoscaling on the
previous videos. We did configure auto scaling on ECS instance when we configure auto scaling
was part of it, and there was no option
for you to disable it, and that's the only
way it goes on. Let's create the auto
scaling group over here. Now, give a name for this
auto scaling group over here. I'm going to give this
as something like http-server-asg, ASG as in auto scaling group. Now, do remember that you need a launch template for you to
create auto scaling group. Now, what is launch template? Why do we need it
all of a sudden? Um, let me explain you that. If you go to Easy to console, you have multiple instances
running over there, right? So if you look at these, ignore all the terminated instances. But if you have certain instances running now, can you make those instances part of your auto scaling? You cannot. If you already have instances running, you cannot make them part of an auto scaling group. You have to create instances from scratch through the auto scaling group. That's the whole agenda of auto scaling, which means that I cannot add an existing group of instances into an auto scaling group. Second thing, if you remember, we did the ECS lab. In that ECS cluster as well, we had to create new auto scaling servers through an auto scaling group via ECS, the servers which are part of serving the containers. We had to create a new auto scaling group, and then through that you'll be able to create servers and then containers within that. The same logic
applies over here. Your existing instances
will not qualify. But what you can
do is if you have existing instance running and you want to make you want to, you know, create such
instance type as a template, you can select that and
you can right click it. And then go to image and template and create template
from this instance. So what it will do is it
will create a template, just like this instance
on the template section. And that template,
once you've created, you can come over here and then select that template which you've created I'm sorry,
which you have created. Now, if you don't have
any launch template, just click on Create a
launch template and we will create a new launch
template right now. I already have a launch template
which is EC to this one, this particular launch template. So we will be looking
at it how to do that. So the HDDPI EC two
was the name of it. So this is a launch template. And very simple details you're going to give
about launch template. It's not like, um, it's a complex stuff for all. So we are going to give a
name of a launch template. We are going to give
the instance type. Which includes instance
size and instance family. We are also going to specify auto scaling guidance over here, whether you need any guidance for this; let it be enabled, we will go with the default stuff. You are also going to choose a quick start operating system, what this particular template is going to have as an operating system, so we're going to select that. Then here in the instance type, we are going to select the free tier eligible instance, or whatever instance you want to select, and then give a key pair. The key pair is important because for those servers which get created as part of auto scaling, we use the same key pair here. Now, you don't have to
mention any network settings, we can do later as well. No volumes is needed. Because as you can see that the general purpose volume is already getting
assigned over here, you can increase
this to 20 GB or so, where in which you can
accommodate more file size on each system, each server. So create a launch
template over here. They should create it for you, but this name already exists, so I'm just going
to ignore that. I'm going to go over here and select that instance over here. I've created it in the same way I just
showed you right now. Do keypair, uh, you have
to micro over here. It's not a spot instance. Sorry about that. So click on next over here
and you will see the requirements in terms of the launching
options over here. So there's not much of
a change over here. This has been imported
from your launch template. Okay, so these settings are imported from your template, right? One of the important settings you're going to see is the network setting. Here we're going to use the default VPC, and here we are going to select all availability zones. So our servers will be created in any of the availability zones whenever we request a server to be created. Now here, 'Balanced best effort' for availability zone distribution means auto scaling will automatically attempt to launch into the healthiest availability zone. We will keep that as the default option,
go to the next page. Now, these are the
options in terms of your autoscaling advanced
options over here. Now, to remember
that these options can be modified
later point as well, so you can configure
it over here right now or you can configure
it later as well. So we don't have any
load balancer we're not understood on the hands on
about the load balancer yet, we will be looking at
it later point of time. But if you have a load balancer, if you're planning to attach
an existing load balancer, you can do that. That load balancer becomes part of your auto
scaling group. Then here, VPC Lattice basically talks about the facility of associating your VPC Lattice target group with the auto scaling group's target group, which makes it easy for network availability, for improving network capacity and capability, and it also makes scaling easier. Now, if you're selecting 'attach to VPC Lattice', you need to have a VPC Lattice service created, but we have not done that so far, so we're just going to go
with the default option. Now, here in the health check, we're going to use the
default option over here. The EC2 health check is enabled by default. You will have a monitoring section where you can actually look at the monitoring of all your EC2 instances as well as your auto scaling group. The health check increases availability by replacing unhealthy nodes, but by default it's going to be already enabled, so you don't have to really worry about it. Here there are additional health checks that can be attached and used for that, but they are again charged separately. In terms of monitoring, use the default monitoring which comes with the
autoscaling group. We're not going to enable any monitoring on Cloud watch or instance warm up at this moment because it's all
charged separately. So we're going to go
over the next one. Then the desired capacity is, minimum capacity is one, and maximum capacity is five. Let's talk about this. What
is the desired capacity? Desired capacity is basically the current capacity: when you create an auto scaling group, immediately, the current capacity of the auto scaling group is one. You cannot go to zero because your minimum capacity is one. If you set desired to zero and minimum to one, you'd be creating a group with a zero capacity; that is not going to work out for you, so the minimum and the desired should always be the
same when you create it. But then what happens is
if the system realizes once the policy we have configured we are not doing
any policy at this moment, when we configure a policy, based on the policy, the
desired capacity would change. If I specify a policy like a target tracking policy on a dynamic scaling, I will be saying that if my CPU average reaches 50 percent, you need to spin up more servers to handle that request. Now, when you are in that kind of scenario, your policy gets enabled and it will look at the CPU utilization. If it's crossing 50, then it's going to increase
the capacity automatically. So that is based on the policies which
you're going to do. So if it goes for two, then your desired
capacity will be two. You minimum will be
one, maximum is five, which means that it can go up to a maximum of five in terms
of desired capacity. Okay? So this is what we are setting as our
rule over here. So you can scale out from 1 up to 5 and scale back in to 1, because 1 is the minimum and desired capacity.
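Just so you can relate this to the API, here is a small sketch of how the same capacity numbers would be set from the AWS CLI on an existing group; the group name http-server-asg is only the example name we are using in this lab.

aws autoscaling update-auto-scaling-group --auto-scaling-group-name http-server-asg --min-size 1 --desired-capacity 1 --max-size 5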
Here in the maintenance policy, we are given multiple options. This is basically to say how many servers should be available when an instance
go for maintenance. For example, there are
some maintenance which happens in terms of
instance refresh, and what is the maximum number of instances should
be available. So in that situation,
currently there is no policy, which means it uses a balanced behavior, where new instances would be launched before terminating the existing ones. Here is the same thing, launch before terminating, which means a new instance would launch and wait until it is ready before terminating the other instance. So this is 'prioritize availability', and this is recommended. Alternatively, you can launch and terminate at the same time; that one is cost saving, but then it involves a little bit of downtime as well. So it's always better to go for prioritize availability; that is a little more money you need to spend, because you're going to have the new instance launched and wait for it to be ready before terminating, which means that you're going to have an extra instance which is going to be ready, and then once it is ready, the other one will be terminated. So this is always recommended, I'm sorry, this is always recommended when working with a real-time environment. All right. So this enables
scale in protection. When you enable it, whenever
your server gets scaled out, when it tries to scale in, it will protect it
from scaling in. So which means that the
server size will not reduce because this
protection is enabled. So disable this as of now
and click on next one. You can add SNS topic for this particular
instances where you can send notifications with a simple notification
service over here. Click on next. We don't
need that right now. Add tags over here if you want to add tags, and then click on Review and Create Auto Scaling Group. We have created the
auto scaling group. Now the capacity is updated. One instance needs
to be created. You can see that on the
instance management, what instances are being
created over here. So one instance is created
and one instance is running. To have good visibility, you can go to the
EC two dashboard, C instances and you can see that one instance which
is running over here. Now, this is the same instance which is configured over here. So this is basically spinned up by this particular
auto scaling. So you can see that
the activity screen, you will be able to see the actual increasing
the capacity from 0 to 1.
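If you prefer the command line, the same activity history can be pulled with the CLI; again, the group name below is just the example name we used in this lab.

aws autoscaling describe-scaling-activities --auto-scaling-group-name http-server-asg --query "Activities[].[StartTime,Description,StatusCode]" --output table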
As of now, we have not done any policies over here; these are the default
options we have put in. Whatever the default
options is over here, you can modify these
options. You can see lats. You can see balancing, you can see health checks. You can see instance
maintenance policy over here. You can see advanced
configuration in terms of the um, working with the
autoscaling group. Now, where is the policy? The policy is over here under automatic scaling. There are three different types of policies over here: one is the dynamic scaling policy, then predictive scaling, and scheduled actions. You can select from
either of this policy, which can actually
make this capacity go up and down based
on, um, you know, instance, um, load and average whatever
policy you're configuring. You have the monitoring
screen over here, here you can see
the monitoring of your auto scaling
instances here and you can see the EC two
instances over here. If you have multiple
EC two instances, it's going to bring up a
consolidated version of it. Now do remember the monitoring on these instances
would take a little bit of time for it to accumulate data because every time
you start a new instance, the data of that would take time to populate over
here. All right. So now let's talk about
the policies over here. The first policy I
want to talk about is creating a dynamic
scaling policy. A dynamic scaling policy, click on it, has three policy types over here. You have target tracking scaling, you have step scaling, and you have simple scaling;
let's talk about target tracking scaling first. Now in this one, you'll be
monitoring a specific type of resource or metrics
in your EC2 instance, for example average CPU utilization. So when the average CPU utilization exceeds a target value of, say, 50 percent, it's going to create a new instance; in other words, it's going to scale out.
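For reference, the same kind of target tracking policy could also be created from the CLI; this is only a sketch, and the group and policy names are hypothetical.

aws autoscaling put-scaling-policy \
  --auto-scaling-group-name http-server-asg \
  --policy-name cpu50-target-tracking \
  --policy-type TargetTrackingScaling \
  --estimated-instance-warmup 300 \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'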
Now, you can also configure something called instance warm-up. Instance warm-up
basically is it will give some specific
timelines as in seconds for the instance to warm
up and then come in or be part of the
auto scaling service. This example I can
give you in terms of when you're working
with applications, which takes about
five to 10 minutes after the instance starts up. Then in those situations, it is always recommended to give that particular seconds of
five to 10 minutes seconds, convert that into seconds
and give it over here. This will ensure
that the instances would not come in to the auto
scaling group unless it is, you know, it get passed through that 300 seconds or whatever seconds you are
going to give over here. Now, that's the
first one over here. The second one is step scaling. Step scaling is basically it uses the integration
with Cloud watch alarm. Now, when you create
a Cloud watch alarm and this alarm goes off, and then basically the
action is taken over here. Now you can add multiple options over here where
you can add steps. The step scaling is basically
step by step with scale. So for example, if my alarm
reaches 50 percentage, I want to add one capacity
or one unit over here, right? And then I say add, and then I select a CloudWatch alarm; I don't have anything right now, you can see there's no alarm there. So when you add this one, then you can add another step over here, and the next step could be for scaling down. So it's like if it reaches 40 percent, say, you want to scale in. So likewise, it can be for adding capacity and removing capacity as well. So you can scale out and scale in step by step using your CloudWatch alarm. The next one is simple scaling. This is also using a CloudWatch alarm, the same kind of thing, adding and removing, but you will have only one
option over here to do it. So this is more of
a simple scaling. So let's go with target tracking scaling in this item, and then we will go with 20% as the target value, so it's easy for us to test this. Click on create, and it should create your tracking policy. And this tracking policy will be tracking your instance's average CPU utilization over here. Currently, your average CPU utilization is about 2 percent. Now, we have a command
over here which can be used for increasing the
stress on the system. You can use sudo yum -y install stress. So stress is a tool, and then you can run the command stress --cpu 1 --timeout 60, which runs for 60 seconds, or you can make it 600 seconds also.
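Here are the two commands written out cleanly; I'm assuming the stress package is available from your instance's yum repositories (on Amazon Linux it may need the EPEL or extras repository).

# Install the stress tool
sudo yum -y install stress

# Load one CPU core for 600 seconds; use --timeout 60 for a one-minute test instead
stress --cpu 1 --timeout 600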
How long you run it depends on how fast your application is starting. So to do that,
just go over here, select your instance, right click and connect your instance. Once you're connected
to your instance, you can run both these
commands of installing stress as well as executing
the stress as well. So I'm installing
stress right now. Let me execute stress. Now, this is going to
execute stress for 600 seconds on one CPU, because this is a t2.micro instance, so it has only one CPU. So you can check it yourself over here on this monitoring screen, whether it is getting increased or not. So it's not showing yet. Okay, so it has come through. You can also see that over here on the EC2 console under the autoscaling group dashboard. So we are expecting this to increase as we have
executed this command. This command is going to push load into one CPU
for 600 seconds. So this should take
a few more minutes, five to 10 minutes, sorry, two to 3 minutes
for it to reflect those numbers into
your monitoring, and then you should be able
to see the increased load, and if it goes beyond
20 percentage, you will see the desired
capacity would increase. So I'll come back to you
and sometime I'll pause the video and pause and
show you how this is done. Hey, guys. Welcome back. I was out on a little break. So, when I ran this command, what has happened is like I
was out for a little break, so a little break as in
more than 15, 20 minutes. So what has happened is like the capacity was increased 1-4, as you can see, and then it
maintained to four to five. Then the execution of the script was over and then it
started terminating 5-4, and then you can see
four to three and then three to two
and then two to one. It started reducing it
and started deleting, terminating those instances,
which is not needed because the script completed and then
the file system came down. I think in the monitoring, you will see the exact
statistics of what has happened. But unfortunately, the
monitoring has not come through. Okay. So you're not able to find it, but this is the overall
picture of the CPU statistics. I'm just going to do that
again one more time. This is the running instance, so I'm going to increase
the CPO on this one. I'm just going to connect to this particular
instance one more time. I'm going to run these
two commands in there. So let's just wait for it. All right, so now this
command is running now, so you will start seeing the changes very
soon. I'll be active. I just I was distracted heavily, so that's why I left this. So let's see if this increases
the load on the system. Spare with me. It should
come in very quickly. So the load has started to occur. I think this is the yum installation, due to the yum installation of the stress package, right? So on the network, you know, there was input/output; that's why the network is up. But the CPU should
be up momentarily. So let me pause this video and let's wait for
the CPU to come up. Okay, guys, so I can see that update is
happening right now. So you can see in the
monitoring over here, the CPU on the instance has reached 49%. So if you see the activity over here, it is going from 1 to 3 over here. So it has understood that the target tracking policy has been breached, so there's an alarm triggered, and due to that, in the instance management, you see two more instances are formed and they are in service. If you go back to the
autoscaling group, you are seeing the desired
capacity has become three automatically
without our intervention. Now, after the 60 or
600 seconds is over, or if I do a Control C, you will see that
instances which it has created will
automatically terminate, and then you will start seeing the services come down to
the desired state as one. I've canceled it. If you
see this EC two monitoring, you will see that it will
gradually come down right now, and then you will start
seeing activities like going 3-1 automatically
without our intervention. Let me pause this video
and show you that as well. Let's look at it right now. So I'm going into
the monitoring. So the load has come down. Let's see how many instances
we have right now. We have three instances. Let me see if there is a
plan for bringing it down. So eventually after
a few minutes, it should get it down, so it is not triggered
the scaling in process, but you can protect the instances from
scaling in so you can actually go to action and
set scale-in protection. So this will show as protected from scale-in; this will stop this particular server from scaling in, but I don't know why anyone would do that, I'm just telling you it exists. There's also another option over here where you can stop scaling in on the older servers. Here there must be
option here somewhere. In this particular item
called termination policy, termination policy can
be set to something like 'oldest instance' or 'newest instance'. This policy is basically the order of termination: with 'newest instance', the newest instance which was created will actually get terminated first rather than the old instance; the old instance would keep running until it is really required to terminate it. And here you can enable scale-in protection for new instances, so every new instance created would, by default, go on scale-in protection, which we don't need, because we want to scale in if there is no real requirement. So that's the whole
point of dynamic, where you scale out and you scale back in if there is not
much of a request spending. We'll give it some more time, see if it scales in
because it did previously, so I think it should it
is just a matter of time. Let's just hold on for
it to start scaling in. All right, guys, I
don't want to hold you guys any further, but this will, you know, reduce it
and then I'll post another update on this when
I'm creating another video. So that I'll show you that. But, it's normally takes about 5 minutes five to
10 minutes for it to realize that there is no more load on the
system and then it starts to scale in so
don't worry about it. I'll show you on the next video. Before I start the video, I'll just give you
a quick update on this to show you that
it has done that. In the next video, we will be talking about the two
other more policies, and then we will come to a closure of auto
scaling groups. Thank you again for watching this video, see you
in the next one.
63. Labs AutoScaling Load Balancer Hand's On: Hi guys, welcome back to the next video. In this video, we are going to talk
about load balancers. Now, in this video, we are
going to proceed further. Previously, we configured
the EC2 load balancer. I've terminated my instance and the load balancer
configuration as well. Now in this video,
we are going to work with our load
balancer over here. So currently I have a load balancer and
let me delete this. All right, I have deleted it. Now in this load
balancing session, we're going to use
autoscaling group as our load balancer. So at this moment, you don't have any
autoscaling group. Not a load balancer
available for you. For creating any
autoscaling group, which we have done earlier
is to launch a template. Let's go ahead and
launch a template now. Click on launch template, and then let me name this template web, then ASG for autoscaling group, and then LT for launch template, so the name is something like web-asg-lt.
Now this is going to be our web launch template
for our autoscaling group. Now when you are creating a template for an
auto scaling group, it is recommended to enable the autoscaling
guidance over here. Which will help you set up the template for EC
two autoscaling. That's the first thing
you're going to do. After that here,
you're going to choose the quick start and you're
going to select Amazon Linux. Then you're going to
create an instance type, which I'm going to
choose T two micro and then the default key
pair I'm going to use. And here you can select an existing security
group over here. So I'm going to choose
the default one, which allows every traffic so that I don't have to worry
about the groups over here. Subnets, you can select two subnets or one
subnet by default, or you can just leave it as include on launch templates.
So that also works. So you don't have to
really worry about this, and then you can remove
the network configuration. Don't need it. Only select the security group which
has Internet access. Here select the GB
size, making it ten GB. Now, finally, you
need to go down on the advance
details over here. In these details, you're going to copy and paste the same user data we used before, except that instead of 'Hello from EC2 instance', I put 'Hello from EC2 auto scale'.
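As a reference, the user data would look roughly like this; it's the same bootstrap script as the web server lab, only the page text changes, so treat it as a sketch.

#!/bin/bash
yum update -y
yum install -y httpd
echo "<h1>Hello from EC2 auto scale</h1>" > /var/www/html/index.html
systemctl start httpd
systemctl enable httpd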
Copy and paste this over here and then click on Launch template. Now, this is going to create
an easy to launch template. This is pretty easy.
Launch template is created. Let's verify. Now the launch template
has been created. Now the next thing is going
to create autoscaling group. Now, you can have your load balancer ready for the
autoscaling group if it's there, but you can create
a load balancer here itself on the
autoskiing group, so you don't have
to worry about it. Click Create an auto scaling group and give a name to the autoscaling group. Then here, use the web-asg-lt launch template. The default details get loaded over here; click on next. Now here you have to select the VPC and select the availability zones. I'm going to choose all availability zones here so that my servers can be distributed across all availability zones; click on Next. Now, this is where you can choose an auto scaling group
without a load balancer or attach an existing
load balancer where you can choose from the existing load balancer
which you have out there, or you can create a
new load balancer. Now when creating a
new Load Balancer, auto skilling group only
allows you either to create an application
load balancer or network load balancer. As this is an
application content, we will create application
load balancer, and then the load
balancer name is going to end with ALB, which says that it's an application load balancer. For internal versus Internet facing, choose Internet facing. You cannot deselect the subnets, because we have made the auto scaling group out of all the subnets, or availability zones; all availability zones are selected for your load balancer as well. And then here you have the HTTP port number; you can add other port numbers as well. Not right now, but once you have created it, you can go to the load balancing console and add more listeners to the existing load balancer. But while creating this, you can only add one listener over here, which is the port 80 HTTP listener, and that's the default one. Then here, you can
choose the target group. Now you can choose an existing target group or
create a new target group. This will actually create a TG; I'm just giving a name over here, and it will automatically create a target group, you don't have to specify it. Now, if you remember, in the previous one we created a target group; in that target group we had a list of instances which were created, and then we imported the existing instance into our target group and saved, or registered, that particular instance as part of the target group. Now, you don't have to do this, because as you're creating the ASG, or auto scaling group, whatever instances the auto scaling group creates will be automatically added to this target group. You don't really have to add each instance or register each instance to your target group; it is going to be automatically added. That's why I said you create a target group as part of the load balancing
we don't need Lets VPC and then we're going to go with the
default configuration, enable any of this
monitoring service or anything because it's going to be additionally charged. Click on next This
capacity is one, minimum capacity is one, maximum capacity is five. So auto skiing policy
optional, target auto scaling. So here there's a
option of choosing what kind of a policy
you want to select. So there is option of no
auto skiing policy where you can go ahead and do
it later like we did, or we can go for the default
one which is targeted autoscaling and a CPU average
value of 80 percentage. So once CPU average
value reaches 80% teach, this is going to spin
up new instances. Okay. Now, prioritize
availability by launching before terminating, launching an instance
and getting it to register before terminating
for maintenance policy. And then we don't want to
enable the scale in protection. So we have already discussed about it on the previous videos. Let's go to next screen, and then we don't want SNS. So let's go to the next screen. We don't want add any tags, and then click on Create
Auto screening group. Now this auto skilling
group would be created, and then your load
balancing will be created. So all those action
items will be created, you can see that one
autoscaling skilling policy, load balancer, one target group, one listener, and one
instance as well. So you will see an instance
will be running now. Okay? In instance is
created first instance, it's a desired state. Load balancer would have
created to Banza is created, target group would
have been created, as you can see over here, let me remove the old one, that doesn't cause
any confusion. Then you have the auto
scaling group created. When you go inside
autoscaling group, you have that it does reach
the desired capacity, minimum, maximum capacity
is defined over here. You've got all the other
configuration items. If you go to the scaling policy, we have enabled the
dynamic scaling. Which is for 80 percentage
over here on CPU utilization. Under instance management, we have only one instance configured, that's the desired one, and then monitoring and other stuff. We go to the load balancer, click on the load balancer, and you're going to get
the listener rules. So the default listener
is configuring and add additional
listeners if you need it. And network mapping, you can see that this map
to all the zones, all the availability zones
available under the region. If you see this
resource mapping, the HTTP AT listener connected to the rule
of the target group, and then the target group is presented and inside the
target group already, the server instance which we have configured is
already coming in. So don't you really didn't
have to do anything over here. So the target group
automatically pointed to this
particular instance which is running over here. Let's browse the website,
see if it works. Firstly, we'll browse that individual website,
see that one works. Let's copy the IP address
pasted over here. There you go: 'Hello from EC2 auto scale'. The instance is working. Let's see if the load balancing
URL is working or not. Copy the DNS name over here, and then put it out there. See if the load
balancing URL works fine, and the load balancing URL works fine as well. That's good news.
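If you want to check the load balancer from a terminal instead of the browser, a quick sketch is to curl the DNS name a few times; the DNS name below is only a placeholder, so paste in the one copied from your own load balancer.

for i in 1 2 3 4 5; do curl -s http://web-asg-alb-1234567890.us-east-1.elb.amazonaws.com/; done

Each response should print the same 'Hello from EC2 auto scale' page, and once the group scales out to more instances, the responses will be spread across them.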
You're able to directly browse it from the instance as well as from the load balancing. Now, if there is a requirement that the server requires to be scaled up or I'm
sorry, scaled out, uh, you can do this autonomously because
you have a policy set, which means that when
your CP utilization reaches 80 percentage, you're going to bring up more instances to
handle the load, and it will also scale
in because we have not enabled scale-in protection. Thank you again for watching this video. To clean this up, remove the autoscaling group, so most of the items will be gone. And then you may have to
manually delete your um, target group, load balancer. You have to manually delete
your launch template as well. So that's how you
need to roll back. Thank you again for
watching this video. I'll see you on the next one.
64. Labs EC2 Load Balancer Hand's On: Hey, guys. Welcome
back to another video. In this video, we are
going to talk about the hands on experience
on load balancers. In the previous video, we understood that the load
balancer is something which is available under the
EC two service dashboard. And under that, you're going
to have the load balancer. Now you're going to have multiple load
balancers available. So firstly, we're
going to configure application load
balancer through a separate EC2 instance. Now, let's go ahead and create an EC2 instance
for us to work with. Now, this EC two instance should have something
to, you know, something which is hosted on it, so that's easy for
us to browse it. Or to access it. Let's take HTTP is a very common example
which we normally use because that's the only
thing which you can actually browse on our website. So that is much easier. So what I'm going to do on this video is that I'm going to first launch an
instance of an EC2, and I'm going to name it something like web-server, or web-ec2-1. So this is the first instance, web-ec2-1. It is highly
recommended that you do auto scaling in this rather than doing an EC two instance launch because when you
launch instance, it basically means that you have only one server
instance of that. Not a problem, you can add another server and then you can put them part of the group. So that is also there. But when you create service
manually like this, there is a good chance that someone else might
delete it, even though you have delete protection and all those things; an individual server like this does not come under load balancing or high availability. So that is something
you should consider. But in this example,
we will do that. On the next example,
we will talk about autoscaling and adding the
load balancer to that. We will look at that perspective as so we will choose
Amazon inuxie. We will choose D to Micro. We will choose the default key, which I have over here. We will choose an existing
security group which we have been already using
called the default one, which allows the traffic
connection from anywhere. So when clicon
compare group rule, we can see that it is allowing everyone to connect
to the server, I'm sorry, connect to your infrastructure wherever the security group
is assigned to. So I just hit Cancel. So now the security group is something which we're
going to access everywhere. Then here I'm going to
configure story ten GB. It's not needed that
much of a space. HGB works fine, but
I'm going to do that. Now, do not launch the instance. As we are going to do this. There are two ways
There are two ways of doing the next part. You can launch the instance, connect to it, and install and configure httpd manually on the instance. But rather than doing that, we'll handle it right here: click on Advanced details, scroll down, and you'll find a field called user data (optional). Here you type a bash script: it starts with #!/bin/bash, then yum update -y for a general update, then yum install -y httpd, which is the web server we're installing. Then an echo line writes a message like 'This is EC2 instance HTML page' (you can wrap it in h1 tags to make it a heading) into /var/www/html/index.html, which is the default index page httpd serves. Finally, systemctl start httpd and systemctl enable httpd start the service and enable it so it comes back up whenever the system restarts. You can take a screenshot of it; the full script is also shown below for reference. It's a very simple set of commands, nothing complicated: update yum, install httpd, write that one line over the default index.html, and start and enable httpd. That's it. Now just click on Launch instance. When you launch it this way you don't have to connect to the instance and install httpd yourself, because these commands are executed automatically as part of the instance creation.
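Here is the user data script as I typed it, so you can copy it rather than transcribing from the screen. A minimal sketch; the echo text is whatever message you want your page to show (mine ends up saying 'Hello from EC2 instance').

    #!/bin/bash
    # update packages and install the Apache web server
    yum update -y
    yum install -y httpd
    # write a simple page to the default document root
    echo "<h1>Hello from EC2 instance</h1>" > /var/www/html/index.html
    # start httpd now and have it start again on every reboot
    systemctl start httpd
    systemctl enable httpd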
After launching the instance, go to the running instances, refresh, select the instance, and copy its public IP address. Paste it in the browser with http:// in front, and you should see the page we just wrote. It may take a little time, because the instance has to run the yum update and install the packages first, but you should see it: Hello from EC2 instance. (I originally typed a slightly different message and then replaced it while copying, so the final wording is 'Hello from EC2 instance'.) A hard refresh shows the same page coming up on this IP address. Perfect.
Now that we have completed the configuration of the instance, let's configure the load balancer. We're going to choose the load balancer type meant for applications, which is the application load balancer, and I'll name it web-alb. You need to make sure you choose Internet-facing, or else you will not be able to browse it: you won't get a usable DNS address and you won't be able to reach it from your desktop. Also make sure IPv4 is selected. Dual stack means you get both IPv4 and IPv6, and dual stack without public IPv4 means you only get IPv6; we'll go with IPv4 for now. Sorry about the background noise. Next is the VPC configuration. In this VPC I have all these availability zones, and it is mandatory to select at least two availability zones, with one subnet each. You can also select all of the zones, that's fine; your servers in those zones can then sit behind it, and your load balancer will span all of those zones. I'm going to go with four zones here. The security group is the same one we selected before, so the default one is selected.
Then here is the listener configuration, which is the important part. Say your application works on HTTP or on HTTPS. If it works on both, that's no problem: you click Add listener, configure HTTPS on port 443, and select a target group for it. But my application currently only runs on HTTP, so I don't want to add an HTTPS listener. What we do have to do is select something called a target group here. You will not see any target group yet. (I'm seeing one only because of an earlier test; let me remove that target group... it's removed. Let me refresh. Got it.)
So what is a target group, and why do we need to create one? We are creating the load balancer, but in the configuration we've done so far there is no way of specifying the destination. We've said the load balancer should be Internet-facing, so people from the Internet can browse it, and we've said its network configuration spans four availability zones in this VPC. But we haven't said where the requests need to be routed. In our case, requests need to be routed to our instance, and currently there's only one running instance, web-ec2-1. How do I target that from the load balancer side? That's what a target group is for. When you click Create target group, you'll see what it can target. It can target instances: you can select multiple instances and make them part of the load balancer. Managing instances by hand like this isn't the recommended long-term approach, but it's the primary target type to understand, and when you configure a load balancer it's the first thing you need to set up. You can configure it with instances now and switch to auto scaling or Lambda functions later. The instance target type supports load balancing of instances within a VPC and also integrates with EC2 Auto Scaling; we'll talk about how to attach auto scaling to the load balancer at a later point.
Instead of instances you can choose IP addresses, which lets the load balancer target resources in a VPC or on premises and route to multiple IP addresses and network interfaces. You can choose Lambda functions. Or you can choose an application load balancer as the target: when you're building a network load balancer, its target group can point at an application load balancer, and you'd give the ALB details there. We haven't created our application load balancer yet (I haven't hit the Create load balancer button), so we'll go with the instances option. I'm going to name it web-tg, as in target group, and my target group listens on port 80, which is where the application running on the instance is served. Instead of a generic TCP setting you can pick HTTP or HTTPS directly; if you had HTTPS enabled on your application you'd select HTTPS, but in this plain setup we have only HTTP enabled, not HTTPS. Below that you can see the default VPC, the HTTP protocol, and some attributes such as the health check protocol (HTTP) and the health check path, which is the root of the site. Leave the default options and click Next. This is where you select and register your targets.
As of now, the only available instance in the running state is the one where we hosted our HTTP web server. Select it and click 'Include as pending below'. When you do that, the available instance becomes a registered target. Once that's done, click Create target group. After the target group is created you'll see the total targets as one, currently showing as unused/initial, because right now it isn't mapped under any load balancer; we'll do that next. Now that the target group exists, close this tab and go back to the previous screen where you have to select the target group. Click refresh, select the target group you just created, and then go ahead and click Create load balancer.
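If you ever want to script what we just clicked through, the same flow maps to a handful of CLI calls: create the target group, register the instance, create the load balancer, and add a listener that forwards to the target group. This is only a sketch; the VPC ID, subnet IDs, security group ID, and instance ID below are placeholders, and $TG_ARN / $ALB_ARN stand for the ARNs returned by the create calls.

    # target group for HTTP on port 80, targeting instances in the default VPC
    aws elbv2 create-target-group --name web-tg --protocol HTTP --port 80 \
      --vpc-id vpc-0123456789abcdef0 --target-type instance

    # register the web server instance as a target
    aws elbv2 register-targets --target-group-arn "$TG_ARN" --targets Id=i-0123456789abcdef0

    # internet-facing application load balancer across at least two subnets
    aws elbv2 create-load-balancer --name web-alb --scheme internet-facing \
      --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-0123456789abcdef0

    # HTTP:80 listener that forwards everything to the target group
    aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" --protocol HTTP --port 80 \
      --default-actions Type=forward,TargetGroupArn="$TG_ARN"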
There are some advanced options on the target group which we'll talk about later, when the need arises. For now you can see that the listener and its rule say HTTP on port 80 is forwarded to your web target group, and the target group contains the instances that are part of it. Sticky sessions (stickiness) means that if a request gets routed to a specific server instance, the load balancer makes sure that stickiness holds: every time that browser cookie comes back, as that user keeps refreshing, they end up on the same server that processed the first request. That's called a sticky session, and it's enabled at the target group level. It's disabled right now, and that's okay.
Now you can see 100%, which tells you that the targets configured within this target group are all healthy. There's one rule, and if you click on it you can see the rule information, i.e. how the routing happens. One important thing I want to show you on the load balancer is the network mapping: it shows all the availability zones and the subnets in those zones that the load balancer is mapped to. Another important thing is the resource map, which shows the path clearly. When someone accesses the DNS name, they hit the HTTP:80 listener, which has a rule that forwards to the target group; the target group is web-tg, it has one target (nothing unhealthy), and that target is the EC2 instance we registered. It gives you the complete hierarchy of how a request is passed down the line. The monitoring tab won't have data yet, but it reports the load balancer's performance: how many 5XX errors, 4XX errors, and so on it has served or received. You can also integrate it with other AWS services, which incurs additional cost, such as WAF, Config, or Global Accelerator; we're not here to configure those. Under security groups we have the same default security group we created in the previous videos. That pretty much sums up the configuration. Let's go ahead and browse it: copy the DNS name and use it in place of the IP address. If the browser jumps to HTTPS when you hit Enter, make sure you switch it back to HTTP. Do a hard refresh and you can see the instance's page coming up through the load balancer.
That's pretty much the basic setup. Now let's create another instance: web-ec2-2. In this instance we'll leave everything at the defaults, copy and paste the same user data, and just change the message to say that this is the second server. Select the key pair and launch. Now that we've added a new instance alongside the existing server, let's wait for it to initialize. Once it's initialized we'll browse this instance individually and confirm it works on its own. It's still coming up; it takes a little while for yum to update and the components to install, so give it a bit of time. I'll pause the video until it's done; I don't want to hold you. Well, it turned out it wasn't working because of the security group configuration, so I'm just going to change the security group to the one we normally use: go to Security, Change security group, remove the one that's there, select the default one, and add it. That should fix the problem, and yes, there it is.
Now the second instance is serving its own page: when you browse the first instance you get its page, and the second server comes up with the other one. At this point we're browsing them individually, directly on their IP addresses. Now, to put the second instance behind the load balancer, I don't have to modify the load balancer at all. Go to the target group, select it, and click Register targets. Select the second instance (the first one is already registered), click 'Include as pending below', and then Register pending targets. Now you have two targets. This is a good example for showing how sticky sessions work. By default stickiness is disabled, which means if I refresh I get the second server, and if I refresh again I get the first server. With sticky sessions disabled, every refresh can send your request to a different server; that's the round robin method, which is what an ALB uses by default. Your requests go to each server in turn: that's the default method by which requests are routed.
Here you can see the monitoring has started working. Two metrics have come in: the number of requests being received and the response time. A few error metrics appeared too, like the 4XX count; a 4XX could be a 403 permission denied or a 404 file not found, something like that. All of those errors are displayed here, telling you the volume of exceptions coming through your load balancer, so it's a very useful way of identifying problems. In terms of the target group, it holds all the information about how your requests are routed. Right now this availability zone has two targets, and you can see that from here. You can also see monitoring for these targets: how many hosts you've added to the target group and their health. We added two, both are healthy, which is why the healthy host count is up; if a host isn't healthy, that count comes down, and there are no unhealthy hosts here. You can see the healthy host average, the target response time, the request count and so on from that area as well. Likewise, you can pull all those details from there.
Now, in the target group you have the health check settings. They say that two consecutive health check failures mark a target unhealthy; it then gets marked unhealthy and stops receiving traffic from your load balancer. Five consecutive successful health checks bring it back: once a host has gone to the unhealthy state, it needs to pass five health checks to return to the healthy state. The interval is 30 seconds, the timeout is 5 seconds, and the success code is 200, which in HTTP terms means success.
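These thresholds are just target group settings, so if you ever need to tune them outside the console, a CLI call like the following does it. A sketch, assuming $TG_ARN holds the ARN of the target group we created:

    # keep the console defaults: 30s interval, 5s timeout, 2 failures = unhealthy, 5 passes = healthy, expect HTTP 200
    aws elbv2 modify-target-group \
      --target-group-arn "$TG_ARN" \
      --health-check-interval-seconds 30 \
      --health-check-timeout-seconds 5 \
      --unhealthy-threshold-count 2 \
      --healthy-threshold-count 5 \
      --matcher HttpCode=200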
Then there are some attributes about stickiness; this is where you can enable it. It's also where you can change the routing algorithm from round robin to another mode: click Edit and you'll see least outstanding requests, weighted random, and round robin, which is the default. Below that is the stickiness option, so select it if you want stickiness; I've just enabled it. The stickiness duration is one day; you can make it up to seven days, or bring it down to seconds or minutes, but one day is a common choice for a sticky session. Then you choose how the sticky session is tracked: a load balancer generated cookie or an application-based cookie. Application-based cookie means your application issues a cookie with a unique name, and the ALB tracks that cookie name and holds the stickiness on it. Load balancer generated means the ALB automatically generates a session cookie itself, and that session ID is what gets tracked.
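Stickiness and the routing algorithm are both just target group attributes, so the same change can be made from the CLI. A rough sketch, again using $TG_ARN for the target group ARN:

    # one-day (86400 s) stickiness using a load-balancer-generated cookie, keeping round robin routing
    aws elbv2 modify-target-group-attributes \
      --target-group-arn "$TG_ARN" \
      --attributes Key=stickiness.enabled,Value=true \
                   Key=stickiness.type,Value=lb_cookie \
                   Key=stickiness.lb_cookie.duration_seconds,Value=86400 \
                   Key=load_balancing.algorithm.type,Value=round_robin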
Now let's save the changes and look at our website. When I refresh now, I stay on the same server instance; no matter how many times I refresh, I'm stuck with the second server. That is what a sticky session means. It took a little time for the change to take effect, but once it came through you can see that even with a hard refresh (I'm holding Ctrl and F5 together) it never goes back to the first server; it keeps sticking to the second one. That's a sticky session. I've now shown you all of these things about the load balancer. In the next video we'll do the same load balancer configuration through an auto scaling group, and we'll see what the benefit of that is. Thank you again for watching this video. I'll see you on the next one.
65. Labs ECS Removal Hands On: Hey, welcome back to the next video. Removing this setup can be a little fiddly because of the work we've done, so I'm going to show you how to remove the ECS components. Select the cluster and let's go one by one. We need to start with the innermost item, the deployments: stop the deployment, stop and remove the tasks, then delete the service, and finally the cluster. In other words, we go in reverse. When we built it we went cluster, task definition, service, deployment, so now we go backwards. Go to the service, then to Deployments, and you'll see a deployment running (Deployment 7 for me). Click Update service and make sure the desired tasks count is zero; this ensures no tasks keep running. It's effectively the same as stopping all the tasks, but if you simply stop a task (there's an option for that on the task), it will automatically start again, so the first step is to stop the deployment from spawning any tasks. Once the desired count is zero, you can see nothing is deployed anymore. Then go to Tasks and confirm there are no tasks; that's good. Next go to Task definitions, select each revision, and use Actions, Deregister; you need to deregister all of them. Once they're all gone, go back to the cluster. Select the service under the cluster and delete it. If it lets you delete it, you're in the clear: we deregistered, we undeployed, and we deleted the service. The tasks should already be gone automatically, and yes, they are. Next is the cluster itself: select the cluster, choose Delete cluster, type the confirmation text it asks for (you can paste it and fix the capitalization), and hit the delete button; that should delete your cluster.
load balancing session, we will create it again. But that time we will do it
manually because we don't want that server instance to
be running until that time. So it may take a
little bit of time to delete the PCS cluster. Once it is done, then we will
remove the name space as well because namespace is very important for us to remove. I'm going to pause this video. Well, deleting the
cluster means it will remove your EC two instance as well because your auto
skilling would be removed as part of the
deletion of the cluster, so you can see there's no
auto skiing group anymore. So that's basically
deletion of cluster. You will have a link over here, which tells you to redirect
to your cloud formation. So now you can see that
it's deleted successfully. It took about 5 minutes, I think 2 minutes for me. A 5 minutes. So now I
think this should be gone. Yeah, there you go. Name space, select the name space, um, and you have the name over here. Well, the name space comes
under cloud mapping, so it's basically the name space on the
name of the name space. Then just hit the t button. This may take about
a couple of minutes for it to complete as pll so you can see
that spinning off. Leave it as it is
just close the tab, it will date and it's not really important for us to have
that removed immediately. So let it take its
time to get that done. Pretty much cleared up all those things which
we just created. We don't have cluster,
we don't have task, we don't have anything
which we just did. We don't have the
autoskiing group or the instance
available for us. Thank you again for
watching this video. I'll see you in the next one.
66. Labs ECS Setup Part 1 Hands On: Hi, guys, welcome back to the next video. In this video we are going to talk about ECS, the next topic in understanding the AWS compute services. We've already looked at EC2 instances in plenty of places, so we won't repeat that; instead we're going to create an ECS cluster and work with it. ECS comes with two options for capacity: Fargate or EC2 instances. In this particular video we're going to use ECS with EC2 instances rather than Fargate. Further down in this section we'll talk about ECS using Fargate, which is basically a serverless compute option. All right, let's go down the path of ECS on EC2. Go ahead, search for the ECS service, and middle-click it to open it in a new tab. Do remember this is a multi-part configuration: we're going to configure multiple items, and at the end we'll have our cluster up and running. The first thing in this video is the cluster itself, so click Create cluster; having a cluster is how you interact with the ECS service. A cluster gives you the infrastructure on which containers run, but we won't be creating a container while we create the cluster. The first step is to create that infrastructure, the cluster, where we specify how many instances should run in it. Then we'll create a task definition, where we specify what kind of container we want to run on this cluster. Then we'll configure a service under the cluster, tie the task definition to it, have the service run that task, and the service inside the cluster will start bringing up containers. Let's click Create cluster.
The first thing is the cluster name: I'm going to call it http-ecs-cluster. That tells me it's for HTTP (the nginx application serving HTTP requests), ECS because it's the container service, and cluster because that's what it is; it's just a naming convention. When you create a cluster you also get a namespace allocated with it, so you'll see a namespace assigned using that name. For infrastructure we're choosing EC2 instances rather than Fargate. With Fargate you don't have to specify any EC2 server instances at all: it automatically gives you serverless compute with essentially zero maintenance. With EC2 instances there is maintenance involved, because actual EC2 instances get created for you. Sorry about the background noise. Next you have the ASG, which is an auto scaling group. We'll cover auto scaling groups in much more detail later; here's just an overview. We define the minimum and maximum capacity: the minimum is the number of servers that will always be maintained, so the group will never scale in below it, and it will scale out up to the maximum. We'll keep the minimum at one and the maximum at five, which ensures the group creates at most five servers and always keeps at least one running.
Now, there are two options for the provisioning model of your auto scaling group: on-demand and spot instances. These are the purchasing options we've already discussed. On-demand is pay as you go, with no long-term commitment or upfront cost; it's the same thing you get when you launch an instance from the EC2 console. That's what we're doing here, because this workload serves web content and needs to be running all the time. Spot instances, on the other hand, can save you up to 90 percent through the discount applied, but the catch is that they use spare EC2 capacity. If overall utilization goes up and there is no unused capacity, your spot instances can be reclaimed, so there's a risk of having no servers available. We want to avoid that risk, since we need at least some capacity running at all times, so we'll go with on-demand in this case.
The next setting is the container instance AMI and instance type. There are two terms we'll keep seeing: the container and the container instance. The container instance is the EC2 machine on which the containers will be hosted, and this is where you choose its instance type. I tried this with t2.micro and it was very slow, so I'm going to go with t2.medium, which has 2 vCPUs and 4 GB of RAM. For the instance role I'm going to use 'Create new role'; if your console defaults to creating a new role, just go with that. Mine shows an existing role only because I've already tried this before, so a role with that name exists; creating a new role is fine. For the key pair, just use the default key pair you have. I'm choosing one of the key pairs I already have (I also have the one from the Docker videos); whichever key pair we've been using so far is fine. For network settings, use the default VPC and all the available subnets, which means all availability zones: the instances can be created across any of these five availability zones. For the security group I've just taken the existing default security group. If you're not sure about it, go to EC2, open Security Groups, and search for it; you'll see it's the all-traffic security group, where all incoming traffic is accepted by default.
Now that we have completed that part, the auto-assign public IP setting can stay on 'use subnet setting'; we don't need anything specific there. Monitoring is optional, and I'm not turning it on because of the charges it can incur. Encryption is also optional, for the managed storage and for Fargate ephemeral storage, and we're not using Fargate right now. If you did want encryption, you'd create a KMS key and choose it here so the data is encrypted with that key, but we don't need it. If you have any tags to apply to this cluster, add them here. Then click on Create.
Now the cluster is being created. Cluster creation may take about five to ten minutes; for me it's still in progress. You'll notice that the cluster creation, the task definition, the service creation, everything gets done through CloudFormation. CloudFormation is basically the stepping stone for every bit of ECS configuration, because ECS is a set of small components that have to run together, and the way they're wired together is a CloudFormation stack; you'll see CloudFormation coming up almost everywhere here. Once the cluster is created, click on it and you can see some details in the overview; you'll get much more once the creation finishes. You'll see services, tasks, and infrastructure. Under infrastructure it needs to provision that one server, the container instance, and it's still provisioning it; it will show up there once that's done. Then the metrics will appear, and scheduled tasks as well.
Now, a task is how containers actually get created here; what we define is a task definition, and we'll do that in a moment. A task definition is basically a blueprint of your application: a text file in JSON format that describes the parameters the container needs. It can specify which launch type to use, which Docker image to run in the container, how much CPU and memory should be assigned to each container (with both a soft limit and a hard limit for memory), and which operating system it should run on. It also covers the logging configuration, so you can send logs somewhere and understand what's going on, the environment variables the container needs, and, if you have multiple containers, the startup order and dependencies between them, i.e. which one starts first. All of that gets defined in the task definition. The task is then run through a service: the service calls for those tasks and executes them.
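To make the blueprint idea concrete, here is roughly what a minimal task definition for the nginx container we're about to set up looks like if you register it from the CLI instead of the console. This is only a sketch, not the exact JSON the console generates: the family name http-task mirrors what we use later, I'm using bridge networking (which is where the demo ends up in the next video; the console wizard defaults to awsvpc), and the image URI is the one you copy from the ECR Public Gallery, so double-check the current tag.

    # register a minimal task definition (the console builds an equivalent JSON for you)
    cat > http-task.json <<'EOF'
    {
      "family": "http-task",
      "requiresCompatibilities": ["EC2"],
      "networkMode": "bridge",
      "containerDefinitions": [
        {
          "name": "nginx",
          "image": "public.ecr.aws/nginx/nginx:stable-perl",
          "essential": true,
          "cpu": 1024,
          "memory": 1024,
          "memoryReservation": 512,
          "portMappings": [{ "containerPort": 80, "hostPort": 80, "protocol": "tcp" }]
        }
      ]
    }
    EOF
    aws ecs register-task-definition --cli-input-json file://http-task.json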
So what we've done so far is create the infrastructure for your containers to run on. You can now see that the container instance has been created, and here is its instance ID. It is a normal EC2 instance: if you go to the EC2 console you'll see this instance running right now. I have some previously tested ones in my account, but this one is the instance created for this particular example; if you copy the instance ID and search for it there, you can see it matches. The deployment is complete, which is why it's showing up, and the CloudFormation stack says create complete as well. So CloudFormation is done, your container instance is created, and you're ready to create a task definition. Click on Task definitions and then Create new task definition.
Here I'm creating a task definition for my HTTP service; I'll name it http-task. You have to select the same launch type as the cluster, which is EC2 instances rather than Fargate. (If you select Fargate here, you should have created a Fargate cluster too, otherwise it gets confusing, so it's better to keep them the same.) This section is the infrastructure requirement for your task, i.e. what the container needs; we're no longer talking about instance family or instance size, we're talking about the resources allocated to the container. Here the network mode is awsvpc, the operating system architecture is Linux x86_64, and I'm going with 1 vCPU and 1 GB of RAM. My t2.medium container instance has a few GB of memory available, so one vCPU and one GB for the task is fine. You don't have to select anything for the task role or the task execution role; you can ignore those, and the execution role will just say 'create new role', which you can leave as it is. The next part is the essential container. What you've specified so far is the task's requirement: to run this particular task on a cluster, that cluster needs to offer one vCPU, one GB of memory, and Linux x86_64. Once you've filled that in, it gets validated against your current cluster configuration; our cluster has a few GB of memory and we've only asked for one, so it will report that the requirement is satisfied.
The second part is the container itself: what you're actually going to run. First give it a name; I'll call it nginx, since we're running nginx for the HTTP service. Then you have to give the full image URI, which comes from the Elastic Container Registry (ECR); if you click through, it opens in a new tab. To be precise it's a registry, not a repository: the registry contains all the repositories, both public and private. What does that mean for us? If you wanted to use your own nginx image here, you could push it to a private or public repository of yours and paste that URL, but we don't have one yet, so we'll take the official nginx URI from the ECR Public Gallery. Open the menu to see the full set of options under ECR: 'Repositories' is your private registry, and there's a separate section for your public registry, but to browse all the public images already available, open the public gallery and search for nginx. You'll see nginx come up from a verified account, published by nginx itself, with over a billion downloads. Click it and you'll find the description, the usage terms and conditions, and the image tags; stable-perl is the current stable tag, and the URI is shown right there. Click Copy, paste that URI into the Image URI field, and mark this as an essential container. Essential means it's a mandatory container, the main one for the task. You can also add other containers by clicking 'Add container', and then both containers would be created as part of this task.
Now that the container is defined, I'm going to add a port mapping for port 80, TCP, with the HTTP protocol, so that port is mapped as well. You can make the root file system read-only if you want; that's up to you. The container gets one vCPU, a hard memory limit of 1 GB and a soft limit of 0.5 GB. The hard limit means the maximum this particular container can ever use is 1 GB, and 0.5 GB is the soft reservation for the same container. You can add environment variables as key-value pairs. Logging via CloudWatch can be enabled; I'm disabling it because it costs extra. You can enable automatic restart on particular exit codes and set the restart period, and you can configure a health check using a command or shell script. You can set startup dependencies if you have more than one container, plus container start and stop timeouts. There are network settings and Docker configuration options, for example the command to run when someone starts the container, the working directory, and the entry point. Resource limits (ulimits) are at their defaults but can be added, and you can attach Docker labels to the container as well. Storage and monitoring stay at their defaults, and you can add tags if you want. Click on Create.
That should create the task definition (not a container yet). Once it's created you can see that it uses the nginx image from that repository, with the soft and hard limits we gave it: the container will use at most one vCPU and one GB of the total RAM on the container instance. That's pretty much everything we wanted to configure, and those are all the items. Note that this task isn't executed by itself; you need a service to execute it. The task definition is created: for me it shows revision 4, for you it will say revision 1, because this isn't the first time I'm doing this. You also have the namespace that was created automatically, and you have the cluster. Go inside the cluster, open the Services tab, and click Create; this will create a service for us. The service creation screen has a set of options that matter for us, so let's go through them one by one.
So we're here to create a service. Let me just reiterate the path: go to the cluster, select the cluster name, and at the bottom the first tab is Services; click Create service. Once you're in the form, the cluster field is locked, because it's going to be this cluster, and the compute type shows EC2 because your cluster is configured on EC2 instances. There are two options for the compute configuration: a capacity provider strategy, or selecting a launch type. If you select launch type, you'd go with EC2, because that's the deployment method of your cluster. If you select a capacity provider strategy, you're choosing how your tasks get capacity: you can have multiple capacity providers, but by default you have just one, the capacity provider that belongs to this cluster (the component inside your cluster). You could also combine that with Fargate and Fargate Spot capacity providers and set priorities between them, but we're not going to do that right now; we'll go with the default at this point.
Then the deployment configuration asks whether you want to deploy this as a task or as a service. Running it as a task is mainly meant for batch jobs, not for a web application like the one we're building, because a task runs, terminates, and only comes back when it's run again on a schedule; that's batch-style behaviour. A service, on the other hand, is a process that keeps running indefinitely until you stop it, so we'll choose service. Select the http-task family, with the latest revision. Then you need to give the service a name, which is mandatory; I'm calling it http-service. For the scheduling strategy I'm using replicas, which places and maintains a desired number of tasks across your cluster, and I'm giving it a desired count of one; that's the only task it needs to keep running. For the deployment options, a rolling update is the default, with 100 percent as the minimum running tasks and 200 percent as the maximum. Roughly speaking, that spread controls how a rolling update replaces tasks: the service keeps the desired number healthy while bringing up replacements, instead of tearing everything down and starting all the new tasks at once, which would spike the load and give a failing task no chance to stop the rollout. That's why we keep this variation rather than pinning both values to the same number. There's also deployment failure detection, which rolls back to the previous version if you're updating an existing service and the new deployment fails. Then there are options for service discovery and Service Connect, and we're using the same security group as before. Load balancing options are listed here too, but we're not using a load balancer for this one; auto scaling, volumes, and tagging are all left as they are. That's pretty much it, so click on Create; this should create the service.
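For reference, the equivalent service creation from the CLI looks roughly like this; a sketch reusing the demo's names, so adjust the cluster, service, and task definition names to yours.

    # create a long-running service that keeps one copy of the task running on the EC2 capacity
    aws ecs create-service \
      --cluster http-ecs-cluster \
      --service-name http-service \
      --task-definition http-task \
      --desired-count 1 \
      --launch-type EC2 \
      --deployment-configuration minimumHealthyPercent=100,maximumPercent=200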
It may take some time, because it again goes through CloudFormation, which starts executing the steps to create the service for your application. Expect another five to ten minutes to get everything sorted and into a running state. At first you'll see no progress; it hasn't started running the task, and if you click on the task you can see it is still in the pending state. Slowly but surely it will start executing the task, which is the task definition you created: it builds the HTTP nginx container from that repository. While it's pending, give it time. Once the task is running, click on it and you'll see the container instance it landed on; you've got a container created, and it's in the running state (it may still show as provisioning briefly while the status catches up). Click on the container instance and you'll see the instance details: its public IP address, its resources and networking, its tasks and attributes, all coming up here. Let's go back to the cluster; it may take a while for the status to reflect, but the task is up. The CloudFormation stack is still showing creation in progress, so let's give it a couple more minutes to settle into a proper status. I'll pause the video until it's completed.
All right, guys, I want to end this video on the note that we have successfully deployed it. If you try to browse the nginx website right now, using the instance you see in the EC2 console, it won't work for you yet; there are a couple of fixes I need to make to get that working, and I'll do them in the next video. If you're interested in continuing this ECS journey and getting it fully working, follow the next video and we'll get it done. Do remember, though, that this hands-on itself is not part of your exam; I'm giving you these extra bits and pieces out of my own interest, and the exam topics have already been covered. But if you want to go beyond that, this video and the next one will really help you shine on ECS. So again: if you try to browse it now it won't work, and it only works for me here because I've already fixed it; this page won't come up for you yet. You also won't be able to connect to the server: if you right-click and try to connect, it won't work, whereas it connects for me because of the same fix. I'll show you how I fixed it in the next video. Thank you again for watching, because this one has been going on for a while now; I want to give the next video the chance to fix it and show you how. Thanks for your time and patience. I'll see you on the next one.
67. Labs ECS Setup Part 2 Hands On: Hey, guys, continuing on from the previous video, I'm going to fix the issues one by one. First, you need to connect to the EC2 instance, either through MobaXterm or by right-clicking and using the Connect option. At the moment neither of them works, and that's because of the security group, the default one. If it's already working for you, great; if not, just follow this fix. Go to the security group, either from the instance or from the auto scaling configuration (both end up at the same security group, since both use the same auto scaling setup), and middle-click it to open it in a new tab. Edit the inbound rules, select whatever is there and delete it (don't worry about the rule that references the security group itself), then click Add rule, select All TCP, set the source to Anywhere-IPv4, and save the rule. That should open up the connection to the EC2 instance, and you should be able to access it. Then create a new session in MobaXterm: choose SSH, put in the public IP address of the ECS server instance, go to advanced settings and select 'use private key' (I'm using the .ppk file from the Docker videos), set the username to ec2-user, and you should be connected to the server. You can also use the right-click Connect option in the console; for me that browser-based connection isn't working for some reason, but it may well work for you. The SSH session is working for me, and that's enough for me to work with. Now, how did I fix the website itself? I'll come to that now.
First, close all the tabs that aren't relevant right now; you don't need the instances view for this. All you have to do is go back to the cluster. The configuration we did on the task definition is what's causing the problem, so I'm going to redefine it, and I'll show you how to do that by creating a new revision rather than deleting everything and starting over. Select your task definition; you will have only one (I have several from earlier attempts, so ignore those). If yours is called something like http-task revision 1, select that; for me the latest is revision 6. Then click Create new revision. The new revision is based on the old one, so there's no need to change anything except one thing: change the network mode from awsvpc to default. That's the modification we need. Also make sure the host port says 80 and the container port says 80, which means this container will use host port 80 on the instance and port 80 inside the container. That's pretty much the configuration; nothing else needs to be changed. Click on Create.
Now you should have a new revision of your HTTP task; for me it's revision 7 (for you it will likely be 2). With the new revision in place, go back to the cluster, select the cluster, select the service, and inside the service open Deployments. You'll see it's currently running the old revision (revision 1 for you, 6 for me). That's fine; click Update service, and in the update form select the latest revision, 7 for me, 2 for you. The desired task count stays at one, and every other setting stays the same, because all we changed was the network mode and the port mapping. Click Update. It will now redeploy: you can see the old deployment is active with one task running and one desired, and as you refresh, the new deployment moves towards one desired and one running, climbs to 100 percent, the old one drains to zero percent, and the new one becomes the active deployment. That's what we're looking for. You can see the service is provisioning and starting the new task; once it's in the running state the new deployment reaches 100 percent and the old one goes to zero. It may take a couple of minutes for the whole process to finish, and while the deployment is in progress it means your application is being deployed.
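If you prefer the CLI for this kind of roll-forward, the same two steps, register a corrected revision and point the service at it, look roughly like this. A sketch using the demo's names; update-service without an explicit revision number picks up the newest one.

    # register a new revision with the corrected settings (see the earlier JSON sketch;
    # the key changes are the default/bridge network mode and host port 80)
    aws ecs register-task-definition --cli-input-json file://http-task.json

    # point the service at the latest revision and let the rolling update swap the task
    aws ecs update-service \
      --cluster http-ecs-cluster \
      --service http-service \
      --task-definition http-task \
      --desired-count 1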
When we configure load balancing later, we will come back to this again: we'll configure another EC2 instance, run multiple instances of your application, and so have multiple containers and multiple tasks running. For now I'm just waiting for the deployment to complete; I'll pause the video and resume once it's done. All right, now it's completed: the new deployment is primary at 100 percent and the old one has gone to zero, which means it has migrated to the latest revision (7 for me; yours will be whatever number you just created). Once that happens, your deployment is essentially finished. You can see the success message, and if you go back to Tasks you should see a new task that started running less than a minute ago; the old one will slowly shut down. Over on the container instance, if I run docker ps -a I can see all the exited containers plus the one that's been up for around twelve minutes now. Adding --no-trunc stops Docker truncating the output, so I can see the full command and the port bindings, if any IP address and port have been published. I also want to see whether a new virtual machine was created, so let's check that; let's go to EC2.
Yes, you can see there is another virtual machine created: the auto scaling group has brought up another instance, and the new task could well be running on that one. So I'll just duplicate my MobaXterm session, edit it, and change the IP address (and update my bookmark with the new IP as well). Now let's look at this instance: docker ps -a with --no-trunc. At first it looked like I was still on the old instance, which was confusing; it turned out to be a limit on the number of sessions I could have open, and after reconnecting properly I can see the new container on this host. With the untruncated output you can see it is listening on port 80, and if I run netstat -an and grep for 'listen', I can see port 80 in the listening state on this machine. Note that the container instance itself doesn't have any application installed on port 80; it's the container that runs on port 80, and that port is simply published through your EC2 machine. Which means I should be able to browse the application using this instance's public IP address, and there it is.
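For anyone following along on their own instance, these are the kinds of checks I'm running over SSH to confirm the container is up and the port is published; output will obviously differ on your machine, and if netstat isn't installed on your AMI, ss -lntp shows the same information.

    # list all containers, including exited ones, without truncating the command/ports columns
    docker ps -a --no-trunc

    # confirm something on the host is listening on port 80
    netstat -an | grep -i listen | grep ':80'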
That's how you browse the application: you need to know which container instance the Docker container is actually running on, and hit it there. That's pretty much what I wanted to show you, and along the way I've also shown you a deployment, i.e. how to roll out a newer revision of your task, and with it your container and your application. One more thing I noticed: after waiting a while, the first instance was shut down. Auto scaling terminated the first instance, while the second one, where the Docker image is running, kept going, so the URL still works fine without any issue. The other instance was terminated because the auto scaling group brought the size back down from two to one. That's probably also why the new task wasn't deployed on the original node: the scheduler knew that node was going to be taken down, which could be why it created a new node and deployed onto it instead. Either way, that's the auto scaling group's decision to make. So this is how you get your ECS setup and your application working, and how you test that it works. Thank you again for watching this video; see you in the next one.
68. Labs EKS Pods And Deployment Hand's On: Hey, guys, welcome back
to the next video. In this video, we are going to work with AWS and, um, the AWS Kubernetes service, and we are going to create an NGINX pod. But before we create the NGINX pod, we need to first connect to the existing EKS cluster from the command line. There are two methods. One is to open the AWS CLI and configure AWS using the security credentials; I've already shown that as part of the previous video, and that is one method of doing it. The faster method
of doing it is basically go into your console and opening the CloudShell. Now Cloud Shell is one of the easiest method
of working with AWS CLI and all these uberntis because Cloud Shell
will automatically has the Kubernetes and
Cube CTL commands already in build so um, thinking about that, you'll
be able to work with this. All I have to do right now
is open this in a new tab. By default, Cloud Shell comes
with a smaller font size. So if you go to the
setting over here, you can change it
to large font size and light or dark mode,
you can change it. Currently, I'm in
the largest size. Please bear with me if you're
not able to see it clearly. I'll type the command, which is first command
you need to type. You don't have to configure any of the credentials, because you are already in your account, and from your account you're opening CloudShell. So now all I have to do is run aws eks, give hyphen hyphen region and then say which region I'm in (I'm in us-east-1), and then give update-kubeconfig with hyphen hyphen name, and then I give the name of the cluster. I go over here to get the name of the cluster and paste it over here. That's the name of my cluster. Now when you do that, it will add a new context. The context is created, and it knows that the EKS cluster is running with this name on your account number. This is your account number, then cluster and the cluster name, and this is us-east-1 because this is the region which you're using, and it starts with arn:aws:eks. So it has basically formed an ARN, or URI, and it has written this URI into the kubeconfig file in your home directory.
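As a minimal sketch, the CloudShell commands look like this; the region and cluster name here are assumptions, so substitute your own values:

```
# Add the EKS cluster as a new context in ~/.kube/config
aws eks update-kubeconfig --region us-east-1 --name HTTP-EKS-Cluster

# Verify the new context works by listing pods in all namespaces
kubectl get pods -A
```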
Now, kubectl get pods -A. When you do this command, all the pods it reports are coming from your Kubernetes cluster, which is over here. If you see this one, it will be the same thing; these are the pods which are showing over here. Now, let's create the nginx configuration. So let's go ahead and create a deploy file, nginx-deploy.yaml. Into this we will copy-paste the content; just take it from the notepad. Um, there's a problem with vi, so rather than vi, use cat with the greater-than symbol and then paste, press Enter and do a Ctrl+D. That will save and close the file. There's an issue with vi: basically, the formatting does not come out correctly in CloudShell. So anyway, cat is there to help us out. Now create a service file, service-deploy.yaml. Into this we will copy-paste from the service file and then Ctrl+D. Now you will have two files created. Use kubectl apply -f to deploy both these files. Okay, the nginx deployment is done, and then the service deployment is done. Now, kubectl get pods, deployments, services. Now I have two pods created in the default namespace because of the deployment. The deployment shows two out of two because we have set replicas to two here, and it has the nginx image. The service is also deployed. One is the kubernetes service, that's a default one; another one is the nginx service. Now if you see that, we have deployed this service as type LoadBalancer, which means that it has been given an external IP address automatically, and it has given the URL of the load balancer.
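Here is a minimal sketch of the two files and the commands used above; the manifest contents are an assumption that mirrors what the video describes (two replicas of the stock nginx image behind a LoadBalancer service), not the course's exact files:

```
cat > nginx-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

cat > service-deploy.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF

# Apply both files and check what was created
kubectl apply -f nginx-deploy.yaml -f service-deploy.yaml
kubectl get pods,deployments,services
```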
Now, just to make sure that this is working fine, just copy the URL, put it in the web browser, and then give it a try. It may take a few minutes for it to replicate and create the deployment on this Kubernetes cluster, so just bear with it. If you go to all namespaces, or you can go to default, you'll see the two new deployments which we just went through, in the Pods section. And if you go to Service and networking, if you go to Service, you're going to see the nginx service over here. When you click on the nginx service, you can see the load balancer URL. It's still not available, so just give it some time, and then let's try to browse it. It should work fine. You can see the endpoints over here. The endpoints look good, so it may take some time for it to deploy. That could be the reason. Or else it is looking for HTTPS; let's try HTTP. That is also one of the reasons why it's not working. Let's give it another couple of minutes and then we will try it again. I'll pause the video and resume it once it starts working. After a couple of minutes, you can see that you get this page: Welcome to nginx. I didn't do anything; it just took a bit of time because it has to create some DNS service as well. This CoreDNS service came into existence, and after that, it started to work. So this is also not created by myself; this is an autonomous service which gets triggered when you create pods. So now you have that particular URL working as well, the nginx service over here. So for the nginx service, when you click on it, make sure it is HTTP and not HTTPS, because nginx is running on HTTP port number 80, and the link by default goes to HTTPS. So if you just copy the link, paste it and then change it to HTTP, it should work fine. That's all, folks. Hopefully you understand how to create pods on your EKS cluster and how to have them executed using Kubernetes. Thank you again for
your patience and time. I will see you on another video.
69. Labs EKS Setup Hand's On: Hi, guys, welcome back to the next video. In this video, we are going to talk about the Elastic Kubernetes Service. This is one of the topics here which we are covering, EKS. So this is a hands-on session for it. Again, this hands-on session has nothing to do with your actual AWS certifications; this is just for a hands-on oriented introduction to it. Here we are going to host the nginx service on Kubernetes. Now, to host your nginx service and to run it successfully, we first need to search for EKS. And you have a separate
service for EKS over here, and then I'll click
on the EKS service. Now, do remember
that there are a couple of parts over here in this particular, um, video. The first part is that we're going to create a Kubernetes cluster. Once we have created the cluster, we are going to add nodes to it. Just like how we add nodes to an existing cluster on your own Kubernetes setup, the same way we go about adding nodes here. Now, there are two ways of adding nodes. You can use Fargate; again, Fargate is used for serverless nodes. Or you can use EC2 instances. Firstly, we're going to cover the EC2 instances, and then we're going to look at Fargate at a later point of time. To create a cluster, once you've accessed the EKS service, you have the option of clusters over here. You have another option where you have EKS Anywhere and ECS Anywhere as well. Those are basically subscription-based offerings, where the advantage is that you pay for a subscription and you get 24 by 7 support from an AWS subject matter expert (SME) on EKS. That basically is part of your package. So this is, um, a subscription-based package, and the same thing is available in ECS also, for the Elastic Container Service as well. We are not interested in looking at this, because when you know how to configure a cluster, this is pretty much the same thing. Let's go ahead and
create a cluster. Let's create a cluster
over here and we will name this HTTP EKS cluster. Now we have to create
a IAM role over here. Now we already have
this EKS Admin role. If you don't have
EKS Admin role, just click on this
console for IAM, and then here you can select, here you can select AWS service, select EKS from here and you
can select the EKS cluster. This is automatically get
selected because this link uh, will be redirected to that
specific requirements. You can see that
that's very much the actual requirement
it is directing it to. So when you click on next, you can see that
particular policy is also loaded as part
of this because this particular
selection will come with this policy and then click
on next and then give a role name over here and
then click on Create Roll. But do remember that this
role must be already here so which means that you
should get the CKS Mendroll. If not, just create
it in the same name, and you will be able
to see that over here. You just have to refresh it over here and you will see this role. So roles are important for services because as
we discussed earlier, roles is for services, which is getting access to other or accessing other
services within AWS. Here is the Kubernetes version; the current version is 1.31, and there are two types of support available for this
particular version. One is a standard support
which expires in 2025, which means that you
need to upgrade to the next version after
this particular timeline, and the one which is
called extended support. So this is available to 2026. So the gradual is 14 months and 26 months
for extended support. 14 months, you don't have
to pay anything extra. There's no additional cost. But if you go for 26 months, you have to pay extra here
where you will be actually extending the support
on the end date the cluster will be auto
upgrade to the next version. Likewise, you need to pay additional hourly cost
will be charged for this. I'll go for standard one. Cluster axis over here is basically for accessing
your cluster. Allow cluster
administration action, the default option,
so go with that. Here for cluster
authentication mode, this is for configuring
the source, the cluster which you'll use for authentication through IAM. We'll use the EKS API, which is the default
option over here. We're not going to turn on
any encryption over here, so we just leave it as disabled. Then we have the
ARC zonal shift. Now, ARC zonal
shift is basically, um, you know, it is basically what we call
application recovery controller. So the zonal shift happens, to make sure that you have availability even there is any disaster on your
particular zone. You can enable this, but I would rather
disable it because this, again, is additional charging, so I don't want to enable it at this moment because it's
for testing purpose. Why do we do this? Zone and
shift is used for setting up the cluster environment
for resilience in the availability zone
failure beforehand. In case of availlabilty zone, it will automatically shift it, but very rarely happens, so I'm going to disable that. Clicking on next option. The networking
option is over here. These are the subnets
which we're going to have our clusters created and these are the IV version force
we're going to select. Security Group, we will select the default one which we
have already modified. In the previous video, if
you worked on the ECS setup, and the second video following
up on fixing those issues, now we would have created
this default security group. So, you can use that. Or if you want, you can
create a new security group, give open, um, you know, all traffic, and then you can just select that
security group as well. Here Point access is
public and private. Click on next. The next
one is the observability. Observability ensures that you have options over here
where you can send your data of your cluster
information to prometheus. But again, anything which you select over here
is extra payable, be careful what you're going
to select because again, these are something
which is extra payable in terms of working
with these action items. I'm not going to
enable any of it. I'm going to enable the
default ones over here. CDN, CNI, Cupxy and then
EKS prod identity agent. So these are something
which is default. So if you want to
select this again, this is in charge incurring one. This is a guard duty for
EKS runtime monitoring. So don't enable anything which is extra,
just click on next. And then this is basically the version which
you need to install. So it is already chosen
the stable version, so you can ignore choosing any versions for these
extra components, so you can click on next. Now, here we are on the summary page and
here we're going to review and create.This is pretty much what we
have selected so far. Hit the Create
button. I'm expecting some exception to
come up right now. Over here, it says
HTTP EKS cluster cannot be created because EKS does not support creating
control plane instance on. Um, US East one E. It
says that in this region, you cannot create control plane. So that's very
straightforward ways. Which means that you can create it on any of this except for, um, uh, one E. So that's
what it is saying. For example, then we need to go to the network setting
and click on Edit, and then you need
to find out that E this particular vaibity zone, remove that availability zone and then go next, next, next. Then click on Create now. Once you have removed that
particular availability zone, you'll be able to create it. This is a standard procedure. If you're working with
any of the regions, one available t zone, you cannot actually create
your control plane. So do remember that that's something which is
standard for AWS. Now the cluster is
created right now, so you can see that the cluster is currently being created. This process may take about two to five minutes; it took about two minutes for me, and I'll get some details over here once the cluster is completely created. Once the cluster is created, I'm going to pause this video; once it is done, we will be doing the second leg of this video. The second leg is basically configuring the nodes, and we are also going to explore some of the concepts which are part of your EKS cluster in this console as well. I'll pause this video and resume it once it's completed.
So the cluster got created in about 10 minutes; let's get inside the cluster. This cluster has information: for example, its creation date and the version of Kubernetes, and also the provider, EKS. It also has the API server endpoint, to connect and work with the API server, as well as the OpenID Connect provider. These are methods of communication and authentication, so these are listed over here. And there's no health issue; there are some settings over here. The support setting is standard, that's what we have selected, encryption is off, and there is no
zonal shift as well. In terms of resources, this is very important: what we see on the Kubernetes command line, you can actually see over here. You can see workloads; you can see, um, pods, um, replica sets, deployments, stateful sets, and so on. You will see the nodes, uh, in this Nodes section, but you don't see anything, because we have not configured any nodes; that's the second part we're going to go to. Namespaces, and then API services, leases; likewise, you see some default stuff. So the cluster is already set up, and there are certain items, which is the pods configured in the kube-system namespace; those are the CoreDNS services. Now it is all waiting for us to create the nodes, so that the rest of the items will start populating here. To create the nodes, we add what we call a node group. When you add a node group, nodes are created automatically, because there's no option of creating nodes directly; you can only create node groups. So either you create node groups, or else you create a Fargate profile over here. We will talk about that later. Firstly, we will do the node groups, and then you will see the nodes getting populated here.
of the node group. Now I'm going to give HTDP
IP and NG as a node group. Then here you won't have any EKS policies
with you right now. I have it because
I've created it, I did a test run to make sure
that everything works fine. So you will not find
anything over here, just click on create a IM
rule, when you click on that, you will get the AWS service and then it will go to EC two, the change it to EKS so what I'll do is first let me
delete the one which I have that much better. I'm just deleting the
node policy role. I'm just going to do that. I've removed it. Now, if
you just refresh this, you shouldn't find it, it's
going to be like this. Then what you do is
create a new role and then EKS then there's
option here for node group, select that, select next
and then you will get a default name over here and
then click on Create Role. Now, it says that it's
already been taken, the name already exist. But even if you create
it at your end, you will not be able
to see that here. This is a specific
criteria here. You can actually
go to this learn more and then understand the criteria in
much detail manner. But I know the criteria, so I'm going to create a role by myself. Now, create a role. Click on Create Role and then do not select
anything over here. If you do, it will be
pre selected policies. Permissions will be applied. Click on any of this and
then click on EC two. Let's not do that. Let's
just go over here. We do it beginning. Let's click on
Create Role and then just select AWS service
and click on next. It's not allowing us, right? Hold on. All right.
Let's do this way. Just click on Create Role, select Custom Trust policy. And then we'll just
go for the next one. And then here you add
four services over here. EKS services search
for EKS policies. I'm sorry, not
services policies. So EKS policy, you
need to select EKS the one which I'm going to select is
the EKS worker node. So worker node policy,
this one over here. You can also select the
minimal policy as well. So A the one is fine. So if you select
the node policy, minimal is not needed. So A the one is fine, so I would go for
the node policy. And then you can
select the other one, which is the EKS, CNI. Sorry. Sorry about that. So CNI policy over here,
you can select that one. And then you can pretty much select
another one over here, which is EC two
container registry. So here you can select
Container registry, read only access,
and then click on next and then click
on Create Role. Now, give a name here. Es node policy. And then I just click
on Create Role. I say my principal
name is missing. Let me see if this is
the right way to do it. Just give me 1 second. Let me copy paste this. Let's see if I'm
able to modify this. Yeah, so I will paste put it on a notepad and put it on what I copy
pasted just now. So what I copy pasted
is the trust policy. So ECS note policy has
this trust relationship, and it needs to have the
principal name like this, saying that this is a service
E to aw amazon aws.com. And then it needs to
say STS assume rule. This is the trust relationship which needs to establish
with this service. That should be part of your node policy because the node has to connect with the EC two
instance for communication. Do remember that this is mandatory, that you
should have it. If you don't have all
these three policies as well as this relationship
with the trust, it will not show up over here. Now you can see that EKS
node policy is coming up. So just check the note. I will put the note on the
same video below the video. And then you can access
that and, you know, you can copy paste that same
trust policy into this um, on the rules as well. Now that you've completed that, now you can use
the default stuff. You don't need to
modify anything. But if you have any labels, stains and tags, you can put it up across here and
then click on next. Now here we are choosing
what kind of, um, workload um, you know, workload distribution
you're going to do in terms of compute
and skiing option. Now here we are choosing
Amazon Linux two, and we are choosing on demand
instance rather than spot, and we are selecting
T three medium. As our family and the type
of the size of the instance. Instance type we are
choosing over here, we're giving a disk size
of 20 GB and we are creating two nodes here
as a desired size, a minimum is two and
maximum is two nodes. Maximum unavailable, it says at any case or
any scenario, you know, if there is going
to be downtime or if there is going to be an
upgrade on your version of the current version of
uberts how many number of nodes can be unavailable
or can go down. So you've put number
over here and given one. You can also give percentage
like 50 percentage. It also means one, if you're
going to go by percentage. Now here one, which means that one node will go down
and then it will go to the upgrade process and the other node will be
upgraded later point of time. Click on next, and these are the subnets your
notes are going to be hoster on and um
just go forward with Um, just give you 1 second. What I've done is like
I've added the E one as well over here so that
my nodes can also be, you know, created
on the e1e node. So one E availability zone. So just go over here and click
on Create. There you go. So the one E does not matter if you're going to
create nodes on it, but it matters if you're
going to create luster on it. Okay? So that's the
difference between a cluster and a node group. So now that the
cluster cannot be created on one E a
variable T zone, but nodes can be created
normally on one E as well. So that's why I've
selected that. So now I'm in the node group. So let's go back to cluster. And let's go back to
compute over here. This may take a
little bit of time, and as you can see,
that's processing. And if you open EC two
instance, in some time, you will start seeing
instances created for your node group under this
cluster, ECS cluster. So the EC two instance
will be created. If you're going towards Fargate, Fargate will be creating a
serverless enrollment for you. But as this is going
to be on EC two, it's going to be on
EC two instance. You will see
instance popping up. This may take another
five to 10 minutes for it to start the instances, do all those deployments, pot creations and everything. I'll pause the video and pause it once this
is all completed. This is completed and as
you can see over here, I have two notes here configured as part
of this node group. Now, these two nodes are going
to be working with me on all the applications which I'm going to deploy
to covenants. And these are going
to be deployed on. In terms of resources, now when you go to
workload and see pots, you'll see that a lot of
pots are being created. Now, all these parts are part of the Cup system
namespace which is nothing but your core systems, namespace pots over here. You also have nothing
on default because it just something we need to
create on the third part. The third part, I will
put it on the next video. Uh, where I'll be creating
pots for our nginx, and then we'll be browsing
the engineX service through this port cloth EKS system. Next one is the replica set deployments and
all those things. There's nothing on default, but if you go to Coop system,
you will see something. You have the cluster
configuration which you have the nodes, which is part of the cluster. And then the name spaces, which is available and all those items over here in terms
of service networking, config maps and
secrets, storage, authentication, authorization,
policies, extensions. You're going to get
all those things, which is the Cubants
offering in AWS, but it is much easier do manage because you will be seeing
that all on one console. But if you ask me what is
the best way to create pods? Because there's no option of you to create a pod over here, there's no option
to create a pod. Which means that you need
to use Command line. In the next video, I'll show you how to work with Command Line and to create pods using this particular
command line tool. Thank you again for watching this video. I'll see
you on the next one.
70. Labs Fargate Hand's On: Hey, guys, welcome back to the next video. In this video, we are going to talk about AWS Fargate. We're going to configure ECS using AWS Fargate; that's pretty much what we are going to do in this video. So let's go ahead. I'm already in the ECS service and I've already clicked on the cluster, and you can click on Create Cluster. Now, you'll actually be looking at me creating Fargate, I mean, the instances through Fargate, and you will witness how easy it is to do it when compared with the EC2 instance. So now I'm going to create HTTP hyphen ECS... Fargate hyphen ECS. I'm sorry, let's just end it there. So this is HTTP hyphen ECS
hyphen Fargate. So this is basically
the cluster name. And here, I'm going to use Fargate over here
for server less. So I don't have to give
any kind of details about the EC two instance type
or anything like that. So I can skip all those
things because Fargate will basically run, you know, run all the instances which you have and it is very useful for bust workload and tiny batches and workloads like web
servers and stuff. But it's highly recommended
to go for EC two instances if you have a consistent
and large working load, but this is just a demo, so I'm just going to
use Fargate over here. Do remember Fargate is not free, so there may be charges applied when you're
going to go for it. So just click on Create. Now this is going
to create a cluster over here in some time you will see the
cluster coming up. But let's not wait for it.
Let's go to task definition. Crenew task definition. This is the task definition
for HTTP, Iphon gate. Iphone task. So here, I'm choosing Fargate
as the launch type. And here, the network mode is disabled and it is coming
as suitably as VPC, so you don't have to modify it. So the instance
infrastructure requirement, I've given the default ones. And here on the
container details, I've given the container details like the ginex container name, and then the URL RO, you have to get it from here
from the container registry. GeneX copy, paste it over here, nothing else to be modified. Port number, container
protocol, port number. I think it's 80 as well. That's basically the port
number sorry port name, sorry, DPI and 80.
It's optional. You don't have to really
give it, so it's automatic. Going to disable
the Cloud watch. And then I'm going to create. Now, this has created
the task for Fargate. Cluster creation has completed, so you can see the
cluster as created. Let's get inside the cluster. Let's create a service. Now here, this
automatically says AWS fargt and the name
of the existing cluster. In terms of capacity
provided strategy, you can see that Fargate
is selected already and there is no way of
using the default one. Can also add another capacity of spot instance and
give more weight. So I'm giving five to one. I mean, I want more instances to be created on
Fargate spot instance, but also I need at
least one instance on Fargate to make sure that some servers are available permanently and some servers are available whenever
there is a requirement. So I'm going to create 51 is two where more priority is
given to spot instance. So the latest version of Fargate is going
to be used here. So that's the latest version, and I'm going to use
service over here. Se fargateTask. And then here the service name, which is going to be um, HTTP. Fargate phone service. I'm going to create
one desired task and then I'm going
to click on Create. Now, this would have created
the deployment process, and this is going
to take some time to get the application deployed. Once the application is
deployed to Fargate, I will come back and show
you how to browse it. I guess almost all
task is completed. Let's go and verify. Go to your cluster, cluster name and within
that go to service. The only way you can
get the IP address of Fargate if you follow
the same instruction. Once you're on the service, go to task in the task, you will actually
see the task name, when you click on you will see the Fargate
details over here. This is basically
of Fargate details, which is part of your task. This task is running, and this has created a
Fargate spot instance. As we have given more
priority for spot instance, so it has created spot
instance, but do not worry. If there is a
problem, if there is a issue where the resource
is not available, then it will actually create
the instance on Fargate. Because we have
given both options as part of the balancing. Fargate spot gives you
more discount because it uses the available resources
which is not being used. That's pretty much why it is saying Fargate spot and
that's why we have given more priority or more H to Fargate spot so that it will create Fargate
spot instance. Now here's the
public IP address, copy the public IPRress
pasted over here. And you see that welcome
to Engine ex page. This is pretty much how you can configure and how you can browse your ECS instance on Fargate. Thank you again for
watching this video. I will see you on the
next one and before that, if you see the EC two, there'll be nothing running over here. There'll be no EC two instances running because this
is running at Fargate. Fargate is a serverless
computing service, which means that it
is running without any interface. Thank
you again, guys. I'll see you on the next video.
71. Labs Lambda Hand's On: Is welcome back to the
next video in this video, we are going to do hands
on session on Lambda. Now, do you remember
that this is a very short simple session
on Lambda using PHP. We are going to print
hello world using Lambda. All right, so to do that, you need to be in
your AWS console. Once you're in AWS console, you can go to the
service called Lambda. So this is a separate
service available for you, middle click it so that
it opens in a new tab. Once you're in that tab, you can actually see
the options you have. Now, primarily, you
will see what it is. It's a compute. So it's
kind of a compute power. So it's a serverless compute. It is critical for your small load of coding which needs to be
run on the background. So now you can get started by creating
a function over here. You can also run a test function over here
using different programs. So basically, how it works is you type in your program
and then you run it. So now you get the
message over here. So the code which
you are going to type is basically Python code. So select that Python code and copy paste the values
which we have out here. So I think you can copy
paste it, I guess. Yeah, you cannot edit it. So this is just a demo stuff, so you just can just run. So what we're going to do is we're going to
create a function. We are going to invoke
the Lamb Do function. There is some free Lamb Do calls here given for you
for you to try it. You can see that first 1 million trans request per month is free, and first 400 GB per
second per month is free. After that, per
request is going to be 0.20 per 1 million and per GB, it's going to be $16. You can see much
more in detail about the Lambda pricing over
here as well as you can use the pricing calculator to understand how it's going to work on your global
infrastructure. Now, click on Create function. Now this should enable you
to create a new function. Now you can author from scratch, which means that you can create your Lambda function
from scratch where you don't have already existing blueprint or a container which you need
to deploy through Lambda. So here is your blueprint. Here you can actually use an existing blueprint
and then you can execute that blueprint
as part of this. Likewise, you have
the standard ones where creating microservice, hello world function, and then some API interaction
is also given over here, and then even streams is also given over here
and ID automation. Hence these are some of the blueprints which is a variable, which you can modify the existing blueprint and improvise it according
to your requirement. Os you have the container
image over here, you can give the container
URL and basically runs the Lambda function
would run a container, and the logics in that
container would be executed as part of this ambdaFunction. Or else you can actually
use another use the author from the
scratch where you can build your complete Lambda
experience from the scratch. Let's closest to Torio. So let's go open a
function name over here. Your function name as
Hello World function. So now this function is going
to be just for hello world. Now here, you need to
select the runtime. Runtime is basically where
this function would run. So if you are decided
and you have a code on PHP and you are trying to run it on nodejs, it's
not going to work. So um, not PHB, sorry, Python. You have Python code and you need to select the
correct appropriate, you know, runtime
environment for your code. So you have Java 11, Java 17, uh, dot net
and other framework. Which you can actually use here. Do remember that, if you have Java coding, you
can execute it here. You can use it in application for executing the Java program. But the thing is that
it's a bit expensive because it's going to charge based on the number
of requests coming in, so it's going to be a bit expensive because
it's serverless. Select Python over here, 321 sorry one, two version, and then select the
architecture which you want to have this code running on. So basically based on
this architecture, your entire Lambda
function be created. There are two
options here, ARM or Intel in this situation, I'm choosing Intel and here
are some DFO permissions about uploading it to your Cloud watch and for
your logging purpose, and then you have
the execution role, it's going to create a new role for basic Lambda permissions. Then if you want to enable
code signing function, you are also likewise,
the third party services, which is associated with
this Lambda function. You can enable it. We don't
want to do that right now. Let's go ahead and
create the function. Now what we've done
so far is that we have named this
function we have said, what is the runtime
is going to be? This function is going to be. And in this function creation, so we are actually deploying
the code over here. Now, this default code already
comes with hello world. This is pretty much
what we want to use. So this is basically
printing hello from Lambda, so you can change it to
hello world. As well. So pretty much the same function we want to use for
this Python file. Now, once this is done, if you have a custom code
which needs to be uploaded, you can also upload it as a zip follow from S
three location as well. Now, all you have to do is once this is created
successfully, you just have to, um, you know, look at your
configuration file. Currently, you have
only one Python file which is going to be
running over here. Next, you have, you know, you undeployed changes, so
you can just click on this. Firstly, you can test this. You know, if you want to test this with a test
event, you can create it. Let's not test it
with a test event. So let's just
deploy it directly. So you can see that deploying code and then deployment
was successful. Now pretty much we
are done with this, we have deployed this function. Now we need to do the
testing for the same. For testing this for testing this very
simple from the top, you will have this option of the configuration site there's
option for you to test it. Now we need to
create a test one. That's the reason I told you, let's hold on test one because we have to
create it anyway. Test hello world. It's going to be the name of it. Now here, the even share
is basically for sharing this event only for the Lambda
console on the creator. We're going to stick
with the basic ones and here is the template we
are going to use for the testing and then
hit the test button over here and then the
even flow was successful. When you click on the detail, you can actually see the
even flow and how long it took for it to test
this function. So pretty much what we wanted to see is
this area over here. This says that this is a four KV execution log and
it has executed successfully. This test has been successful. There's also another
option of testing this. This is just using your
you were able to test this using this GI console. You can also test it using the command line as
well, the CLI method. So when you have AWS
configuration installed. So this is your
monitoring session. So the one data would come in in some time because the
AMTA function is pretty new. So the monitoring would take some time and it would
come in over here. And this is your
configuration of, you know, your Lambda function. So lays and version
of your function. This is very much very
much introduction to Lambda function. We're not going to
go in any detail here talking about the features. So do remember that.
Another way of executing this is
by your CLI method. In this situation, I'm
going to use the CloudShell to test run this
particular function. Let me copy the function name, so it's going to
be useful for me. You can give a command over here: aws lambda invoke. This is going to invoke a function by giving hyphen hyphen function hyphen name and then the name of the function, and then I'm going to put the result into a text file called output text file. Now just cat this output text file, and you can actually see the hello world which we've created.
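A minimal sketch of that CLI test; the function name is an assumption based on what we created above, and the output file name is arbitrary:

```
aws lambda invoke --function-name HelloWorldFunction output.txt
cat output.txt   # prints the "Hello World" response returned by the handler
```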
This was a successful test, which you can also run from the CLI method as well. Thank you again for
watching this video. I hope it was useful for you
to get started on Lambda. Thank you again. I watch I
will see you on another video.
72. Labs Other Scaling Policy in ASG Hand's On: You guys welcome back
to the next video. In this video, we
are going to look at what happened yesterday
before we proceed further. So about 20 minutes later, 25 24, 23 minutes, I guess, 24 minutes later, you can see that it has
increased the capacity 1-3. And after that, when
it realized that the load has went down
after 23 minutes, right? So it has decided to shrink the capacity 3-2 and then
two to one eventually. Uh, in a course of
1 minute interval, it was reduced 3-21, two to one. I guess this 20 minutes, 23 minutes includes the
time when we canceled the load on the system from performing so that
inclusive of that. So maybe I think roughly
about ten to 15 minutes that would have triggered
the reduction of scaling in in terms of the
technical word scaling in reduction
of shrinking the capacity to the
desired value of one because that's the
minimum capacity which is required over here. That's the minimum
capacity desired, so it goes back
to the one state. Now, let's look at the
other two different types. You won't be not
configuring them. I mean, it just like it's almost similar to
dynamic scaling policy. So if you understand
how to do this, it's very easy to do predictive and scheduled
action as well. Let's look at the
predictive scaling policy. This is as similar as a
dynamic scaling policy, but it has one neat feature. This is called um, scale based on forecast. Now, what it will
do is it will look at the forecast of
your application, the past increase in terms of your auto scaling,
um, mechanism. And then it basically
predictably scales will forecast capacity. And the scale will trigger based on whatever it has
previously experienced. So it uses that, um, in terms of understanding that. So it will not be scaling action based on prediction
if it's turned off, but if you turn it on,
the scaling will be, you know, um, turned
on a given time. So this will be using a
predictive information which is forecasted in your past and it will use that to do that. You can disable this and you can still go for the CPU
target utilization. So it's the same as
the previous one. So you will see the CPU, the network in the network
out the load balancer. So these are four items which
are already on dynamics, so it is simply the same. But there is one extra metric
over here custom metric. Now, custom metric
can be triggering a specific Cloud watch metric or you can use the default
ones which is out there. And then this is called load
metrics and scaling metrics. So how you want to utilize
the burst predictor of this. So average CPU or is
like a total CPU. Likewise, you use the
custom metrics for it. So when you select
CP utilization, this is as equal as what we
did on the dynamic scaling. And then here, one
of the advantages or one thing which separates
from the dynamic is basically like pre
launching instance. Now, this you can give a pre launching
instance capability where before this
is a prediction, right, or it is a
forecast, right? So it basically looks at
the forecast and then have these machines ready 5 minutes before any kind of a
forecast is being triggered. That will give you an edge from the dynamic because dynamic
has to get your service ready when it looks at a load or a specific
breach in your metrics. But here, it will use the
forecast to have this ready, the instance ready 5
minutes before or, you know, which was the
timing you would give. But the maximum of 60 is
what is specified over here. So you can have an instance
ready before 60 minutes. And have that buffered. So you can also enable
buffer maximum capacity. So it will actually have maximum capacity of whatever you have mentioned on
your auto scanning, so it will buffer till that. So this is one option over here. Another one is scheduled action. So schedule action is
basically you say that I need this capacity by
this particular, you know, every 30 minutes, every one day or every day in a week or every week in a month. So you specify what you want. Terms of recurrence, you say every Friday I'm
getting a huge sale, and I will say, okay, I want every day in which
where you can specify the start day over
here or Friday and then you can set time
over here on that. Every day it will recover on that specific day of a
month or a week or a month. This means that
every Friday here. If you set here every
Friday in a week, I'm going to get this
one started from 00 con 00 from midnight of my Friday
till the Saturday morning. So it's like zero, zero, zero, zero or 23 59. So this will say that from the Friday night
Friday midnight to, you know, Saturday midnight. So this will be basically a schedule which runs
on a specific capacity. For example, currently it's one, five, and one, right? So you can say, for this timing, I want five; all five should be available. And by that time, it will reduce and go back to the minimum, which is one. So that's what we are saying.
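A hypothetical CLI version of that kind of scheduled action; the Auto Scaling group name is a placeholder, and the cron expressions mean midnight Friday and midnight Saturday (UTC):

```
# Scale up to five instances every Friday at 00:00
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name friday-scale-up \
  --recurrence "0 0 * * 5" \
  --min-size 1 --max-size 5 --desired-capacity 5

# Scale back down to one instance every Saturday at 00:00
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name saturday-scale-down \
  --recurrence "0 0 * * 6" \
  --min-size 1 --max-size 5 --desired-capacity 1
```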
So here you just give it a name, say test, and then create this. So this will actually be a
schedule where in which you can also have the
dynamic skilling policy as well as Schedule one as well. So what will happen is if the dynamic skilling policy a maximum of five and
you've scheduled this. On that time frame,
it will become five. So this becomes inactive, so it becomes like, you know, something which cannot be uh, satisfied because you already at the maximum at this
point of time. But if you have configured a maximum of ten instance and this one says
five over here, and this one says ten, by default, it will be ten because that's the
maximum it is. But here you say only five. I will look at the load on these five systems and then
basically increase it to ten, it makes sense for having a dynamic policy as well as a schedule
action saying that, you know, in the days, I want five as a maximum. And then basically
this dynamic policy will keep tracking
your CPUtilization. If it goes beyond that point, then it's going to spin up
new instances or still ten. So that makes sense if you have a ten instance and you're going to configure five over here. So that also makes sense. So it's about the ratio or
how the load is going to be, what you see your situations and how much money you're
willing to spend, your customers willing to spend. All those things depends on
your auto skilling group. All right. Thank you again
for watching this video. I hope that this helps you
to understand auto scaling. Please do leave your review. Your review means
everything to me. I'm just trying to
work hard day and night to get you this
kind of hands on oriented approach and tell you the real time scenario what we actually use on our company
as well. Thank you again. I will see you on another video.
73. Labs Types Of EC2 Instance Hand's On: Hey, guys, welcome back to
the next video in this video, we are going to talk about one
of the hands on over here. It's about the different EC
two instance types over here. We're going to understand and compare some of the
Easy two instance type. Open or AWS console and
open EC two instance. Now, you should find the
EC two instance over here. So here, just click on instance type and you
should get to this page. Now, if you select
any of this instance, you're going to get some
detail about this instance. For example, it will tell you the family
name and the size. It will not tell you whether
it is a general type or a compute optimized or
anything like that. You just need to figure it out yourself by the family name. What you can find towards the
details over here is that you can see the type of
instance family size, and then what hypervisor is used for creating
this instance. You can also see SM supported
root device like EBS, Elastic Block Storage will
be the root file system. The core operating system
will be installed on EBS. Whether it is a dedicated host or on demand
hibernation support, likewise, you will see
some details over here. Then here you see some details
towards your computer. You can see the architecture. You can also see the
memory size over here, how many virtual CPs
would be assigned for this particular virtual machine. You can also see the
networking over here. This could be part of any
of the available T zone, and you can see the network
performance is very low. This particular machine
is D one micro, so that's the reason why it
is all low configuration. Here, on the storage
information, you can see that
Elastic Block Storage is given over here
and it is supported, and encryption on Elastic
Block Storage is supported. And in terms of GPU, there is no GPU enabled
at this moment. And you can see the
pricing as well. So here, on demand Linux
pricing is 0.02 USD per. So say pricing is the same, RSSal pricing is a little high, and you can see the
Windows price is almost the same as the
Linux pricing as well. So let's just go
to the last page. I mean, let's go to
the last one and then compare the
last one over here. So now I'm on the
last page right now. So this particular machine is
$13 an hour. So let's see. Oh, there is $26
an hour as well. Okay, so let's see why it is costing $26 an hour over here. So first, let's go to
the details over here. This is X two IEDN metal. That's the family
name over here. Let's talk about the compute. So it has 64 cores, which means 182 virtual
CPUs and that is 182 CPU, meaning, two threads per core. You can see that it has
about 4,009, I mean, 4,096. GB of memory. It's about 4 terabytes of
memory you have over here, let's talk about networking. It is 100 gigabyte of network, which means really
fast and you have IP address per interface is
50 IP address per interface, and IPaddress version
six is also enabled. Let's talk about
storage over here. EBS is supported. It comes with 3,800
gigabytes of storage already and it includes SSD
as part of the storage. You can see that
there are two disk is divided for this 3,800 storage. There's no accelerator. There is no GPU over here and the pricing is $26 for b two, and then you have option four so sell NX and the
normal Linux flavors. So it's expensive on Windows, $32 per hour for Windows
and RHEL is about $27. Likewise, this is one of
the really high value, you know, version machine. But I guess there is even higher value
version machine as well, which has this GPU in it. So I'm just checking for
something with a GPU. Let's talk about some
GPU oriented missions. GPU is equal to a 16. Okay, so this g6e, 24 large. Now, this has definitely
has GPU over here. I've clicked on it, so I've comm inside this
particular machine. So now if you just go
down to accelerator you can see that it has NVDSGPU which is about 176 GB of memory GPU memory for
graphical processing. And you can see pretty much it is almost the same
as the previous one. It has about 768
memory on the system. And you can see this
is $15 an hour. But I guess you have options
for GPU equal to 16 as well. GPUs equal to 16. Yeah. So now, this is going
to be much expensive, I believe, and this is about 15. Yeah, it is about $17
windows pricing over here. But the thing is that it
has 16 GPs over here, about 12 GB ram of
your GPU memory. Likewise. So you have various different
configurations on the instance type over here. So if you want to know
about the instance type, the best place is to come over here and understand about it. You can also sort it by
what your requirement is. So if storage is my requirement, I can type storage over here, then I can put an equals sign and the size I'm looking for in terms of storage; say I want 160 GB of storage. Now you have a small instance type which has 160 GB of storage available as HDD, and it costs around 0.075 an hour. Or else you have a 2xlarge type, which has SSD rather than HDD, and it costs about $1 per hour.
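You can do the same kind of comparison from the CLI as well; this is just a sketch, and the filter and query fields are examples rather than the exact console filter:

```
# List instance types that come with local instance storage, showing
# vCPUs, memory (MiB) and total instance storage (GB)
aws ec2 describe-instance-types \
  --filters "Name=instance-storage-supported,Values=true" \
  --query 'InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB,InstanceStorageInfo.TotalSizeInGB]' \
  --output table
```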
But then again, the pricing on the savings plan and with reserved instances will be much lower, because you will actually be paying on a yearly pricing plan, which means that you're going to get some discount. Thank you again for watching this video. I'll see you in the next one.
74. Labs Types Of LB In AWS Hand's On: Hey, guys, welcome back
to the next video. In this video, we are going
to talk about low balancer. Now, to understand low
balancer and towards hands on, you can open EC two, and when you open EC two, you will have the
low balancers over here on the EC two like this. Then you can click on
Create Low Balancer. Now you have three
options over here. Actually, there are
four options over here and towards creating
a low balancer. Now, a classic low balancer which was in the
previous generation is kind of depreciated and
it will be leaving soon. And classic road balancer is replaced with three other load balancer types over here. So the first one to start with is an application load balancer, and then you have Network load balancer and gateway
load balancer. What is an application
load balancer? Let's just talk more in detail
about each one of these and then just compare
it with one another. So firstly, application
load balancer is majorly used for your
web applications. That's the name of application load balancer came into picture. So it is used for
web applications, microservices, or anything which uses application
framework. In terms of network
group balancer, it is used on a low latency
high throughput applications where you need a
very good response from your applications. So there we would be using this. The gateway load balancer
on the other hand is used for security appliances, traffic inspectors and any
kind of application which uses a third party
application and wants to balance the
load between them. So in case of IO or
something like that. That's basically a
gateway load balancer. So we will do a sample or demo of one of the
load balancer just show you around how to create this load balancer configuration and how to configure this. Preferably, we will do
application load balancing and set up one application
on it on the next video. But here in this video, we will try to understand
the differences in them. So firstly, the application load balancer is layer seven load balancer
for application, and network load
balancer is layer four, and gateway load
balancer is layer three. So each layer contributes a little bit of
differences between them. A layer seven load balancer is basically used for path based
and host baased routing. So for example, a
traditional load balancer looks like a network
load balancer, which is a layer road balancer. So what a layer for
load balancer would do is when a request comes
to the road balancer, so the load balancer
will redirect the request directly to
something which is beyond it. So you will normally
say that these are the service of services
beyond the load balancer. And all the road
balancer will do is it will be a bottle neck. I will create a bottleneck for the incoming
request and then routes the request to different
applications beneath. Okay. So basically what you see in this architecture
is what will happen. So the request comes
in and then it goes through the network
load balancer, and then you can configure
directly items under it. There's one option
of configuring directly the EC two
computer under it, or you can go
through another ALB, or application load balancer. And then from there, it can
connect to multiple items. So either ways you can do it. But do remember
that load balancer is a very straightforward load balancer where it basically Sensor when a
request is received, it balances the load and sends out to whoever it
is reporting to. It does not classify
the request. Now, classification of request comes through application
load balancing. Application load balancing
not just balances the load, but also it classifies the load. So as I told you, in
terms of routing, here, the network load balancer
will do an IP based routing, very simple and straightforward. Okay? Application load balancing will do a path based
and host based routing. Now what is path
based on host baase? The URL which the customer is sending to that
load balancer, it will analyze the URL, and there will be
two options for it. Path base or host based. Path base means it will
look at the URL path, and based on the path, you can actually configure
the application load balancer to point to
a custom directory. So in that way, that
custom directory would be that directory or
custom application would be called upon
by looking at the URL. So a lot of sorting happens at the application load
balancing level rather than the
network load balancer. Network load balancer
will directly send the request without having
looking at the URL. But application load
balancer looks at the URL, see what path they are requesting
and then takes a call. So there is a
judgment call happens over here at the
application load balancer, which doesn't happen on
the network load balancer. So this is the major, major difference between
application load balancer and network load balancer. So application balancer not
just forwards the request, but also analyzes the URL, and then it basically decides based on path based on
host based routing, where in terms of
network lo Banste basically does a IP
routing, IP based routing. In terms of the gateway
in this aspect, the gateway balancer, um, you know, transparent
routing through appliances, it does a transparent routing
through appliance and it basically understands
the firewalls and the third party
applications beyond that. And basically, it does
the, you know, um, host routing of your, uh, you know, URS incoming
application traffic. Now, on the other hand, the ALB or application
balancer will handle HTTP, HTTPS web sockets. Network Load Balancer
handles TCP, UDP, TLS routings and
Gateway load balancer handles all IP address traffic. This is the major difference
between all these of them. So as I told you, if you're using micro services,
containers, application, web services, all sort of things will
be supported by ALB. But if you want
something which is ultra low latency and which can handle a
lot of network load, then you can go for
Network load balancing, even for your microservices
and containers. But do remember that this
network load balancer will be using a very
straightforward routing. That is the reason it is ultra low latency
because it does not, it does not sort
out or it does not, has a deciding factor or
the load balancer level. You also have a option of configuring network load
balancer under that, you can configure a ALB as well for it to sort
out more in detail. That we will talk
later point of time, but these are the
options which is available while creating
a load balancer. In the next video,
we will see in much detail about
the load balancer. Thank you again. I'll see
you on the next video.