Transcripts
1. AWS Accounts: Hello students, welcome
to another video. Before we move to AWS and
start exploring its services, the very first thing that
we need is an AWS account. But before you create
your AWS account, there are a few things that you need to know. In this video. We will be talking
about the elements of an AWS account and the things we need to create an
account for ourselves. Let's get right into it. This is an AWS account. I am assuming that you don't
have an account already. So we are going to create
one in the next lesson. And AWS account is a
container in which you create and manage all your
AWS resources and services. The first thing you need
to create and manage an AWS account is an identity. We will be covering
identity in the IAM module. But for now, you can
understand it as a user. This user will control and
manage their entire account. Then you will be creating
an account for yourself. You will act as an identity. Now as a user, you will create
some resources in your account that will
eventually generate some bills. You can manage all types of billing in the same
account as well. Hence, each account has its AWS resources,
identity, and building. An AWS account is a naturally
secure environment. No one outside your account can access your resources and
you will get the bills. Only for the resources you
provision in your account. You can start your AWS journey
with a single account. But when your organization grows in size and complexity, it might be difficult to manage everything in a single account. That's why AWS recommends using multiple accounts. For instance, you can create separate accounts for the development, test, and production teams in your organization, because it simplifies the security controls and billing process for each team and provides different teams with the required flexibility so they can focus on innovation. Now, let's look at what you need to create
an AWS account. First thing, you
need an email id, which must be unique. That means it should not be used in any other AWS account. You also need a credit
card or debit card, which is required for
billing purposes. With these two pieces
of information, you will be able to
create an AWS account. When you create a
new AWS account, you are basically creating
a user that has control over all the services and
resources in your account. This identity or user is known as the AWS account root user. The root user can sign in
with the email address and password that you used when you created the account
in the first place. We will talk about root user in detail in the AWS
security module. When you login to
your AWS account, the first dashboard
that you see is known as AWS management console. It's a web-based console which you can access
through a web browser. There are other ways to
access AWS accounts as well, which we will see
in the next module. You will also be
asked to authenticate your identity when you
login for the first time. That's basically the identity
and access management, commonly known as IAM service, ensures secure access to your AWS account and
the resources in them. We will also explore in
detail in a later module. So now we are all set to
create an AWS account.
2. Create AWS Account: Hello students, welcome back. In this video, we will create
an AWS Free Tier account. The purpose of the AWS Free Tier is to let customers and new users explore AWS and gain the required hands-on experience before they actually start building their Cloud infrastructure. So as the name suggests, the Free Tier consists of services and resources that are free to use in AWS, with some terms and conditions. As we have discussed in the previous video, you need a unique email ID and a credit card to create
your AWS account. You will also need a phone to receive an SMS verification code, which is part of opening the account. Now you might be thinking that if you are going to use only free AWS services, why do you need to input your card details? Well, that's because the Free Tier is a limited offering, and in case you exhaust the limit or use a service that does not come under the Free Tier, you will have to pay the required amount. Therefore, it's mandatory to set up a billing method while creating your AWS account. But that's not something to worry about. In this training, we will only be using Free Tier services, and if you stick to what we do in this training, you won't get charged. To further ensure that, we will also set a budget for our account and configure an alert so that we get a notification if we cross the set budget. Let's open a web browser and start creating
our free account. I have opened a
web browser here, and I did a search for AWS free tier. And the very first result here, AWS Free Tier, is what we want. The first page helps you understand what a free account is and what are the services that come under the Free Tier. I will be covering the Free Tier in the next lesson. So let's skip that and
create a free account. You will then need to fill
in the required details. Your unique e-mail address
for the root user, and then choose a name
for your account. You can change the name and alias later as well, if you want. Then click Verify email address and check your email. Enter the verification code here and click Verify. You will be greeted with a message that your email address has been successfully verified. Now let's enter a password for the root user and confirm the password, and click Continue. You have finished step one, and then it will take us to step two. The first question is: how do you plan to use AWS? Just choose Personal. Then add your name, phone number, your country
or region, address. And then click on accept the terms and conditions and
continue to the next step. Next is the billing page that you need to fill in
your billing information. Enter your credit card
number, expiration date, cardholder's name, and billing address, and then continue to step four. You may be asked to confirm your identity. So again, you will need to enter your phone number and do the security check. And you will get a code on your phone that you need to enter here. I have entered my verification code here. And let's click on Continue. Here we have the support plan. It is a way to get
real time help from AWS related to your
resources and services. For our purpose, we will
choose the basic support. It is completely
free and it only helps you with the account
or billing related issues. Finally, click Complete sign up. And that's it. We can now go to the
AWS management console. Click on the button in
the middle of the screen. Here, it will ask us for an
account ID or account alias. We have not specified any alias for the account and don't know what the account ID is. We also don't have a username, because that is for an IAM user. What we do have is the root user email. So let's choose to sign in using the root user email and enter the email address we just signed up with. Enter the password and click on Sign in. Congrats! We are now logged into our account. And this is the
Management Console. So now it's your time to create an account for yourself to
go ahead with this training. That's all for this lesson. We will understand the AWS Free Tier offerings
in the next session. See you soon.
3. AWS free tier: Welcome back students. In the last video, you have created
a new AWS account and added your card
details for billing. This reminds us
that we need to use AWS services in a
planned manner. Otherwise, you will end up
paying for some services. Well, don't worry. In this session, I will
walk you through how to use your AWS account so that you won't get charged. In the next couple of sessions, I will also help you secure your account, set a budget for your account, and configure an alarm so you get notified if you cross the budget limit. So let's get started with the AWS Free Tier. The Free Tier lets you try certain AWS services for a definite period without
having to pay anything. It means AWS provides a list
of AWS services that you can use for free on the AWS platform to gain
some hands-on experience. Now, let me take you to the
AWS free tier web page. This is the AWS Free Tier page. You can see all the details
about the AWS Free Tier, which is classified into three categories. The first one is free trials. Under this offering, you can use some AWS services for a short period, and these services expire at the end of that period. The second one is 12 months free. It means you have 12 months to try out these services for free, and it starts from the date you created your AWS account. The last one is always free. These services are
available to you forever, but there is a
limit as to what amount of these services you can use. So they never expire until you cross the available limit. Next, we have an option here to filter AWS products based on different product categories and Free Tier offer types. Simple enough, right? Let's try it out to understand better. Let's select Always free. Here you can see only those services which are always free within a certain limit. Now, take AWS Lambda. It is a serverless compute option, which we will see in a later module. But here you can also filter it from the product category. As of today, it allows for 1 million invocations per month. That means if you stay under 1 million invocations, it will always be free. This Free Tier offer never expires. Now select 12 months free. In this category,
Let's see, storage. S3 for instance. It is an Object Storage
Service which is free for 12 months for up to five gigabytes of storage. After that, you will have to pay a certain amount. S3 is a great option for trying out a static website. We will see that in the storage module of this training. The last example is Amazon EC2, which you use to deploy your application. AWS offers a twelve-month trial of up to 750 hours of usage every month for the t2.micro instance type. It means you can run your
application free for 12 months. This is a significant
period to master AWS. You can try out and learn
almost all the core services. And I hope you make
the best use of it. That's all for this lesson. I would recommend checking
this page before and while using AWS services. There are so many services you can try for free, but make sure you know about the free availability of a service before you use it. That's all for this video.
4. Securing an AWS Account : Hello students. In this video, we are going to understand
how you can secure your AWS account with password policy and
multi-factor authentication, or MFA in short. We will also look at access and secret key later
in this video. So let's get started with
multi-factor authentication. That is, MFA. You have an e-mail ID and password to login into
your AWS account. And once you login, you can access the AWS
management console and perform different
operations into it. Let's assume someone has
stolen your password. Now this person can login and do anything in
your account, right? Especially if he has
administrator access, he can delete
production resources, transfer confidential
data, change the application
configuration, and so on. So how do you protect your account on
top of the password? Well, you can use a multi-factor
authentication or MFA. What is multi-factor
authentication? It is a method of
authentication that requires any user to provide at least
two verification types, which are known as factors to get access to an
account or resource such as a website or a mobile
phone app, or services. Apart from username
and password, users will have to enter a
one-time password, or OTP, delivered via SMS, email, or an authenticator app such as Google or Microsoft
Authenticator. A good example of two-factor authentication
is a Gmail login from a new device. Every time someone tries to
login to your Gmail account, it asks for the password and a system-generated one-time password (OTP) sent to the registered mobile number. Multi-factor authentication
is used in the combination of a password and a security device you own, like your phone. Once you configure MFA
on your AWS account, you will have to provide an MFA token every time you log
into your AWS account. Mfa basically adds another layer of security over your password. So even if someone has the
password to your account, they still won't be able
to log into it, as they need the MFA token as well. Hence, it is a good practice to configure MFA on your AWS account. MFA and security are important topics
from both interview and exam points of view. You could expect questions like, how do you add an extra
layer of security to your AWS root or IAM user? The answer to this question will be multi-factor
authentication. Apart from the
management console, we can also access AWS
through CLI and the SDK. We will understand this in the next topic,
interacting with AWS. But when you access AWS through
any one of these methods, you use an access key
and a secret key. These keys look like the
codes present on the screen, which might not be as user-friendly as a
username and password. But you will have to use them in your code to login
to your AWS account, either through CLI or SDK. These are known as access keys, and they are used for
programmatic access. We will create some
access keys later in the course and see how
to use them to log in. So now let's move on
to the next topic, which is password policy. Password policy is
a way to enforce a strong password for
your AWS account users, because your account is more secure when you have
a strong password. This we can achieve
via password policy. All you need to do is enable
all the rules that you want your users to follow while creating or
resetting their password. And that's all. In the password policy. You can set a minimum
password length rule. Then you can enforce
specific character types. For example, you
may want to have at least one uppercase letter, one lowercase letter, and a number as well. Other than that, you can also specify that non-alphanumeric characters, like the percent sign or asterisk, should be included. Then you can enable
password expiration, which means the user needs to rotate their password
after certain days. For example, after 90
days, 120 days, etc. Next, you can allow the user
to change their password, or you can also opt for only the administrator
to change the password. Finally, you can also prevent password reuse so that users, when they change
their passwords, don't change it to
the one they had already used in the past. So this is great. A password policy is really helpful against any malicious attempt on your account. Password policy is an important topic under AWS security from the certification exam and interview point of view. That's all I wanted to
cover in this video.
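As a supplement to this lesson, here is a minimal AWS CLI sketch of the same kind of account password policy. It assumes you have the AWS CLI installed and configured, which we cover in a later topic, and the specific values are only illustrative, not a recommendation:

    # Hedged sketch: apply an account password policy from the CLI.
    # The numbers below are example values; adjust them to your own rules.
    aws iam update-account-password-policy \
        --minimum-password-length 16 \
        --require-uppercase-characters \
        --require-lowercase-characters \
        --require-numbers \
        --require-symbols \
        --allow-users-to-change-password \
        --max-password-age 90 \
        --password-reuse-prevention 5

You can review the resulting policy at any time with aws iam get-account-password-policy.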
5. Configure MFA & Password Policy : Hello students. I hope you have created
your AWS account. So in this video, we will secure our
accounts by adding multi-factor
authentication and setting up a password policy. Before we start
the configuration, let's quickly
understand what is MFA. It is an authentication
method which validates the user's identity. So when any user tries to access something which is
secured with MFA, they need to provide
two or more pieces of evidence to gain access. So let's go into our AWS
account and secure it with MFA. For that, we need to go to the Identity and Access
Management Service. So here we are in
the IAM dashboard. And as you can see, we have already got a couple of security recommendations
from AWS. The first thing it's
saying is that you should add an MFA
for the root user. So let's do that: select my account, then click on
security credentials. Here you can see the
account details. Let's scroll down a little bit. Here. You can see that we don't
have any assigned MFA devices. So let's click on
Manage MFA device. We are going to use a
virtual MFA device here. You will have to install an authenticator app on your mobile device
or your computer. I use the Google
Authenticator app on my phone because that's
completely free to use. So I will use that. But if you don't already have one installed on your phone, install the Google
Authenticator app, and then click on Continue. The next thing to do is
click on Show QR code. Once you see the
QR on your screen, just scan it using your phone. And it should open the
Google Authenticator App. If you do it correctly, it will generate a code and
display that on your screen. So what you need to do
is add that code here. Just how I have added mine. And then you need to wait
for this code to expire. Because it will
generate a new code. You will have to enter the new code into the
second MFA code box. Once you do that, click on Assign MFA. You have now got MFA
enabled for your account. And you can see that
we have an ARN or Amazon resource name for
our virtual device here. So it's quite easy to enable
MFA into your account. Now, let's go and configure
a password policy as well. Let's go to the IAM dashboard again. You will find the password settings in the Account settings on the left-hand side. We will click on Change password policy and enforce a policy on our account. You can change a lot of
things about the password. For example, the minimum
password length. Let's change it to 16. Is the user required to use uppercase, lowercase, or numbers? You can select all or any of them. I will select all, as it is a good practice to use both uppercase and lowercase
letters in the password. And I recommend you do the same. You can pretty much customize your password
policy as you wish, and then click on save changes, which will do the job. So now, every new user in
your account must follow this password policy while creating or resetting
their password, which will improve
your account security. So students, these are the few initial
security measures that every cloud
engineer should know. Cloud security in general
is a very important topic, both from the certification exam and interview point of view. And I hope you understand
it thoroughly. That's all for this video.
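As a quick supplement, once the AWS CLI is set up (we install and configure it in a later lesson), you can verify both of these settings from the terminal. This is only a hedged sketch of one way to check them:

    # Returns 1 when MFA is enabled for the account's root user.
    aws iam get-account-summary --query 'SummaryMap.AccountMFAEnabled'

    # Shows the password policy we just configured in the console.
    aws iam get-account-password-policy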
6. Set up Budget & Alert: Hello students, welcome back. You now have your own Free Tier account and you have secured it with multi-factor authentication and a password policy. You also understand the AWS Free Tier and the different offerings under it. In this training, we will only be using Free Tier services for hands-on practice in AWS. So if you stick to these free offerings only, you will not be charged. But just in case you accidentally use a paid service or forget to delete a service that could be paid after a certain time, it is essential to set a budget and a billing alert so that you get a notification via email or SMS when you reach a certain limit. In this video, we are
going to configure the budget and alert
for our AWS account. So let's go to the AWS
management console. As you can see, I have logged into my
account with root user. And up in the top right
hand corner here, underneath my account name, we will select
billing dashboard. This will take us
to the dashboard. Let's scroll down to the cost management section
and click on budgets. Here we have some
additional information about budget and alert, like how it works,
benefits and features. Click on Getting Started if
you want to read it further. I highly recommend that you do that as it will give
you further insights. Now, we will scroll back and
click on Create a budget. Here, the recommended cost budget is selected by default. So we will click on Next. Leave everything default. And here, enter your
budgeted amount. We will enter our
budget. For this demo. I will select $3. Scroll down and enter
the name of your budget. Naming your budget
is important as you may have multiple
budgets in the future. Let's name it $3 alert. So it will be easy to
identify it later, and click on Next. Now, click Add alert threshold. Here you can create
multiple thresholds. So if we enter 50
in the threshold, that means you will get
a notification when your AWS spending reaches
50% of $3, which is $1.50 in this case. You can add multiple thresholds as well. Let's add one more
threshold so that we get one more notification before the amount reaches
the set limit. We will add 80% this time. Here. In the notification option. We have a couple of options as to how we want to get notified. Let's opt for email
notification. Enter the email address on which we wish to
receive the e-mail. And then click Next. Leave everything as default in the attach action
page, and click Next. Now let's review everything and then click on Create budget. It will take a few
seconds and we will see a message saying the budget
is created successfully. Here, you can see that the
threshold status is okay. You can use this
budget link to see the details and graph
of our spending. Currently, we don't have
any details here as it takes around 24 hours
to populate the data. So when you create a budget, I would suggest that
you come back here after a day or two
and see these details. Let's scroll back up. And in the top corner from here, you can edit and delete
the budget as well. This is very important
as you might need to adjust the
budget in the future. Go ahead and try it
out. That's pretty much about the budget for this video. We will get the first alert when our AWS bill reaches 50% of $3, and the second alert at 80%. As per the alert, we can come back and check
what services are running. And if we don't need them, we can terminate them
to save the cost. In the next video, we will summarize the topic understanding AWS
account and Free Tier. See you in the next one.
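For reference, the same kind of budget and alert can also be created from the AWS CLI once it is installed and configured (we do that in a later lesson). This is a hedged sketch only; the account ID, budget name, and email address are placeholders you would replace with your own values:

    # Hypothetical $3 monthly cost budget with an email alert at 50% of actual spend.
    aws budgets create-budget \
        --account-id 111111111111 \
        --budget '{
            "BudgetName": "3-dollar-alert",
            "BudgetLimit": {"Amount": "3", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST"
        }' \
        --notifications-with-subscribers '[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 50,
                "ThresholdType": "PERCENTAGE"
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "you@example.com"}]
        }]'

A second notification block with a threshold of 80 would mirror the 80% alert we added in the console.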
7. Methods of Interacting with AWS: Hello students, welcome back. Earlier we discussed the different Cloud
deployment models, which are IaaS, PaaS, and SaaS. But did you know that we can also use AWS services as any of these models? If you don't, well, no worries. Let's understand
that in this video. Here, we are going to
understand what are the different methods
to interact with your AWS account and services. So far, we have been interacting with AWS through
management console, which is a web interface. In addition to that, we have two more ways
to interact with AWS. First is the AWS CLI, or command line interface. It is a tool that you install and configure
on your computer, which allows you to access the AWS services and automate them through your computer
terminal and scripts. The second method is the SDK, which stands for Software
Development Kit. It is used by programmers
and developers to interact with AWS services directly
from the application code. If that doesn't make
complete sense, don't worry. We will understand
them in detail. Let's start with
the AWS console. Again. It is a web interface to
interact with AWS services. And since it can be accessed
through your web browser, it is the easiest way to
manage resources on AWS. Just login to your AWS account
and do whatever you want. What's more, a mobile version of the AWS management console
is also available. You can install it
on your phone and access almost all AWS
services from your phone. Isn't that great? Now let's look at
some advantages of using AWS Management Console. If you are a beginner, AWS Management Console
is the right start. As you can very easily
interact with AWS services. It's like navigating
just another website. But as we discussed, it's easier because of its very clean, step-by-step
user interface. For all these reasons, the management console
is the best choice for performing administrative
tasks as a beginner. Now, why only for beginners? Well, that's because some operations in the AWS console are manual and take time. Hence, it's not possible to automate everything through
the management console. And that's why we use a
Command Line Interface or CLI. Now, let's understand this. The AWS Command Line Interface, or CLI, is a unified tool to
manage your AWS services. It is a command line
interface that allows you to manage your AWS service
using your computer terminal. This is a tool you can easily
install on your computer. It supports Windows, Mac, and Linux operating systems. You can do everything
that you can do with the AWS management console. For example, if you want to list all users in your account, then you will open a terminal on your computer and type aws iam list-users. It will get you a list of all available users in your account. And if you want to delete a user, you could use the command aws iam delete-user with the username, and it will delete that user from your account. You can also combine these two commands to execute
both tasks in one step. That's a significant advantage of using the AWS CLI over the management console: you can automate tasks in the CLI. Let's assume you have 50 users in your account, and you want to delete the users whose names start with M. If you have to do this from the AWS management console, you will have to find each user and delete them one by one, right? But that's not the case with the CLI. Here, you can write a script and use the above two commands to perform the same task, and this will be done quickly. Similarly, you can manage multiple AWS services using the same script, as shown in the sketch below.
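Here is a minimal sketch of that idea, assuming the AWS CLI is already configured and that the users have no attached access keys or policies (those would have to be removed first). The user-name prefix M is just the example from above:

    # Hedged sketch: delete every IAM user whose name starts with "M".
    for user in $(aws iam list-users \
            --query "Users[?starts_with(UserName, 'M')].UserName" \
            --output text); do
        echo "Deleting user: $user"
        aws iam delete-user --user-name "$user"
    done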
To access AWS through the CLI, you need to authenticate yourself, just like in the AWS management console. But here, instead of using a username and password, you will have to provide an access key and a secret key. The access key acts like a username and the secret key is your password. If anything sounds confusing, don't worry; we're going to
install and configure AWS CLI in the next lesson. Now, let's look at some
advantages of using AWS CLI. It is a great way
to interact with AWS through a
computer terminal and is an excellent way to automate tasks and achieve
infrastructure as code. Finally, we have the SDK, which stands for Software
Development Kit, or dev kit in short. As the name suggests, it's a set of software tools and programs used by developers
to create applications. Now, if you are thinking that
it's similar to AWS CLI, That's not true
because you cannot use AWS SDK in your
computer terminal. The SDK is only used within your application code, which allows your application to interact with AWS services. The SDK supports many different
programming languages, such as C++, Java, JavaScript, .NET, Node.js, PHP, Python, and Ruby. There are also mobile SDKs for Android and iOS as well. Let's understand this
with an example. Assume you are developing a mobile application that will allow users to
share their photos. For that, you will be using
Amazon S3 as the storage. We will talk about S3 in
detail in another module. But in this case, your application needs
to interact with the Amazon S3 service, right? You cannot use the AWS management
console or AWS CLI in this scenario because you want your application code to
interact with the AWS service. Therefore, you use AWS SDK. It will allow you to access and manage your AWS services
through your application code. You might already know the
advantages of using the SDK. It is a perfect tool to interact with AWS from within your application code. So students, I hope you now understand all the three methods
of interacting with AWS.
8. Using the AWS Management Console: Hello students, welcome back. In the last lesson, we looked at different ways to interact with AWS services. In this lesson,
we will deep dive into the AWS
management console and understand what are
the different ways to login to the AWS
management console. How to create an account, LES, how to select an AWS
region in our account, and how we get to various AWS services
and their dashboard. So let's get right into it. First, you need to sign into your AWS account to get to
the AWS management console. And the credentials
that you use to sign in determines what type
of user you are. As of now, you know the root user, who
is the owner of the account. This user has access to all the resources and
services within the account. Then there are other
types of users in your account
called IAM users. These users have limited access to resources
and services. Now let's understand
the difference between root user and IAM user. Root user is the user that was created when we first
created the account. So if we login using
root credentials, we will have access to
the entire account. On the other hand, IAM users are created by the root user. We will talk more about
them in the later modules. To log in to AWS as the root user, you need the email ID and password that we used, and the multi-factor authentication that we enabled when we created the AWS account. However, to log in as an IAM user, you need a username, a password, and either a twelve-digit AWS account ID or an account alias. We have neither an IAM user nor an account alias, so let's log in with the
root user that we have created already and
create an account alias. Here we are on the
AWS sign-in page. I will enter the email
ID, then click Next. I will give my password. Then click on sign-in. As I have configured
multi-factor authentication, I will have to enter
a unique code in addition to my password
to be able to log in. So I will enter the code. Now, I am logged in
to the AWS console. Now that we are in the console, there are a few things
we should quickly check. One of them is account. Here you can see the
account settings which contains details of your account like
contact information, alternate contact for billing,
operations, and security. Remember the budget we set up in the last topic? We will quickly check our billing dashboard. Click on Billing dashboard. You will see the billing summary and other details including the bill for the current month, previous bills, and budget control. Although I believe you have set up a budget and
enabled an alert, still, I recommend
regularly checking this page to see if you
have any unwanted bills. Now, let's go back and create an account alias. Type IAM in the search box here, and click on IAM to open the Identity and Access Management service dashboard. And as you can see on the right-hand side here, we have an account ID, and below it is the account alias. Please note that by default the account alias is the same as the account ID. So here is our twelve-digit account ID, but remembering it might be a challenge. So we are going to create an alias that's much easier to remember. For that, click on Create, add your alias, for example thecloudadvisory2050, and then save changes. That will create an alias for you. We will use this account ID or alias every time we log in as an IAM user. Now, let's go back to
the user account link. To do that, just click
on the AWS logo. Next to this, we have another drop-down for
selecting the AWS region. So let's select a
region for our account. We are currently
in North Virginia; us-east-1 is the region code. Each region has a specific code, and we use this code when using the AWS CLI and SDK. We will talk about it in
the upcoming sessions. We have already discussed the
AWS global infrastructure, and you can see that there are many regions and supported regions in AWS. While logged into the AWS management console, select an AWS region to create your services in a specific region. This is the first thing you do when you create any new service, except a few which are
not Region sensitive. If we switch over and select
the Asia Pacific (Mumbai) region, any services we now create will be launched in the Mumbai region only. So it is crucial to decide and select the region where you want to create your AWS services. And importantly, if you create some AWS resources and you don't see those resources in the console, it means you created them in a different region and you will have to switch to that region to see them. Now let's move forward and see how to navigate to AWS services. The AWS management
console provides multiple ways for navigating to individual
service dashboards. In the search box on
the navigation bar, enter the name of the service. You will see the results. Choose the service you want
from the search results list. For example, let's go
to the S3 service. We can simply type
that in and we will get a list of all the services that include S3. In this case, we would be
looking here just for S3. And if we click on it, it will take us to the dashboard of that particular service. Next, we can also go back to the homepage by clicking
on the AWS logo. Next, choose services to open
a full list of services. Here on the upper
right of the page. Choose group to see the
services listed by category, or choose A-Z to see
an alphabetical listing. Then select the
service that you want. For example, scroll
down and select the storage category that
includes AWS Backup, EFS, S3, and many more. I will talk more about those
in one of the later modules. But this gives us a
high-level way of looking at the services
based on their category. Next is the recently
visited tab. Here you can see the recent services I have
visited within the console, including S3, RDS, ec2. So these are the services
that I recently worked with. In case you only work with a handful of services, this can be a very quick
way to simply click on these and go to the specific
services dashboards. That's all for this session.
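As a small supplement to this lesson, the account alias we created in the console can also be managed from the AWS CLI, which we install and configure in the next lessons. This is only a hedged sketch, and the alias below is a placeholder:

    # Create an account alias (it must be unique across all of AWS).
    aws iam create-account-alias --account-alias my-training-account-2050

    # List the alias currently set on the account.
    aws iam list-account-aliases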
9. Install AWS CLI: Welcome back students. We will continue
our discussion on how to interact
with AWS services. And in this demo video, we will install AWS
CLI on our system. Since I have a Mac, I will go with Mac
installation instructions. However, the process remains the same for other
operating systems, including Windows and Linux. So let's go to the browser and find the installation
documentation. In the search bar, I will type install AWS CLI on Mac. You can also search for your own operating system, like install AWS CLI on Linux or install AWS CLI on Windows. We will open the first link, installing and updating the latest version of the AWS CLI. This page has installation instructions for all three operating systems, including Linux, Mac
OS, and Windows. I will go for Mac OS
and follow the process. Let's scroll down. And here is the
installation instruction. We need the .pkg file, which is a graphical installer. So I will download the file from here. Once you download it, open it, and click on Continue, Continue, Continue, and Agree. Give permission to install for all the users on this computer. Click on Continue and click on Install. This will install the CLI on your system. Let's wait for
everything to be done. The system is writing the files. And finally, the installation
is now successful. Since I do not need
the installer anymore, I will move it to the trash. To verify the installation, I will open a terminal: go to the Search option and type terminal. Then I will type aws --version. If you have a working CLI on your system, it should return the version of the AWS executable. Let's wait for a little while. Here we have got the
answer to our command, which shows that everything
has been installed correctly. Please download the CLI on your system as we will be
using it in future lessons. In case of any issues, please have a look at this guide.
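For reference, this is roughly what the verification step looks like in the terminal; the exact version numbers on your machine will differ:

    aws --version
    # Example output (values are illustrative):
    # aws-cli/2.x.x Python/3.x.x Darwin/22.x.x exe/x86_64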
10. Using the AWS CLI: Hello students, welcome back. We will continue
our discussion on how to interact
with AWS services. And in this demo, you
will learn how to use the AWS CLI to interact
with AWS services. So let's get started. Let me open the
terminal on my laptop. First of all, make sure you have already installed the AWS CLI. You can check by typing aws --version in your terminal, and it will give back the AWS CLI version. Second, you need to authenticate the AWS CLI to access AWS services. And as we discussed, the AWS CLI uses access
keys for authentication. We will cover authentication
and authorization in detail in the identity and
access management topic. But for this demo, let's go to the AWS
management console and create an access key. Here. Click on Account, then
security credentials. We are now in my security
credentials access key. And here we can create
new access keys. Access keys are critical because they are used to
control authentication and authorization when using the CLI and the SDK to
access the AWS account. Now, as a note, we will be creating
access keys here. However, I am going to delete these keys immediately
after this demo. You should also not use root user access keys
on a day-to-day basis. This is not a good practice. Again, we will learn this in detail in the identity and
access management topic. For now, let's click on Create new access key and
then Show Access Key. Here we have both our access key
and secret access key. This is what we will
be using to access our AWS resources
through the CLI. Let's copy both the
keys in a text file. You can download it as well by clicking Download and allow. Now let's go back to AWS CLI
and type aws configure in the terminal. Now first enter the access key and secret access key. Now enter the default region name; in this case, let's choose ap-south-1. At last, we will choose the default output format; in this case, let's choose the JSON format. Once we have all of that
in place, hit Enter. We have successfully configured the AWS CLI, which means we should be able to use the AWS CLI to get information about the AWS resources. Let's quickly check it. We will run a command that will list the S3 buckets. Let's type aws s3 ls. And it will return a list of all of the different S3 buckets that I have created
within this account. If you don't have any bucket, it will not return anything. So to quickly review what we
have done in this lesson, we created access keys for
our users, with a caution that we don't generally want to create access keys for the root user. Still, we created one for this demo. But as long as you delete it, that should be fine. And finally, we accessed S3 to verify that we could
access the AWS resources. That's all for this video.
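To summarize the demo, the whole terminal session looks roughly like this. The key values are placeholders, and the region and output format are simply the ones chosen above:

    aws configure
    # AWS Access Key ID [None]:     AKIAXXXXXXXXXXXXXXXX   (placeholder)
    # AWS Secret Access Key [None]: ****************************
    # Default region name [None]:   ap-south-1
    # Default output format [None]: json

    # Verify access by listing S3 buckets; prints nothing if you have no buckets.
    aws s3 ls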
11. AWS Regions and Availability Zones: Hello students, welcome back. In this video, we will learn
about the AWS regions, different availability
zones, and the importance of regions
and availability zones. So let's get started. Suppose you want to host
our web application. For that, you would need
your own datacenter, right? Now that you know about AWS and other Cloud services, you don't need to set up your own datacenter and can easily host your
application in the AWS Cloud. But let's assume you are still using an on-premise datacenter. Now here is a question. What if a disaster
hits your datacenter? That's a valid question, right? Anything can happen. And since you own and
manage the data center, you will have to figure out what you will do in such
an extreme situation. One way to avoid a
complete shutdown of your application is by running a second datacenter at
some other location. So when one goes down, the other will be
there as a backup. But maintaining
another datacenter will double the hardware
and maintenance costs. Don't worry. That
was just an example. AWS has a better solution
to this possibility, which is AWS region. An AWS region is nothing but the geographic
locations worldwide where different datacenters are clustered. That means AWS maintains multiple datacenters
in one place. According to AWS standards, there must be at least two
datacenters in one region. And each datacenter should
have redundant electricity, cooling, heating, networking,
and connectivity. Once again, all of
these regions are connected through the
AWS backbone network, which is basically a
very high speed network. Each of these regions has
different names and codes. These names are based on their worldwide
geographical location. Geographic location means physical location
like Mumbai, Tokyo, Frankfurt, North
America, South America, China, South Africa,
and the Middle East. These are some examples. Take Mumbai, ap-south-1. Mumbai is a physical location, right? So they call it the Mumbai Region, and it has the code ap-south-1. The same goes for Singapore and other locations: Singapore is ap-southeast-1, Sydney is ap-southeast-2, and Cape Town is af-south-1. So when you have an AWS account, you can choose in which region you want to deploy your AWS services, whether you want to launch
it in Mumbai or in Tokyo. This also depends
on different factors that I will talk about going
forward in this video. So in simple words, you can say AWS is a Cloud service provider and it has data centers
around the world. As of now, there are 26 geographic regions with 84 availability zones
across the world. And AWS keeps
adding new regions. So by the time you are
watching this session, these numbers may have changed. So you can look at
the link provided in the resources section under this video and check the
latest numbers. Let's move forward and talk
about availability zones. Now, what are availability zones? An availability zone is one or more individually separated and distinct datacenters with redundant power, networking, and connectivity in an AWS region. In other words, it is nothing but an isolated datacenter within a region; the only difference is that it has redundant power, networking, and connectivity. So each availability zone has a set of datacenters isolated from other availability zones. So an availability zone is nothing but a group of datacenters. And as per the AWS standard, an AWS region must have at least two availability zones. So take an example of
the Mumbai region. It has three availability zones. As we have seen earlier, every region has a specific code. Similarly, the availability zones also have corresponding codes, which consist of the region code followed by a letter of the alphabet. If you remember, the Mumbai region code is ap-south-1. Now let us see the codes for its availability zones: availability zone one has the code ap-south-1a, availability zone two has the code ap-south-1b, and the third availability zone has the code ap-south-1c. I hope it's clear that the region code followed by a letter makes the availability zone code. You can also list these codes from the CLI, as shown in the sketch below.
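If you already have the AWS CLI configured (we set it up in an earlier lesson), here is a hedged sketch of how you could see these codes for yourself:

    # List the region codes enabled for your account.
    aws ec2 describe-regions --query 'Regions[].RegionName' --output table

    # List the availability zone codes in the Mumbai region (ap-south-1).
    aws ec2 describe-availability-zones \
        --region ap-south-1 \
        --query 'AvailabilityZones[].ZoneName' \
        --output table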
So by now, we know every AWS region has two or more availability zones, just like Mumbai has three availability zones, North Virginia has even more, and so on. Now let us understand why these availability zones
are important to us. So we know that one region consists of multiple
availability zones. Here. Let me tell you that every availability June is
not a single datacenter, but consists of clusters
of data centers. Some availability Jones
can have two datacenters, while others can
have three or more. All these availability
zones are located somewhere between 5200
kilometers apart. Coming back to the importance
of availability Jones, suppose if one availability
zone goes down, the entire cluster of data
centers would go down there. But datacenters in other availability zones
will continue working. And hence, you will not face a complete shutdown of your
application or website. Aws infrastructure
is designed to ensure the high availability
of the applications. And if you think, how does it matter to
you, let me tell you. Suppose you want to deploy your application in
the Mumbai region. And as you know,
a Moon Bay region has three different
availability zones. So the first thing you
would do is launch a server in any of the Jones
as per your requirement. Once the server is launched, you can now access it
over the Internet. You do not need
to worry about in which availability zone you
have launched your server, as long as you can access
it over the Internet. But suppose that
availability zone in which you launched your server
goes down completely. That means application
will also be done. But that's not it. You won't be able to access your application data as well. If you don't have that data, you won't be able to
revive your application. So to avoid such situations, AWS has availability Jones. Let's take a look at the advantages of
availability Jones. The first benefit is the
higher availability. When you design
your applications, you make sure that you have put all your servers and databases in multiple
availability zones. So if one goes down, you have another zone
to cover for a second. If you remember, we talked
about the distance between two zones being somewhere
between 5200 kilometer. Now suppose you have a
primary database in one John. In that case, you can have the replica of this database
in the secondary Jones. It is also possible to have synchronous replication
between two databases. This will copy all your data in both the databases
simultaneously. Now, if one John goes down, we have another one
to operate normally, which is called fault-tolerance. Please note, this is not
a default design by AWS. You will have to architect
your application in multiple availability zones to achieve high availability
and fault tolerance. Students, I hope the
availability zone is clear. Now, let us now have a look at a sample architecture
in which we will see how do we use availability Jones to make our application
highly available. Suppose in an AWS region, we have two availability
zones that we will call availability zone one
and availability zone two. The first thing we
would do is launch an EC2 server and the corresponding database in
the availability zone one. Similarly, we would launch another web server and corresponding database in
the availability zone. Now your application and
database both are highly available and they can sustain
if one John goes down. Now students, in a nutshell, the AWS region is the
geographic location across the globe and the availability Joan is the cluster
of data centers. Particular region.
Alright, so now let's move forward and understand why is there a need for so
many regions today. Of course, it gives you
the flexibility to deploy your application
in any AWS region that is beneficial to you. Other than that,
the main reason is the low latency access
to applications. If your user base is in the US, deploying your application
in the US only makes sense. Similarly, if you have your user base in India, you will choose the Mumbai region. The second reason is
regulatory compliance. Different countries
have different rules for regulating the data. Therefore, having
multiple regions gives you the liberty to choose what works best for you. The third reason is to sustain any type of disaster. All these AWS regions are across the globe, and the minimum distance between any two regions should be 300 kilometers, which lets one region serve as a disaster recovery site for another. This is to make sure that your application runs in one of the sites, and the other site should be at least 300 kilometers apart to ensure that they are not part of the same systems. Hence, you can also leverage the benefit
of multiple regions. Apart from the
latency requirements, regulatory compliance,
and disaster recovery, there are some additional
considerations that you can also look at while
using multiple regions. For example, for the
global application, organizations want to provide the best user experience to
their users across the globe. One way to achieve
that is by deploying your application in the region
closer to your audience. For example, a US-based application will use the Mumbai region to provide Indian users with a better experience. Cost is another benefit of
using multiple regions. Since every country has
a different economy, the pricing of using AWS would vary from
country to country. Therefore, you will get cheaper services
in some countries. And as an organization, that's a great way to save money. Next, you have a
reduced blast radius. Many people and
organizations want to leverage another region for disaster recovery in case a complete region goes down. Many times, they also distribute their workloads across regions so that if something happens to one region, another region is active, and that's how they reduce the blast radius of a disaster. So students, that was all about the AWS regions and availability zones. Make sure that you understand everything well, as they are important from an interview point of view. That's all for this video.
12. AWS Edge Locations: Hello students, welcome back. In the last video, we understood the AWS regions
and availability zones. In this video, we will discuss another important aspect of
Amazon global infrastructure, which is edge location. Let's start with our
understanding of the AWS region and how do you choose a region
for your application? There are many factors behind
choosing an AWS region. The primary criterion is closeness to your
target customers. We always choose
a region which is nearest to our target customers. But what if you have
customers worldwide? Just assume you have a video
streaming application on which users can publish the time-lapse of the
Indian landscape. You have hosted your application and its data in the Mumbai region, since your customers are from India. But after a few months, you saw a good amount of traffic coming from Japan. As of now, you are serving all Japanese customer requests from the Mumbai region only. But due to the large distance between India and Japan, your Japanese users may not have a good user experience because of latency issues. Now you can properly roll out your application in Japan by copying your data
in a Cloud region in Japan, Tokyo in case of AWS. And that's how you can
serve your customers there. This was an example of caching. By definition, caching is the process of storing a copy of data in a temporary or cache storage location so it can be accessed more quickly. So in this case, you can cache your data in the Tokyo datacenter and give a smooth user experience to Japanese customers. Another concept that you need to understand is the content
delivery network, commonly known as CDN. It is nothing but a technical term we use for caching a copy of data in a datacenter closer to the customers. Please note, CDN is not an AWS-specific term. A content delivery network, or CDN, caches content such as images, videos, or webpages in proxy servers that are located closer to end users than origin servers. Again, these proxy servers live in datacenters, and these datacenters are called edge locations. Now you must be thinking,
between region and edge location as both our
datacenters, right? Foster difference is
edge locations are smaller datacenters and they are available across all big
cities in the world. If you compare the
total number of edge locations is more than
12 times of number of region. So you have more
options to cache your data and solve it
quickly to your customers. Second difference is edge
locations are separate from regions and are
located at other locations. This allows you to
push your content from regions to a collection of edge locations
around the world. Difference is AWS edge
location host I spatial service to deliver this content faster is called
Amazon CloudFront. Whereas AWS region hosts
almost all AWS services. Other than CloudFront. Aws edge locations also run
our domain name service, or DNS, known as
Amazon Route 53. We will cover Cloudfront and
Route 53 in later modules. Alright, there is so
much more than this about AWS global infrastructure. But let's keep it
simple and take a quick look at the key points about AWS global infrastructure. Number one, regions are geographically isolated
locations where you can access AWS services required
to run your application. Number two, regions contain availability zones that
are physically separated buildings with their own power, heating, cooling, and network. Availability zones help you solve high availability and disaster recovery scenarios without any additional effort. Number three, AWS edge locations run Amazon CloudFront to help get content closer to your customers no matter where they are in the world. That's all for this video.
13. What is Identity & Access Management?: Hello, students. Welcome back. Do you know why your
educational institutes give you the ID card? Because an institute's
ID cards give them control over who can
access their facilities. Your ID card acts as proof of you studying
in that college, and hence you can use
their facilities. Similarly, AWS has
a service that helps companies manage their
identity in the Cloud. In this video, we will learn the identity and access
management service concepts related to IAM, identification, authentication,
and authorization. Let's start with what is
identity and access management? Identity and access management is a framework that ensures that the right people in
your organization can access the only service
they need to do their jobs. Remember, it is not a
good practice to use your root account for everything
you and your team to do, as it could lead to some
serious security issues. Let's take an analogy here. Suppose you are working
in an organization, and every day when you
enter the main entrance, you have to show your ID card to the security guard or you need
to swipe your login card. In this way, your organization
ensures that you are the right person who can access the
organization's premises. And this is exactly
how IAM works. When you show your ID card, you prove that you
are the right person. You as a person are an identity. Now when you show your ID card to the security guard, he looks at your ID card photo and your face and verifies whether both match. Once verified successfully, you get access. Through this whole process, you as a person get access to the right premises.
That's not it. This ID card serves
as proof to access a lot of other services and facilities in
your organization. For instance, some people can only use the
account section, while some can only use
human resources and so on. There is a well defined
system of who can use what. In the same way, Identity and Access Management, or IAM in short, is a service in AWS that makes sure that the right person has access to the right thing. AWS IAM provides access control across all of AWS
services and resources. You can specify who
can access which services and resources and
under which conditions. There is also the
concept of IAM policy, which you can use to
manage permissions for your workforce and systems to ensure least
privilege permissions. If it is not making
sense, then don't worry. I am going to cover all these in the next
couple of topics. For now, let's understand three important concepts of identity and access management, identification, authorization,
and authentication. These three terms
are closely related, but they are not the same. Now let's understand them. Identification is the ability to identify a user uniquely. It means how a person or
system can be identified. In the last example, you identify yourself
with an ID card. Right? And that varies
from system to system. When you log into
your bank account, you use username and password. You identify yourself to the bank portal with your
username and password. Your identification is your username and password. Identification, basically, is about identifying every unique user who is accessing your services or facilities. The second concept speaks
about authenticity. Authentication is the process of recognizing a user's identity. When you show your ID card
to the security guard, he verifies your ID card
photo with your face. That's authentication. When you login to
your bank account, you provide your
username and password. The banking backend system
matches your username and password against the
username and passwords stored in their system. The system makes sure you are given the right
username and password. If you know the correct
username and password, you are the owner
of the account. If you have been given
a unique identity, there has to be a
system to recognize it every time you use
your unique identity. That process of recognizing the authenticity
of your identity is known as authentication. Finally, authorization
is the process of giving someone permissions
to have access to something. Once you are authenticated, you can enter the office premises and go to your office area. But you cannot go to the canteen kitchen, because you are not allowed, or we can say you are not authorized, to access the canteen kitchen area
as per your job function. Now let's understand identity
and access management in respect of AWS. Aws identity and
Access management is an AWS service that helps an organization to
manage access of their AWS account and services for their
organization people. It provides fine
grained access control across all of AWS services. Fine grained means you
can specify who can access which services and resources and under
which conditions. If you want person to only access AWS EC two
service and person Y to access only
S three service, that's completely possible. This will be done
with IAM policy.
14. Users, Groups & Roles: Hello, students. Welcome back. In the last video, we
understood AWS IAM. In this lesson, we
are going to be talking about the
elements of IAM, which includes user
group and roles. Let's start with users. A user is a person who utilizes an AWS service in
an AWS account. As we discussed in
the previous lesson, AWS has two types of users, root user and IAM user. The root user name itself gives a clue that this is
a special user. It is created when the AWS account is created. When you create an AWS account, AWS will create one
identity or user for you. That identity user is
called the AWS root user. This user has access to all AWS services and
resources in the account. Some important points
about AWS root users. You can sign in as
the root user using the email address and password that you used
to create the account. Root user has complete access to all AWS services and
resources in the account. Hence, it is not a good practice to use the root user
for everyday tasks. You should never, ever share root user credentials
with anyone. Only a very few reliable people should have the
root credentials. Now we know that we cannot use root user for
our day to day job. That's where the IAM user
comes into the picture. An IAM user is an entity that you, the root user, create in an AWS
account to represent the user who interacts with AWS resources in your account. Now let's understand
this with an example. Let's say you are
the team lead in an organization and you
have an AWS account. You have two developers, Ram and Sham, and two administrators, Seeta and Geeta, in your team. Your team needs access to the AWS account to host an application. Because you don't want to give them access to your root account, you can create IAM users
for your team members and provide them with the access to the services that they
need for their job. Here, each team member
represents one IAM user, as each team member needs to interact with an AWS
account to do their job. You will create four IAM users
for Ram, Sham, Seeta, and Geeta. Okay, now we understand the IAM user. Let's look at the way
IAM user works and how we assign permissions
to different IAM users. When you create a user, the user has a
username password, and access keys
that you share with the respective persons to
access the AWS account. Username and password
will be used to access AWS management console and access key for programmatic
access with AWS CLI, each user have been added
to a single AWS account. Each user has their
own credentials to access the AWS account. Now, the important question is, what kind of AWS
resources they can use? Well, any new IAM user does not have any default
permission to access your AWS resources. To provide your IAM users with the necessary access, you need to give them the required permissions. You can do it by adding IAM policies to each IAM user. An IAM policy defines the
permissions that are given to any user interacting
with an AWS account. We are going to cover IAM
policy in the next topic. In a nutshell, you
create IAM users, attach policies to
give permissions, and the user can use these credentials to
access these resources. An important point
here to note is that an IAM user does not have to represent an actual person. It could be an application. Also, let's say you have an application on premises which needs to access an AWS service. You can configure the access key inside the application to access AWS services. That is another important use case of access keys. Okay, let us now
understand IAM group. An IAM group is a collection of users and permissions
assigned to those users. Let's go back to
the same example. In your team, you have two developers, Ram and Sham, and two administrators, Seeta and Geeta. Right? Both developers will be doing similar work and accessing
the same AWS services. And the same goes for both
administrators as well. Instead of assigning permissions
to individual users, you can group these as well and assign permissions
to that group at once. In our case, you can
create two groups. A group for developers and
a group for administrators. An IAM group provides an easy way to manage permissions for users according to their
job functions. When you create a user, you assign a policy to
the individual user. But in the case of a group, you add a policy
to the group and permission will apply to
all users in that group. Let's take another good
use case for IAM group. As you know, any user in
a specific user group automatically has
the permissions that are assigned
to the user group. If a new user joins your organization and needs
administrator privileges, you can assign the
appropriate permissions by just adding the new user to that administrator
user group. Similarly, if an
existing employee is moving to a new project, in a new role in the
same organization, instead of editing that
user's permission, you can remove him or her from the old user group and add him or her to the appropriate
new user groups. The IAM group is a way to attach policies to multiple users at one time. When you attach an identity-based policy to a user group, all of the users in the user group receive the permissions of the user group. Now let's understand the IAM role. An IAM role is an IAM
identity that you can create in your account
that has specific permissions. An IAM role is similar to
an IAM user attached with IAM policy that determines what a role can and
cannot do in AWS. Now you must be thinking, what is the difference
between an IAM role and a user? Right, let me explain. An IAM user is a unique person who has a username, password, and access keys, with permissions attached to them. However, an IAM role does not have a username, password, or access keys. An IAM role cannot be directly linked to a
person or a service. Instead, it can be assumed by a person or resource
for a definite session. What does it mean? Let me explain with an example. Let's say you have an application running on an EC2 instance, and the EC2 instance needs to access an image from S3. In this case, the EC2 instance assumes the role and, based on the policy, it will access the image from S3. As I mentioned, roles don't have access keys or credentials associated with them. The credentials are temporary and dynamically assigned by AWS. AWS gives the temporary credentials to the role to complete the task when it is needed. That's all for this video.
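One optional extra before the next lesson: on an EC2 instance that has a role attached, code needs no stored keys at all. Here is a minimal boto3 sketch of that idea; the bucket and object names are made up, and it assumes the attached role allows s3:GetObject.

import boto3

# No access keys are configured anywhere in this code -- boto3 automatically
# picks up the role's temporary credentials from the instance metadata.
s3 = boto3.client("s3")

obj = s3.get_object(Bucket="my-bucket", Key="demo.jpg")
print(obj["ContentLength"], "bytes read using temporary role credentials")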
15. Creating an IAM User & Group: Hello students. Welcome back. In this lesson, we will
create IAM users and groups. By the end of this lab, you will be able to
create these IAM entities and comfortably navigate
through AWS IAM console. Let's get started. Here we are at the AWS
console home page. Let's navigate to IAM by using the search option at the
top of the AWS console. As you can see, you can use the left navigation bar to navigate to user groups, policies, roles, and identity providers. For the scope of this lab, we will only work with users, groups, and their policies. Let's start by creating
two users, Ram and Sham. In our example, Ram is a developer and Sham is
an AWS administrator. You can navigate to
the user's page by clicking the user options
in the IAM dashboard. This is the user's
page where you will see all the users created
for your account. Now let's create
our first AWS user by clicking on the Add users button. We start by giving a
username to your user. Let us call this user Ram. When creating more than one
user in the same properties, you can simply add
another user name by clicking on this Add
another User button. However, for this example, we will do this one by one. The next option we see is
the AWS credential type. You should select Access key - Programmatic access to allow this user to use AWS CLI commands, the AWS SDKs, or the AWS API. We do not need this for our example. We will only select Password - AWS Management Console access, which will allow user Ram to access AWS via the Management Console. Selecting this gives us some more options. You can have AWS create an auto-generated
password or you can manually add a simple
password for this user. For our example, we will let AWS give us an auto
generated password. The next option is
require password reset. You should check
this box if you want your user to reset their password when they
first access the console. We will uncheck for this demo, Click on Next Permissions. Here we have three options. You can add users
to a user group. You can copy permissions from an existing user that you
already have in your account, or you can attach
policies directly. We have no users or user
groups in our account, we have no options. But if you click
the third option, attach existing
policies directly, you will see a whole
list of policies even if you have never created
a policy in your account. These are the AWS
managed policies. This means that these
policies are managed by AWS and that if any
changes is required, it will be made by AWS. These are highly useful for simple applications where
you don't want to create and maintain your own policies and want to let AWS handle all this for you. We can select a policy here, like Amazon S3 Read Only Access, and give this permission to our user. Let us first create a
user with no permissions. Click on Next Tags. Here we can add
tags to our users. Let us add an example
tag, Department Finance. Then click on Next, which will open up
the review page, where you will see the final
details about your user. Click on Create User. This takes us to the final
step of creating an AWS user. Here we can see that the
user Ram was created. You can even view the
password of this user. Click Download CSV,
which will download a CSV file with user
credentials to your computer. It is important to know
that you should save these credentials somewhere
after creating the user. If you navigate away from this page, you will no longer be able to get the credentials again. Click on Close, and we can see that our user Ram is added to the users page. Let us create another user, Sham. We follow the same process. Add the username Sham. Allow AWS access via the console. Leave all values at their defaults except require
password reset. And click next, we will not
be adding any permissions. Click Next, we will
add the sample tag, same as we did for user Ram: Department = Finance. Click Next and click Create user. Download the CSV to save the credentials. Please note again that you won't be able to get the credentials after closing this page. Now that we have
created our two users, Ram and Sham, let us
create our users group. Click on User Groups
and then click on Create Group to create
our first user group, add the name developers. We can add users to this
group from the table below. Let us add Ram, who is a
developer to this group. The next step is to add
policies to this group. We want our developers
to be able to fully access EC2 and S3. So we will check Amazon EC2 Full Access and Amazon S3 Full Access from the policies table here and click on Create Group. We now have a developers group that has only one user. Let us create an
administrator group as well. Click Create Group and enter the group name, Administrators. This time, we will not be adding Sham to this group right now. Let us create a group
without any users. Let us now attach
policies to this group. We will attach the
administrator access policy. Click Create Group, and we can see that our
group are created. But the administrators
group does not have any users as of now. Now let us add Sham to this. Go to the user's page. It has both the
users Ram and Sham. Click on Sham, you can see that this user does not
have any permissions. Move from the Permissions tab to the Groups tab. This will give you a list of user groups. Let's add Sham to the administrators group here. Now if you go back to the Permissions tab, you can see that the permission has been added. The user Sham now has the AdministratorAccess policy attached from the administrators group. We have created two groups, provided two users with AWS access, and assigned AWS managed policies to these groups. That's all I wanted to cover in this session.
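As an optional reference, the same users and groups from this demo could also be created programmatically. Below is a minimal boto3 sketch; it assumes administrator credentials are configured, and it skips console passwords (those would be added with create_login_profile).

import boto3

iam = boto3.client("iam")

# Create the two users from the demo.
for name in ("Ram", "Sham"):
    iam.create_user(UserName=name)

# Developers group with full EC2 and S3 access via AWS managed policies.
iam.create_group(GroupName="developers")
for arn in (
    "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
):
    iam.attach_group_policy(GroupName="developers", PolicyArn=arn)
iam.add_user_to_group(GroupName="developers", UserName="Ram")

# Administrators group, then add Sham to it.
iam.create_group(GroupName="administrators")
iam.attach_group_policy(
    GroupName="administrators",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
iam.add_user_to_group(GroupName="administrators", UserName="Sham")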
16. Login with IAM User & Create Role: Hello students. Welcome back. In this lesson we
will log in with IAM user to the AWS
console and create role. I assume you have
completed the last demo and you have a user named
Sham and his password. Let's get started. Let us first login as administrator Sham and
use it to create a role. Now let's open the CSV file you downloaded while
creating user Sham. Here you see this contains the username, password, access key ID, secret access key, and console login link. As we have not selected programmatic access, the access key and secret key are empty. Let's copy the console login link and open it in the browser. Here you see the account
alias is already filled, and this is something we have
created in another demo. You can also use account ID. In case you remember, add your username and
password as shown in the CSV. This will take us
to the AWS console. Now we can see the AWS
console as before, but if you see the top right
corner of your screen, you can see that we are
logged in as a user sham. Navigate to the IAM console. Again, click on the Roles option from the left dashboard and click on the Create Role button. The first option we have is
to select the trusted entity. This means we should select
what this role is for. Who or which service is
going to use this role. The first option is
the AWS service. You should choose this when you want to assign this role to AWS services like EC2, Lambda, or ECS, which is what we will do here. The next is the AWS account. This is used to
create a role that can be accessed from
other AWS accounts. If you want someone
to be able to login from another account to
yours, choose this option. The next option is web identity. This is when you want users to be able to assume this role and access the AWS console via a web identity like a Google or Facebook login. The next option is SAML 2.0 federation. SAML is a user federation standard that is used by big organizations to allow their employees access to different services in the organization. This can be integrated with AWS to give access
to the AWS console. The final option we have
is a custom trust policy. You can use this to define a JSON trust policy. Don't worry if the
last three options did not make sense to you. The first two options are
important for this course. Choose AWS service
and select EC2. From the options below, we will create a role that can be attached to an
EC2 instance. Click next. Here we can see a list of policies we can attach to this role. Attach the Amazon S3 Full Access policy. And click next,
enter the role name. You see a description already filled with a
meaningful message: Allow EC2 instances to call AWS services on your behalf. Review the role and
click on Create Role. It will take a few seconds. Now you can see the role you
created on the roles page. We will be using this
role in upcoming demos. Students, IAM users are the best way to give access to your AWS account, and even you should use one. I recommend creating an IAM user for yourself with administrator permissions and using it going forward. Also make sure to enable MFA for IAM users as well. I believe you already deleted your root user access key and secret key, which we created in the last demo. At this stage, you know you can also create an access key and secret key for your IAM user as well. You can find the link in the resource section. That's all for this session.
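For reference, the role created in this demo can also be created programmatically. A minimal boto3 sketch follows; the role name is an illustrative assumption.

import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the EC2 service may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="demo-ec2-s3-role",  # illustrative name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Allows EC2 instances to call AWS services on your behalf",
)

# Give the role full S3 access, as in the console demo.
iam.attach_role_policy(
    RoleName="demo-ec2-s3-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)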
17. IAM Policies: Welcome back students. The last session was all about IAM users, groups, and roles. We also briefly discussed IAM policies to
give permissions, but that's not enough. Understanding
policies is crucial to effectively working with IAM. In this lesson, we will talk about IAM policies in detail, including their
definition and types. Let's get started. Let
me ask you a question. What comes to your mind when
you hear the word policy? If you thought about rules or guidelines, you got it right. An IAM policy is more or
less the same thing. In the previous session, you would have
noticed that we gave certain permissions
to users groups and roles that we created. Then only they could perform the actions as per those
permissions granted. These permissions
are nothing but policies that we are going to
talk about in this session. By definition, IAM
policy is an entity that, when attached to any identity or resource, defines
its permissions. As I earlier said, policies contain
permissions that determines whether a request
is allowed or denied. It means whatever action is
performed by an identity, the permission is
given by its policy. You can use the IAM
policy when you have to set the permissions for
an identity in IAM. Now let's understand this
with the same example. You have a team of
two developers, Ram and Sham, and two
administrators, Seeta and Geeta. You created two groups,
administrator group. As per the job role
and permission, the developers in
your organization can start and stop EC2 instances, and the administrators can create and delete EC2 instances. EC2 is nothing but a compute service in AWS. We will talk about it in the coming modules. You will create two policies. One policy for the developers that will allow them to list, start, and stop EC2 instances. Another policy for the administrators which would enable them to create and delete EC2 instances. You will attach these policies
to the respective groups. What would happen here
is that the policy would be applied accordingly to
everyone in these groups. That means both Ram
and Sham will have permissions as developers, and Seeta and Geeta as administrators. Hence, in this example, by attaching the developer policy
to the developer group, we are allowing the
members in the group to perform specified actions
and nothing more than that. Now we know how policies work. Let's understand the different
types of IAM policies. If we go back to the
policy definition again, IAM policy is an entity that, when attached to any identity or resource, defines
its permissions. According to the definition, a policy can be attached to either identities or resources. As you already know,
identity is nothing but IAM users, groups, and roles. A resource means an AWS resource like an EC2 instance, an S3 bucket, or Lambda, which we are going to
cover in later modules. Based on the two ways a
policy can be attached. It is of two types, identity based policy and
resource based policy. Now we will understand
both one by one. Identity based
policies are attached to an identity, that is, an IAM user, group, or role. These policies control what actions an identity can perform on which resources
and under what conditions. Please note that these
policies are user specific, which allows user to access the AWS resources in
their own accounts. An IAM user group
or role can have multiple policies that together represent the permissions
for that particular user. For example, you can attach the policy to the IAM user named Ram stating that
he is allowed to perform the Amazon EC2 RunInstances action. You can further state that Ram is allowed to get objects from an Amazon S3 bucket named my-bucket. Combining both permissions, Ram can only launch EC2 instances and access my-bucket. The other type
is resource based policy. These policies are attached
to an AWS resource, such as an Amazon EC2 instance, an S3 bucket, et cetera. These policies control what actions a specified AWS service or identity can perform on that resource and
under what conditions. Let's understand
with an example. At times there will
be scenarios where an EC2 instance would like to access an S3 bucket. In that case, you can attach a resource-based policy to the S3 bucket that gives the EC2 instance the required permission to access S3. Now further, identity based
policies are of two types, managed policies and
inline policies. The managed policy is
further classified into AWS managed policies and customer managed policies. Now let's look at
them one by one. AWS managed policies are policies that are created and managed by AWS. As the name suggests, these policies are created and managed by AWS, and you cannot make any changes to them. For example, Amazon S3 Read Only Access or S3 Full Access. Essentially, they are designed
to align closely with commonly used IT
industry job functions like administrator,
developer, or reader. You can give S3 full access permission to the administrator and S3 read-only access
to the auditor. The objective is
to make granting permissions for these
standard job functions easy. AWS managed policies
cannot be changed and AWS will manage and update
the policies as necessary. Customer managed policies are created and managed
by the customer. Basically, as a customer, you can create and
manage these policies. If you want a fully
controlled policy that you can maintain, you can use a customer
managed policy, which you create and manage
in your AWS account. It provides more precise control over your policies than
AWS managed policy. Another advantage of
customer managed policy is that you can attach to multiple entities
within your account, making it much easier to scale. For example, say you have four users in your account, from different teams, who all need the same type of Amazon EC2 read-only access. You can create one
customer managed policy and attach it to all four users. If you need to
change that policy, you can change it in one
place, and the change applies to all of them. This gives you built-in and centralized change management. For these customer managed policies,
and you can manage permissions at any level
you feel is necessary. Next and last is inline policy. Inline policy is directly
attached to the user or group. It maintains a strict one to one relationship with
policy and identity. That means an inline policy is attached to a single entity. It is generally created
when you create an identity and deleted when
you delete the identity. It is beneficial
when you have to give permission to a
temporary user or group. You create an inline policy while creating a user or group. And when you delete the user, the policy will also be deleted. All right, that's pretty
much about the policies. Let's quickly summarize what
we learned in this session. A policy is basically a set of permissions that decides what actions can be taken in an AWS account. We learned about the different
types of IAM policies, which are identity
based policies and resource based policies. Identity based policies are further divided into two types, managed policies and
inline policies. Managed policies could be AWS managed or customer managed.
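To make the resource-based flavour concrete, here is a minimal sketch of attaching a bucket policy with boto3. The account ID, role name, and bucket name are made-up placeholders, not values from the course.

import json
import boto3

s3 = boto3.client("s3")

# Resource-based policy attached to the bucket itself: it allows a specific
# IAM role (for example, one assumed by an EC2 instance) to read objects.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/demo-ec2-s3-role"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(bucket_policy))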
18. IAM Best Practices: Hello, students. Welcome back. In this video, we will learn a few IAM best practices to ensure the security
of our AWS accounts. Let's get started. The first IAM best practice is to avoid sharing
login credentials. Instead of sharing
root user credentials with different users
in your organization. You should create IAM users for individuals who are going
to use your AWS resources. Let's understand this
with an example. Suppose you have two developers and two administrators
in your team. What people sometimes
do is create one IAM user and share user credentials with
all these four people. But it is not a good practice. Instead, we should create four different
users according to the roles of these team members and share the user
credentials with them. The objective here is that
every user should have their own IAM credentials
to access your account. It does not matter whether they are accessing
the account through management console or
using AWS CLI or any SDK. The next best practice
is to create groups. It is always recommended
to create groups and assign only the required
permissions to these groups. That means if there are multiple users in
your organization with the same job role, instead of assigning permissions
to individual users, you can create a group, assign a permission
to this group, and add all the users who have the same job
role in the group. Every user will assume
these permissions. Let's understand this
with the same example. Suppose you have two developers and two administrators
in the team. Instead of giving permissions to these four individual users, you should create two groups. One for the administrators and another for the developers, and add that required
permissions to these groups. You can now add users to
their respective groups. The users will
automatically assume the permissions assigned to
their respective groups. This practice will
help you manage the permissions at
the group level instead of the user level. Let's move to the next best practice, which is permissions. AWS recommends that we should always create permissions for only what the user needs according to their role. These permissions are known as least privilege permissions. For instance, if a user's job is to start and stop EC2 instances, then instead of giving full access to EC2, we should only give permission to start and stop EC2 instances. We should always create least privilege permissions and policies while
designing permission. Now going forward, the one more best
practice is auditing. Aws says that we
should always enable cloud trail inside each
and every AWS account. If you don't know Cloud Trail, it is an AWS service that's free of cost,
except for storage. Whenever a user interacts
with your AWS account, Cloud Trail will log each and every activity which eventually will help you during audit. We will learn Cloud Trail in detail in AWS security
and Management module. Let's understand
with an example. Say a user logged into
your AWS account, created an EC two instance
and then terminated it. Aws Cloudtrail will log
all the three activities. This way you will
have a record of all the activities that users are doing
inside your account. The next best practice in the list is adding
password policies. Aws says that you should always configure
password policies, which we have done in
the previous video. Password policies ensures that every user has a
strong password. The next best practice is MFA, multi factor
authentication. You should always enable multi factor authentication
for privileged users. I would recommend that you
enforce MFA for every user. You can create a policy and attach that policy
to all groups. Now, whenever any users
log into an AWS account, they will have to authenticate
their identity twice. Then only they will be able to do anything in your account. This is very good
practice to ensure your account and AWS
resources are secure. The next best practice is to rotate the login credentials. Everyone accessing
your AWS account needs to change their user credentials every now and then. You can even enforce
this policy on the users that they will have to rotate their credentials
every two months, or three months, or six months. This is a part of your
password policy inside your AWS account that we have been seen in
the previous demo. Under this policy, user will not only have to rotate
their passwords, but the access and
secret keys as well. Last but not least is root user. You should not use root
user for your day to day activities as it might expose
your root user credentials. Additionally, you should limit the use of the root user. As soon as you have an AWS account, store the root user credentials somewhere private and create an IAM user, even for yourself. Students, I hope you now understand that account security is a real issue and that you should employ these practices to safeguard your account from any possible threat. That's all for this video.
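As an optional extra, the password-policy and rotation practices above can be enforced in a single API call. Here is a minimal boto3 sketch, with example values you would tune to your own policy.

import boto3

iam = boto3.client("iam")

# Enforce a strong password policy and 90-day rotation for all IAM users.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=5,
)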
19. Overview of Amazon S3: Welcome back. In this video, we will learn about one
of the core AWS services, the Simple Storage Service, known as S3. We will start with the S3 overview and understand the S3 bucket and also the folder. Let's start with what S3 is. S3 stands for Simple Storage Service. As you can see, the three words of Simple Storage Service all start with an S; that's why it is called S3. It allows you to store, retrieve, access, and back up any amount of
anywhere over the Internet. S Three is a perfect
storage solution for storing massive
amounts of data. For example, audio
files, movies, large scale photo storage, big data sets, et ceteras. We can access S Three either
from management console, AWS CLI or AWS SDK. Now let's talk about how
we store data in S three. To store any data
inside S three, we need to create a bucket
inside an AWS account. An S three bucket is
a logical container in which data is
stored inside S three. To store any data in S three, we need to create a bucket. We can also create multiple
buckets to organize our data. Bucket is identified from
its name and it should be globally unique across
all regions and accounts. For example, if I create a
bucket and name it my bucket, then no one else can use
it in any AWS account. In other words, once an S
three bucket is created, no other AWS account can create the S three bucket
with a similar name. S three bucket simply works as a storage in which we can store unlimited data either
directly inside the bucket or create a folder and then put the data
into that folder. Now let's understand
what a folder is. Folders are used for grouping
and organizing files. Unlike a traditional
file system, Amazon S three does not use hierarchy to
organize its files. For the sake of
organizational simplicity, the Amazon S three
console supports the folder concept as a
means of grouping data. You can have multiple folders or files inside a single bucket. Basically, the folder is a
name space inside the bucket. Let's understand
with an example. Inside the AWS account, we have S three service. In this service, we can create buckets as we have
learned so far. Inside this bucket, we
can store our files, images, videos, or data. Moreover, if we want
to organize our data, we can create folders inside the bucket and also
folders inside folders. Then we can store our data
inside these folders as well. It will make more sense when we will do the demo in
the next lesson. Another important concept
in S three is object. When we store any data that
could be files, documents, images, videos, et cetera, they are known as
objects in AWS. Hence, the object is nothing
but your file document, image, video, et cetera. Anything sitting in your hard
drive is known as a file. But when you upload
this file into S three, it will be known as an object. And the bucket in S three is similar to the file directory
in your hard drive. Now we will talk about the characteristics of objects
that you need to know. The maximum size of a
single object that you can upload to S three
is 5 terabytes. Please note that
five terabyte is not the size of total
data that you can upload. Instead it is the
maximum size of a single file that S three
allows you to upload. But that doesn't mean you always upload an object in a single request. A single upload operation can send an object of up to 5 GB; for anything larger than that (and AWS recommends it for objects above 100 MB), you split the object into multiple parts and use something called a multipart upload to upload them. So the maximum object size is 5 terabytes, and a multipart upload is used once a single upload goes beyond the single-request limit. Another characteristic
is object versioning. It is nothing but keeping multiple variants of the same object when we are making some changes to it. In S3, you can version objects to protect them from any unintended actions or even the accidental
deletion of an object. This means that you always retain the previous
versions of an object. The next characteristics
is the storage class. You can create multiple buckets and store data across different classes, or tiers. Classes are different levels of storage with different costs, based on how frequently you use your data. For example, if you don't use some data much, you could choose a cheaper class and leave that data there, and so on. We will talk about this in detail in the coming lessons. Last but not least, you can create permissions to limit who can access
or see your objects. One of the most common policies
is the bucket policy, a resource-based policy which you can use to grant permissions to your bucket and its objects. In this session, we looked at the S3 bucket, folders, objects, and their characteristics.
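To connect this overview to code, here is a minimal boto3 sketch that creates a bucket, uploads an object under a folder-style prefix, and lists the result. The bucket name, file name, and region are illustrative assumptions.

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

# Bucket names must be globally unique; this one is made up.
s3.create_bucket(
    Bucket="my-demo-bucket-123456",
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)

# A "folder" is just a key prefix; uploading with the prefix "demo/"
# makes the object appear inside a demo folder in the console.
s3.upload_file("demo.jpg", "my-demo-bucket-123456", "demo/demo.jpg")

# List what is now in the bucket.
for obj in s3.list_objects_v2(Bucket="my-demo-bucket-123456").get("Contents", []):
    print(obj["Key"], obj["Size"])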
20. S3 Availability, Durability and Data Replication: Hello students. Welcome back. In this video, we are
going to understand three important concepts
which are Availability, durability, and data replication that will be used a lot
in the next session. While understanding
different storage classes, let's get started
with availability. It refers to the system up time. This simply means that for how long the system
is available. In other words, it's the
time for which the system is operational and able to
deliver data upon request. We measure this
availability in percentage. This is also known as SLA, which stands for Service
Level Agreement. It is basically a promise
that service providers have with their customers for the availability of their
services or systems. If they keep their promise and customers get everything
within the promised time, that means the SLA is completed. Let's take an example. Supermarkets say they
are open 24/7. It means it's available
round the clock for you. If they keep their promise and are open throughout
the day, week, month and year, then
we can say that their availability is 100%
and they fulfill the SLA. The next concept is durability. It refers to long
term data protection. That simply means how
well is your data protected from any possible
loss or corruption. In addition to availability, your data should not
be lost or corrupted if you store it in a storage
device for a long time. Durability is also
measured in percentage. Let's understand this
with an example. Suppose you stored 1,000 kilograms of potato in cold
storage for six months. When you've got them back, it was only 800 kgs. It means either 200 kgs of potatoes were rotten or
eaten by rats, right? That means the cold storage
service was not durable. Next time you will make sure to store potatoes in a well
equipped cold storage, there will be no loss. This is called durability. Now we understand what data durability
and availability are. It's also important to know how data gets stored in S three. It is based on a concept
called data replication. Let's understand this. If you remember
from the last lab, we have created
an S three bucket inside the Mumbai region
and uploaded an image. When you upload any
data in your bucket, this data will go to one of the availability
zones in the region. In our case, we created a
bucket in the Mumbai region. Our image was uploaded in one
of the availability zones. But to secure our data, AWS makes additional copies of our data and replicates it
in other availability zones. The data gets stored
and replicated in multiple Availability Zones
within the same region. Now the question arises, Why does AWS do that? Here we need to
remember that AWS promises 11 nines of data durability. It simply means AWS promises a 99.999999999% data durability guarantee. AWS does not ensure 100% data durability, but they say that there is a 99.999999999% chance that your data won't be lost. How does AWS make sure that when we are uploading any data inside S3, it won't suffer any loss? AWS maintains copies of our data in multiple Availability Zones in the same region. In case of any data loss, they recover the data from the other Availability Zones to the lost one. That's how data is always present across the Availability Zones of one region, and that's the reason AWS promises 11 nines of durability. In extreme cases, if any Availability Zone goes down, our data will still be available in other Availability Zones
of that particular region. Okay. This is one of the significant benefits
of using Amazon S three. This is a simple example of
when we upload new data. Now let's understand what happens when we make any
changes to our data. Suppose we have this image
in three Availability Zones, and we changed the color of this image and uploaded it. Then AWS automatically updates the remaining two copies of your data in the other zones. The same goes for deletes as well. When you delete your data from one zone, it will be deleted from all Availability Zones in the region. Students, we understood data availability and durability, and how AWS promises 11 nines of data durability with data replication. That's all for this session.
21. Storage Classes: Hello, students. Welcome back. In this video, we will understand different
storage classes. This session will
be full of facts. I recommend you all make notes. We have added some supplementary
learning resources in the resource section. Please make sure that you make the best use of them.
Let's get started. In one of the previous videos, we stored image in an
Amazon S three bucket. While uploading the object, we noticed that the
storage classes were showing under
the properties. Aws offers various options to store our data in the cloud. These options are known as
storage classes and we can choose any class based on how frequently
we access our data. These S three storage
classes are purpose built to provide the lowest cost storage for different access patterns. It means different types
of storage classes are designed to fulfill different
purposes of storing data. Students Cloud engineer,
it is very important to understand these classes as they can save your
time and money. Let's understand
them one by one. The first one is
S three standard. It is a general
purpose storage class used for frequently
accessed data. It offers high durability,
availability, and performance. S3 Standard is designed for 99.99% availability and 11 nines (99.999999999%) of durability. In other words, there is only a tiny chance that your data might ever be lost. It gives you low latency and high throughput, and it can sustain up to two concurrent facility failures. That means that even if two Availability Zones have problems, your data is still safe and it still delivers low latency and high throughput. S3 Standard is appropriate
for a wide variety of use cases like
cloud applications, dynamic websites,
content distribution, mobile and gaming applications, and big data analytics. The next one is the infrequent
access storage class. As the name suggests, this storage class is used for less frequently
accessed data. A data you only access
once in a while. But you also want this data to be immediately available
whenever you need it, it must be available as
soon as you request for it. If that is the case, you can use the infrequent
access storage class. It is designed for 99.9% availability and
11 9% durability. Availability of infrequent
classes is lower than the S three standard because we don't access our files
very frequently. It is cheaper than the
Amazon S three standard. But when we access the files, we pay a per GB
retrieval charge. This is the tread
off in this class. We pay less for storage
but need to pay a retrieval fee every
time we access our data. This low cost and high
performance combination makes S Three standard
infrequent access ideal for long term storage, backup and disaster recovery. Our data in this class
is also resilient against the events of an entire availability
zone failure. Next, we have one zone
infrequent access. This is very similar to the
infrequent access class, but the only
difference is that in one zone infrequent
access class, the data is stored in a
single availability Joe, whereas in the infrequent
access class it is stored in at least
three availability zones. Basically, your data
is less available because it is present in
just one availability, June. The best part is
that you still have low latency and high
throughput performance. Also, since the data is stored
in one availability June, it is 20% cheaper than
infrequent access. S31 zone infrequent
access is ideal for customers who need
a lower-cost option for infrequently accessed data, but do not need the
availability and resilience of S three standard or S three standard
infrequent access. This is a good option for storing secondary
backup copies of on premises data or the data
which can be easily created. Again, you can also use S three cross region
replication to cost effectively store data that is replicated from
another AWS region. The next is S three
Intelligent Tiring. As the name suggests, this storage class comes with an inbuilt intelligence
which automatically moves data to the most
cost effective access tire based on their access pattern. Here, the access
pattern indicates how frequently one
data shat is accessed. Here the objects will be moving automatically
from one storage class to another storage
class based on the changing access
patterns of these objects. As a result,
frequent access data will be moved to the
frequently accessed storage and infrequently
accessed data moved by S three intelligent tiering
into the correct category. It has the same low
latency and high through put performance
as the S three standard. But this is highly
cost optimized. For example, if data is moved to the infrequent
access class, it will save up to
40% on storage costs. If it is moved to
the glacier class, it will save up to
68% on storage costs. It is also designed
for 11 nines of durability and 99.9% availability over a given year across
multiple availability. Jos, as a user of this service, you only have to pay a small monthly monitoring
and auto tiring charge. This will give you the
most cost optimization without thinking
too much about it. It is also resilient against events that can affect
an entire Availability Zone. Now the next storage class is Amazon S3 Glacier. Glacier is the lowest cost storage class in S3 and supports long-term retention and
digital preservation for data accessed once
or twice per year. It is helpful for customers, particularly those in
highly regulated industries such as financial services, health care, and public sectors. These industries
retain data sets for seven to ten years or longer to meet regulatory
compliance requirements. It is designed for
11 nines of durability of objects across multiple
availability zones. As the name suggests, you expect that your
data will be frozen, which means it will be maintained for a
longer period of time. It means object retrieval
validity is very long. Now, AWS supports three types of glacier for a different access
pattern moving forward. Let us now quickly compare
different Glacier types. Amazon S3 Glacier Instant Retrieval offers a retrieval time in milliseconds, with the same performance as S3 Standard. Amazon S3 Glacier Flexible Retrieval: here the retrieval time can be configured from minutes to hours, with free bulk retrieval. It is best suited for backup and disaster recovery scenarios where large amounts of data need to be retrieved quickly and cost effectively. Amazon S3 Glacier Deep Archive: in Glacier Deep Archive, the retrieval time of data is within 12 hours. It is the lowest cost storage class, designed for long-term retention of data that will be retained for seven to ten years. It's an ideal alternative to magnetic tape libraries. Students, I hope you now understand the storage classes. In case you have any doubts, I recommend you go through this session one more time.
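One small reference before the demo: choosing a storage class programmatically is just an extra argument at upload time. A minimal boto3 sketch, with made-up bucket and key names:

import boto3

s3 = boto3.client("s3")

# Upload a file directly into a cheaper archive class.
with open("report-2020.pdf", "rb") as f:
    s3.put_object(
        Bucket="my-demo-bucket-123456",
        Key="archive/report-2020.pdf",
        Body=f,
        # Other values include STANDARD_IA, ONEZONE_IA,
        # INTELLIGENT_TIERING, and DEEP_ARCHIVE.
        StorageClass="GLACIER",
    )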
22. S3 Bucket Demo: Welcome back. In
this demo session, we will learn how to create an S3 bucket and folder and upload data into it. Let's go to the AWS
Management console. Remember, we first need
to select the AWS region. I will select the Mumbai region. After that, we will go
to the S three console. There are three ways we can find the S three
service in the console. The first method is by
searching for S three in the search box and it will
give us S three in the result. The second method is by finding the S three in the
all services menu. Just expand the menu, locate the S three service, and click to open it. Finally, if you have previously used S
three in your account, it will be listed under the
recently visited services. Let's type S three here
and click to confirm. When we first arrive at
the S three console, we will see a list of buckets
within the AWS account. Since S three uses a
global name space, you don't have to select a region while
using the console. Now if you are thinking, why did we select the region? S three is a global service. Well, selecting the
right region every time when you are working in your account
is a good practice. It does not matter whether the service is region sensitive. Now let's move ahead and
create an S three bucket. To do that, click on Create
Bucket and choose a name. Now one thing to note here is that the bucket name
should be unique. To make this name unique, let's add some random
numbers at the end. Now there are some rules that we need to follow
while naming buckets. The first rule is
that the bucket name should be 3 to 63 characters long. Second, the name can contain only lowercase letters, numbers, dots, and hyphens. It should begin and end with either a letter or a number. Last, your bucket name
should be unique across AWS. Select the region where you want your bucket
to be created, and leave the rest of the default settings and go
ahead and create the bucket. Now the bucket has been created, and it did not take
more than a second. On this page, you can
see the bucket name, the region where you created the bucket and access
to the bucket object. Now let's go to the bucket. Now here you can see the different tabs
including properties, permissions, metrics,
management, and access points. Here is the object
inside this bucket. This is the place where
we upload our files. As we have learned
in the last session, we can either upload objects
in the S three bucket or we can also create folders inside the bucket to
organize our data. Let's go ahead and create one
folder and name it as demo. Now we can upload our object in the
bucket or this folder. Let's click on Upload. Here we have two options. Either we can upload
files or upload folders. We will upload a file here. Click on Add Files. Here we can do some additional
settings to our object. Just click on Properties and
you will see Storage class. It is standard by default. We will cover this in
detail soon in this module. Let's keep this for now
and click on Upload. Now our file is
uploaded successfully. I will exit by
clicking on Close. Now I have one object in my S3 bucket, with the name demo.jpg. If we click on our object demo.jpg, it will take us to
the object dashboard, where we will see the object properties permission
and version. Let's go into the
Properties tab. Here we can find all the
details about our object. Like who is the
owner of the object, in which region the
object is uploaded, and when was the
object last modified, the size of the object, the type, the object key, the Amazon Resource Name (ARN), tags, and here is the object URL. It is the link to the file.
clicking on this, but let's open it in a new tab. As you can see, I get access denied because this
bucket is not public. And I don't have the right
to access this object. If the bucket is not
public and you try to open the file using its public
URL, it will not open. Now let's go back to
the object dashboard. We have another way to open this object just by
clicking on Open. It will open a new tab and
show us our demo picture. Now you must be thinking, why does this work, even if the bucket
is not public? Well, when we open an
object through the open, it opens because this
is a unique URL. As you can see, the URL is much, much longer and not
similar to a public URL. This is a special URL for me
as an owner of this object, I am authorized to access
this object through this URL. Now the last thing I
wanted to show you is how to delete an
object from S three. For that, I will go
back to the bucket, select the object,
then click on delete. Here you will have
to type delete to confirm that you want
to delete the object, then delete object, this will permanently
delete this object. Students, I hope you now have confidence in S
three and its scope. In this session, we
have seen a demo where we have created an
S three bucket holder, uploaded object successfully
and deleted them. Now go ahead and do it yourself.
23. Overview of EBS (Elastic Block Storage): Hello students. Welcome back. In this video we
will learn about one of the storage options
for EC two instances, the elastic block storage, also known as EBS. We will start with the
definition and basics of EBS followed by its use cases and
types. Let's get started. Amazon EBS is like a hard
drive in the cloud that provides persistent
block storage for Amazon EC2 instances. EBS storage is referred to as EBS volumes in
AWS terminology. Therefore, we will call it
EBS volume from now on. You can attach EBS volumes to your EC2 instances and create a file system on top
of these volumes. Run a database or
server on top of them, or use them in any other way, a block storage would be used. But what is block storage? It is a technology
that chops data into blocks and stores them
as separate pieces. Each data block is given
a unique identifier, which allows a storage
system to place the smaller pieces of data wherever it is
most convenient. A block storage volume works
similarly to a hard drive. You can store any files on it or even install a
whole operating system. Ebs volume is a network
attached drive. What does it mean?
Well, it means that the EBS volume is not actually attached to
your EC2 instance physically; it is attached
via a network inside AWS. Since it is attached
via a network link, EBS volume can easily be
attached or detached. For C two instances, this means that you can
move your EBS volume from one running instance
to another and the change will happen
within a matter of seconds. You can think of EBS as a computer hard disk
that can be attached to and detached from
an instance and immediately reattached to a different instance
via network. Now, a question arises here about why this is called
elastic block storage. When you provision
an EBS volume, you have to define
its size and IOPS. Iops simply means input
output operations per second. You have to tell AWS that you want an EBS volume
of, say, 100 GB that can do 1,000 IOPS, that is, 1,000 input/output
operations per second. But just because you
define that EBS is 100 GB, does not mean that you are
stuck with 100 GB drive. You can change the size
or the IOPS capacity of EBS volumes as per your
requirement at any time. This is the elasticity
that EBS provides, and this is why EBS is an elastic block storage and
not just a block storage. Now let's look at
some EBS features. An important point to
note about EBS is that it can only be attached to one
EC2 instance at a time. You cannot have two EC2 instances connected to a single EBS volume. However, AWS has released a feature (Multi-Attach) where a Provisioned IOPS SSD EBS volume can be attached to multiple instances. We do not need to go into
detail in this course. Essentially, EBS
is a network drive that can be moved from
one instance to another. But this process
has a limitation. Ebs volumes are restricted
within one availability zone. This means that although
the EBS volumes can be quickly moved from
one instance to another, they cannot be moved across
the availability zone. For example, an EBS volume
in the Availability Zone ap-south-1a cannot be attached to an EC2 instance in ap-south-1b. It can only be moved between instances in ap-south-1a. However, if you do want to move EBS volumes from one
availability zone to another, you have to take
a snapshot of it. We will learn about
snapshots in a bonus topic. Types of EBS volumes. Let's move forward
and understand the different types
of EBS volumes. Let's take an example here. One of the important
criteria while buying a new laptop
is storage, right? When you talk about
computer storage, it means hard drives. Your computer hard drives
could be of different types, like SSD, HDD, et cetera. Similarly, EBS volumes are also classified into six types: General Purpose SSD (gp2 and gp3), Provisioned IOPS SSD (io1 and io2), Throughput Optimized HDD (st1), and Cold HDD (sc1). One thing to note here is that only the SSDs can be used as the system boot volume out of all these types. The boot volume is the volume that contains the operating system. AWS only allows the faster SSDs as a boot volume. That's all we need to know about EBS and EBS volume types.
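As an optional reference, here is a minimal boto3 sketch of the EBS ideas above: create a volume in one Availability Zone, attach it to an instance in that same zone, and later grow it in place. The instance ID is a made-up placeholder.

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Create a 100 GB gp3 volume in one Availability Zone.
vol = ec2.create_volume(
    AvailabilityZone="ap-south-1a",
    Size=100,
    VolumeType="gp3",
)

# Attach it to a running instance in the same AZ. (In practice you would
# wait until the volume's state is "available" before attaching.)
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)

# The "elastic" part: grow the volume later without detaching it.
ec2.modify_volume(VolumeId=vol["VolumeId"], Size=200)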
24. Overview of EFS (Elastic File System): Hello students. Welcome back. In this video we will
learn about one of the storage options
of EC two instances, the elastic file system, also known as EFS. We will start with
the basics and a small description of EFS
followed by its use cases. So let's get started. What is the file system? Before understanding
what EFS is, you should know what
a file system is. A file system defines
how files are named, stored, read, and used
in any storage device. It is installed on
every storage device, whether it is a
disk drive or a CD. Every time you perform any operation on a
file on your computer, like editing, deleting, or reading your operating systems, file system handles it. Storage is simply a
massive chunk of data. Without a file system, because without a file system, the operating system will never know where to look
for your data. Imagine a storage room full of documents and pages
all scattered around. You cannot tell which
pages are related to which pages belong to
the same folder, right? It is just randomly
scattered data. This is how a storage device will be without a file system. In comparison, a library
is a real life example of a file system where every
document is organized. Like a library has a system. Your operating system
has a file system that tells the computer where
to find your files. Operating systems have an
industry standard file system that is managed as
a part of your OS. However, it is
always possible for you to make your own file
system for applications. Now that you know what
a file system is, let us talk about EFS, Amazon EFS is a cloud based
file storage service. As the name suggests, it is a scalable
storage solution that automatically
grows and shrinks as you add and remove files. You do not need to tell AWS to create a file system of 100 GB or 200 GB; you simply create an EFS and it grows and shrinks as you
add or delete files. This ensures that your EFS always has the storage
as per your needs. It can grow from
a small megabyte to a petabyte scale file
system automatically. There is no minimum
fee for using EFS. You simply have to pay
for the storage you use. If you use only 500 MB of EFS, you only pay for the 500 MB. Similar to EBS, EFS is also
a network attached service. This means that your file
system storage is attached to EC two instances via a
network and not physically, but unlike EBS, which could only be connected to one
EC two instance at a time, we can connect EFS to any
number of EC two instances. One of the most important
features of EFS that it stores your data
across availability zones. You can attach EFS with multiple EC two instances
across availability zones. Let's say you have one
EFS in the Mumbai region. This EFS can be used
by any number of EC two instances from any availability
zones in this region, not just EC two. You can also use EFS
with services like ECS, EKS, and Lambda functions, or even your local computer. You can simultaneously connect hundreds of EC two instances, lambda functions and your
personal computers to EFS. As of now, EFS is only compatible with Linux
based instances. It is important
to note here that Windows instances
cannot use EFS. AWS has a different file system service for Windows instances, called Amazon FSx. Now you must be
wondering why use EFS and not create
your own file system. Yes, creating a file system on any storage
device is possible. But apart from being a
highly escalable service, EFS is also a managed service. This means that it manages the storage
infrastructure for you. And as a user, all you have to do is
use the file system. You do not have to worry about the underlying storage or any maintenance or patching
of your file system. Efs will manage all of this extra work for
you. Anytime there is a requirement for file system storage that can be used across multiple services and local computers, think EFS. That's all for this session.
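As a small companion sketch, this is roughly how an EFS file system and a mount target could be created with boto3; the subnet and security group IDs are made-up placeholders, and mounting it on an instance is then done with the NFS/EFS mount helper, which is outside this sketch.

import boto3

efs = boto3.client("efs", region_name="ap-south-1")

# Create a file system; there is no size to specify because EFS
# grows and shrinks automatically.
fs = efs.create_file_system(
    CreationToken="demo-efs",
    PerformanceMode="generalPurpose",
)

# A mount target makes the file system reachable from a subnet.
# (In practice you would wait until the file system is "available".)
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)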
25. Introduction to Networking: Hello and welcome back to the fifth module
of this training. In this module, we
will cover the VPC, which is the networking
section of cloud computing. Here we will deep dive into topics like virtual private cloud (VPC), subnets, Internet Gateway, route tables, NAT Gateway, security groups, and NACLs. We will also do a demo in AWS to create a VPC and public
and private subnets. Upon successful completion
of this module, you will have a complete
understanding of creating VPC's and making private
and public subnets. You will know how the public
and private subnet works in VPC and how the Internet Gateway connects a VPC
with the Internet. You will also learn how to
use security groups and NACL to control traffic to
and from resources in a VPC. Students, to get the most
out of this training, we recommend that you take notes throughout
this training. I hope you are
already making notes. That's all for
this introduction.
26. What is a VPC?: Hello students. Welcome back. In this video we
are going to learn about virtual private
cloud or VPC. In short, it is the networking
aspect of cloud computing. And it may be a little difficult to relate it
with our day to day life. We will try to make it
as simple as possible. Since this is a very
important section and building block for the rest
of the topics in this module. I recommend that you pay special
attention to this topic. Let's take an example. Suppose there is a
residential building and it has many flats. There are hundreds of people
living in these flats. Let's say I own flat number 810 and you own flat number
B 20 in this building. We can also say that flat number B 20 is
your private section of the building where you keep your stuff and you have
complete control over it. You can also add security
features such as a lock, camera surveillance, et cetera. In the same way, other people have private areas that they own. Now let's connect this
example with AWS and VPC, and replace the building with AWS and the flats with VPCs. In our AWS account, we create and manage all types of resources like EC2, RDS, and so on. We will get into what these specific resources
are in the later modules. These resources in your VPC are just like your
stuff in your flat, the AWS VPC is like
your private area in AWS in which you can place
your resources and services. Just like an apartment has
different level of security, you can also put a level of security over
these resources. In other words, we can say
that there are ways for you to either grant people access to your database or your
cloud resources, or you can also prevent
them from doing so. Now let's understand the
definition of AWS VPC, a Virtual Private Cloud. A virtual Private Cloud is a virtual network dedicated
to your AWS account. It is logically isolated from other virtual networks
in the AWS cloud in which you can launch your AWS resources such as
Amazon EC2 instances. VPC stands for Virtual Private Cloud. It is nothing but a virtual data center on Amazon Web Services. It is private because it is only for you and you have
complete control over it. This is a completely isolated logical network that you can use the way you want
when you create VPC. Basically, you create a
logically isolated network. If I am creating a VPC in my account and you are creating
a VPC in your account, these are two isolated networks. Even two or more VPC's with different IP addresses within the same account will
also be isolated. You can understand
the IP address as the address of your flat. This IP address is called CIDR, which stands for Classless
Inter Domain Routing. It is a set of Internet
Protocol standards that are used to create unique
identifiers for networks. When you create a VPC, you specify a range
of IP addresses for the VPC in the form
of a CIDR block, for example, 10.0.0.0/16. As of now, you understand that a VPC is a network within
your AWS account. You can also create subnetworks within a network which are
also known as subnets. We will cover subnets
in the next lesson.
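CIDR blocks such as 10.0.0.0/16 are easy to explore with Python's standard ipaddress module. The snippet below is only an illustration of what the /16 and /24 notation means; it is not something you need for the AWS console.

```python
import ipaddress

# The VPC CIDR block from the example: a /16 gives 65,536 addresses.
vpc_block = ipaddress.ip_network("10.0.0.0/16")
print(vpc_block.num_addresses)            # 65536

# A /24 carved out of the VPC block (a typical subnet) has 256 addresses.
subnet_block = ipaddress.ip_network("10.0.1.0/24")
print(subnet_block.num_addresses)         # 256
print(subnet_block.subnet_of(vpc_block))  # True: the subnet sits inside the VPC range
```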
27. Subnet, Internet Gateway, Route Table, & NAT Gateway: Hello students. Welcome back. In this video, we are going to understand subnets, the Internet gateway, route tables, and the NAT gateway. Let's get started. As we discussed in the previous video, a VPC is a network in our AWS account. We can also create subnetworks inside our VPC. These subnetworks are also known as subnets. Let's look at our previous example. We own a flat in the apartment, and this flat is our private section. We have a bedroom, a living room, and a kitchen in this flat. Essentially, these are different subsections of our flat used for various purposes. Similarly, in an AWS VPC, we create subnetworks like a public subnet and a private subnet. When we create a subnet, we specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. If the CIDR of our VPC is 10.0.0.0/16, we can use 10.0.0.0/24 or 10.0.1.0/24 as the CIDR of our subnet. We can launch AWS resources into a specific subnet, such as EC2 instances. One important point to remember is that each subnet must reside entirely within one availability zone and cannot span across zones. This we will see while doing the hands-on lab. There are two types of subnet: public subnets and private subnets. We can relate it to our flat. We have a guest room with direct access from outside; it is like a public subnet, while our bedroom is completely private for us. Now let's understand what a public subnet is. It is a subnet that interacts with the Internet and can be accessed through the Internet. It means any resources created within your public subnet, for example web servers, would be accessible through the Internet. Since public subnets interact with the Internet, we will deploy our load balancer or Internet-facing applications in the public subnet. I will cover this in detail in the EC2 module. The next is the private subnet. It is a subnet that cannot be reached from the Internet. It means we can create the AWS resources which are only used inside the VPC for internal purposes. Now let's put it all together. We have a region in our AWS account, and there will be a VPC in that region. The VPC will have what's called a CIDR range, which is a range of IP addresses allowed within our VPC. The VPC can also go across two availability zones, availability zone one and availability zone two. Availability zone one contains a public subnet and a private subnet. Similarly, availability zone two also contains a public subnet and a private subnet. Hence, we have two availability zones, one VPC, and four subnets in this example. Now you must be thinking, what is the difference between a public subnet and a private subnet? Well, there is no difference in these architectures. The process of creating them is exactly the same, but what you configure inside them determines whether a subnet is public or private. You need to make two changes to a subnet to make it public. First, you need to add an Internet gateway. It is an AWS managed component that is attached to your VPC. It acts as a gateway between your VPC and the Internet, basically the outside world. Let's add an Internet gateway here. We now have an Internet gateway attached to our VPC, which will establish the connection from the Internet. Now, if you are thinking that the public subnet now has access to the Internet, well, that's not true. There is another component called a router inside the VPC, which determines where the incoming traffic will be directed. This process is known as routing. The router uses something called route tables to control network traffic. Therefore, each subnet inside a VPC must be associated with a route table. Whenever we create a subnet, it will come with a default route, which is a local route. As we can see, this route table has a destination field and a target field. The destination field contains the destination address that you are trying to reach. The target specifies the route to that destination. That means any request within the VPC IP address range stays local. To make a public subnet, we need to add the Internet as a destination address, which is 0.0.0.0/0, and the target as the Internet gateway. This new route here in the route table has a destination of 0.0.0.0/0. It means that any IP address not known within the route table is sent to this target. In this case, this target is the Internet gateway. This part here is simply the ID of the Internet gateway. It means any traffic other than the VPC network will go to the Internet gateway, and hence this subnet is now a public subnet. Now these two subnets are public subnets and the other two are private subnets. Hope that's clear to you. If you have an instance in a private subnet, it will not be accessible through the Internet. However, you might want to give it access to the Internet to download files or to get operating system updates. In this case, we can use what is known as a NAT gateway, which is managed by AWS, or a NAT instance, which is self-managed. That way, you can access the Internet from within your private subnet. A NAT gateway is a highly available, AWS managed service that enables your instances in a private subnet to connect to the Internet. We create a NAT gateway in our public subnet. We create a route from the private subnets to the NAT gateway, and from the NAT gateway to the Internet gateway. This will allow your private subnets to get Internet connectivity. That's all for this session. I hope you now understand VPC and its elements. In case you have any doubts, I recommend you go through this session one more time, as this is very important for your certification, interviews, and in general also.
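The same steps, adding an Internet gateway, a route table, and a 0.0.0.0/0 route, can also be expressed in code. The following is only a rough sketch with the AWS SDK for Python (boto3); the VPC and subnet IDs are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
vpc_id = "vpc-0123456789abcdef0"                 # hypothetical VPC ID
public_subnet_id = "subnet-0aaa1111bbb22222c"    # hypothetical public subnet ID

# 1. Create an Internet gateway and attach it to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 2. Create a route table and add the 0.0.0.0/0 route pointing at the Internet gateway.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)

# 3. Associate the route table with a subnet; that association is what makes the subnet public.
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet_id)

# For private subnets, a NAT gateway would instead be created in a public subnet
# (ec2.create_nat_gateway) and used as the target of the 0.0.0.0/0 route.
```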
28. Security Groups & NACL: Hello students. Welcome back. In this video, we will learn why we need a security group and a NACL, what they are, and the differences between them. Security groups and NACLs both act as virtual firewalls. They control traffic to and from the resources in a VPC by following some inbound and outbound rules. Inbound or outbound represents the direction of the traffic between networks. The direction is defined with respect to a reference network. Inbound traffic refers to the information coming in to a network. Here, the reference network is your internal network; it could be your computer. Also, as you can see, data is flowing from the Internet to your computer, so it's called inbound traffic. Outbound traffic refers to the information going out of the network. Again, here you see data is flowing out from the reference network, which is your computer. Since the data traffic flows from your network to the Internet, it's called outbound traffic. Now let's come back to security groups and NACLs. We will begin with the security group first. A security group is an AWS firewall solution to filter the incoming and outgoing traffic from an EC2 instance. It acts as a virtual firewall for EC2 instances to control the inbound and outbound traffic based on some defined rules, and it ensures instance-level security. Both inbound and outbound rules work independently. For example, an inbound rule might allow traffic from a specific IP address only to access the instance, whereas an outbound rule might allow all traffic to go out of the instance. As security groups function at the instance level in a VPC, we can apply a security group to one or more instances. Similarly, an instance can also be associated with one or more security groups. Let's look at the NACL now. It stands for Network Access Control List, which controls the traffic to or from a subnet according to some defined rules. This means it adds an additional layer of security at the subnet level. For example, an inbound rule might deny incoming traffic from a range of IP addresses, whereas an outbound rule might allow all traffic to leave the subnet. As NACLs work at the subnet level of a VPC, we can apply a NACL to one or more subnets. However, each subnet must be associated with a single NACL only. Now let's have a look at the differences between the security group and the NACL. A security group operates at the instance level, but a NACL operates at the subnet level. Security groups support allow rules only, but NACLs support both allow and deny rules. A security group is stateful, which means return traffic is automatically allowed regardless of any rules. As we have discussed already, there is no need to add an outbound rule for return traffic; all allowed traffic returns by default, because the security group maintains the state of the allowed traffic. That's why it is called stateful. A NACL is stateless. It means return traffic is not allowed by default; there must be explicit allow rules to allow it. A security group applies to an instance only if someone specifies the security group while launching the instance or associates the security group with the instance later on. A NACL automatically applies to all instances in the subnets it's associated with, and therefore provides an additional layer of defense if the security group rules are too permissive. So, since security groups and NACLs operate at different layers in the VPC, which is the best way to secure your network, security groups or NACLs? Defense in depth is about layers of security, and security groups and NACLs are just two of those layers. Therefore, a better solution is to implement both to lock down your network. That's all for this session.
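As a hedged illustration of the allow-versus-deny difference, here is a boto3 sketch that adds security group rules (allow only, stateful) and one NACL rule (explicit deny, stateless). The IDs and IP ranges are made-up placeholders, not values from this course.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
vpc_id = "vpc-0123456789abcdef0"   # hypothetical VPC ID

# Security group: instance level, allow rules only, stateful (return traffic is allowed automatically).
sg_id = ec2.create_security_group(
    GroupName="demo-web-sg",
    Description="Allow SSH from one IP and HTTP from anywhere",
    VpcId=vpc_id,
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.10/32"}]},   # SSH only from one example IP
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},         # HTTP from anywhere
    ],
)

# NACL: subnet level, stateless, and it supports explicit deny rules.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # hypothetical NACL ID
    RuleNumber=100,
    Protocol="6",                          # TCP
    RuleAction="deny",
    Egress=False,                          # inbound rule
    CidrBlock="198.51.100.0/24",           # example range to block
    PortRange={"From": 22, "To": 22},
)
```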
29. Create VPC, public and Private Subnet: Hello students. Welcome back. In this demo, we will create a VPC in our AWS account. We will also look at the parameters and options that AWS offers to create a VPC. We will also create an Internet gateway and configure public and private subnets. By the end of this lab, you will have a complete understanding of how to create VPCs and make private and public subnets. Let's start by going to the AWS console. First, we need to select the region. I will choose Mumbai, as I want to create a VPC in the Asia Pacific (Mumbai) region. Now, to go to the VPC dashboard, we will search for VPC in the search bar and click VPC to open the dashboard. As you can see here on the dashboard, we already have a VPC created in our account. This is the default VPC that AWS creates. This VPC has a route table and an Internet gateway, and it exists in all the regions. We can use the default VPC for demo purposes, but it is always recommended to create your own VPCs for development, test, and production environments for the most suitable configuration. If you go to any region, you will find one VPC and some route tables and an Internet gateway associated with it. Let us now create our own VPC. Click on VPCs and then click on Create VPC. We will be giving it the name packtup. The next option for us is to select if we want IPv4 or IPv6 addresses in our VPC. We will opt for IPv4 and give 10.0.0.0/16 as the private IP range for this VPC. Then we have the tenancy, which specifies where your infrastructure is provisioned. If we select default tenancy, our EC2 instances will be on shared hardware with other AWS users. If you want a VPC where all EC2 instances are on dedicated hardware, you can choose to create a dedicated-tenancy VPC. The next option is tags. As you can see, AWS has already created a Name tag for us. We won't need any more tags. Let's click on Create VPC. We now have our own VPC created in this region. Let's go back to the VPC page. You can see that now we have two VPCs in this region. We will now create an Internet gateway and attach it to our VPC. For that, click on Internet Gateways, and as you can see, we already have an Internet gateway for the default VPC. Let's click Create Internet Gateway and give it the name packtup-igw. Now click on Create Internet Gateway. Next, we have to attach this Internet gateway to our VPC. We can do that directly by clicking Attach to VPC in the top green bar here, or by clicking the Actions button here. Click on Attach to VPC and select your VPC. If you are using the AWS CLI, you can click on this section and attach it to your VPC from the command line. As we are not using the AWS CLI, we can click here and attach this Internet gateway directly. We now have a VPC with an Internet gateway, but we still cannot create infrastructure in the VPC. For that, we need to have subnets. Let's create subnets. Click on Subnets here. Based on which region you are in, you will see three or four subnets here. These are the default public subnets attached to the default VPC. Next, we will create subnets. For that, click on Create Subnet. First, we need to choose the VPC. We are choosing the packtup VPC that we created. Now add a name to our subnet: public subnet 1. We will select the ap-south-1a availability zone and give an IP range of 10.0.0.0/24 to the subnet. Let's skip tags for now and click on Create Subnet. We can now see our subnet in the list. We will create three more subnets. Select the packtup VPC, give the name public subnet 2, select the ap-south-1b availability zone, and give an IP range of 10.0.1.0/24. Click on Create Subnet again and repeat the process. This time we will give the name private subnet 1, availability zone ap-south-1a, and the IP range of 10.0.2.0/24. I am repeating this one final time. Give the name private subnet 2, availability zone ap-south-1b, and the IP range of 10.0.3.0/24. Please note that you cannot have overlapping IP ranges in subnets. Now we have four subnets across two availability zones in the Mumbai region. But what is the difference between our private and public subnets? We created them using the same method; just their names are different. As of now, all four of the subnets we created are private. The difference between private and public subnets is that the route tables of the public subnets have an Internet gateway entry. If we see the details of the route tables in our subnets, we only have one entry: 10.0.0.0/16, local. This simply means that any requests going to addresses within this range are to be routed within the VPC. Let us go ahead and create a public route table for our subnets. For that, click on Route Tables, and as you can see, we already have two route tables in our account. One is attached to the default VPC of the region, and the other is attached to the new VPC we created. If you look at the default route table of the default VPC, in the Routes section you can see that there is an entry to the Internet gateway. This means that all the default subnets of the default VPC have this route, which makes them public subnets. If you see the route table of our VPC, we only have the local route, which makes our subnets private. Now click on Create Route Table and give it the name packtup-rt-public. Next, we select our VPC and click on Create Route Table. We now have another private route table. Now we need to add the Internet gateway path to the table. For that, click on Edit Routes and click on Add Route. Select 0.0.0.0/0 as the destination, choose the Internet gateway as the target, and select the packtup-igw we created. Then click on Save Changes. We have a new public route table in our VPC. Let's now associate our route table with the public subnets. We can do that by clicking Actions, Edit Subnet Associations, selecting public subnet 1 and 2, and then clicking on Save Associations. Let us go to the Subnets page and have a look at our subnets. You will see that the public subnets are associated with the new route table, which has the Internet gateway entry. You can also do that by clicking on this Edit Route Table Association button to change route tables for a subnet. Now we have created a VPC with two public and two private subnets that we will be using throughout this session. As I said, creating and keeping a VPC, Internet gateway, and route tables is completely free, so you do not need to delete them, students. I hope you understood all that we did in this demo. That's all for this session.
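For readers who prefer code over the console, here is a rough boto3 equivalent of the layout built in this demo: one VPC and four non-overlapping subnets across two availability zones. It is only a sketch; the name tags and the exact availability zone letters are assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# VPC with the same CIDR block used in the demo.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "packtup"}])

# Four non-overlapping subnets across two availability zones.
subnet_layout = [
    ("public-subnet-1",  "10.0.0.0/24", "ap-south-1a"),
    ("public-subnet-2",  "10.0.1.0/24", "ap-south-1b"),
    ("private-subnet-1", "10.0.2.0/24", "ap-south-1a"),
    ("private-subnet-2", "10.0.3.0/24", "ap-south-1b"),
]
for name, cidr, az in subnet_layout:
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)["Subnet"]
    ec2.create_tags(Resources=[subnet["SubnetId"]], Tags=[{"Key": "Name", "Value": name}])
```

All four subnets start out private; attaching an Internet gateway and a route table with a 0.0.0.0/0 route, as shown after the previous lesson, is what turns the two public ones public.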
30. DNS and Amazon Route 53: Hello, students. Welcome back. In this video, we will learn about DNS and the DNS service of AWS, Amazon Route 53. We will start with understanding DNS and Route 53 and then have a look at its features and use cases. Let's get started. DNS stands for Domain Name System. It is the phone book of the Internet. It is a system that lets you connect to websites by matching human-readable domain names, like Google.com or Facebook.com, to IP addresses. Basically, it translates the domain name to an IP address. Now let's understand this with an example. When you make a phone call to your friend, you look for his phone number in the contacts app, right? Because it's hard to remember a phone number, we save phone numbers in the contacts app. Similar to phone numbers, websites and applications have IP addresses. Have you ever used the IP address of Google.com or Youtube.com to connect to them? No, because again, it is hard to remember. This is simplified with DNS. As I said, DNS is similar to our phone book, which stores the IPs of the websites. The DNS servers simply translate domain names to IP addresses. When you access Youtube.com, the request first goes to a DNS server and then to the IP address of Youtube.com. Now we understand DNS and how it works. AWS also offers a DNS service called Route 53. Route 53 does a similar task. Now let's understand Route 53 in detail. Route 53 is a highly available and scalable DNS service from AWS. It is also the only service on AWS that claims 100% availability. It means that the Route 53 service never goes down. Let us say you have a web application in your AWS account running on an EC2 instance. If you want to host this website on the yourname.com domain, you can create a DNS record in Route 53 to point the domain to the IP address of the EC2 instance. Now, there are a few more features as well. The first is domain registration. Route 53 is also a domain registrar. This means that it also allows you to buy new domain names from the Route 53 console itself. If you purchased your domain name on another website like GoDaddy, Route 53 also has a feature to transfer domain names from other domain registrars to Route 53. The next is hosted zones. In Route 53 we can create hosted zones. A hosted zone is a container for records, which include information about how to route traffic for a domain. The next feature is health checks. Route 53 also has the ability to perform health checks of your resources. It uses these health checks to determine the state of your resources and send traffic only to healthy resources. The last is traffic flow. There is also a feature called traffic flow, which gives you another level of automation for how you send your traffic to your resources. That's all you need to know about Amazon Route 53.
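To make the "point the domain to the IP address" idea concrete, here is a hedged boto3 sketch that creates an A record in a hosted zone. The hosted zone ID, domain name, and IP address are hypothetical placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Point yourname.com at the (example) public IP of an EC2 instance.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",   # hypothetical hosted zone ID
    ChangeBatch={
        "Comment": "Map the domain to a web server",
        "Changes": [{
            "Action": "UPSERT",          # create the record, or update it if it already exists
            "ResourceRecordSet": {
                "Name": "yourname.com",
                "Type": "A",             # A record: domain name -> IPv4 address
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.25"}],  # example IP address
            },
        }],
    },
)
```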
31. AWS VPN & AWS Direct Connect: Hello, students. Welcome back. In this session, we will look into different connectivity options from AWS to on-premises data centers. Let's get started. Let's understand with an example. Assume your organization is planning to migrate a business-critical application to the AWS cloud that has compliance data. Your security team insists that you can move the application to the cloud, but the data must remain on premises and meet all compliance requirements. Now you have decided on the Mumbai region and created a VPC inside your AWS account. In order to deploy your application, you created an EC2 instance. Your application also uses other AWS services like Amazon S3, CloudWatch, et cetera. This is your on-premises data center, where you have a database. Finally, you configured your application running on the EC2 instance to use the on-premises database. Your security team realized that this database connects with the application via the public Internet. They complain that it's against security guidelines: this connection must be private and secure. There are a couple of options you can use in AWS to make this connection private. One of them is Site-to-Site VPN. This is called site-to-site VPN because we connect one site, AWS, to another site, the data center. VPN stands for Virtual Private Network. A VPN creates a tunnel that you use to access the Internet and bypass your Internet service provider, that is, your ISP. This process encrypts all the data that you send and uses different security measures to ensure that all data is secure. What we do is create a VPN between your AWS account VPC and your on-premises network. There are a couple of components that we need to configure. On the VPC side, we have a virtual private gateway deployed in AWS. Then we have the customer gateway, which is deployed in the corporate data center. Now, that's a device in the corporate data center that you have configured a certain way, but the actual customer gateway resource is created within AWS. It's essentially a configuration element that you create that points your virtual private gateway to whatever your VPN device is in your corporate data center. Once you have got those, you can establish a virtual private network connection. This is an encrypted connection that goes over the Internet. You get the protection of encryption, but you are still using the Internet. Now again, you have a security audit, and they complain that this connection is encrypted, but it is still going through the Internet. As per security guidelines, this connection must be private. Now, there is another service: AWS Direct Connect. This is actually a private connection between AWS and your data center or office. AWS Direct Connect is a high-speed, low-latency connection that allows you to access public and private AWS cloud services from your local, that is, on-premises, infrastructure. The connection is enabled via dedicated lines and bypasses the public Internet to help reduce network unpredictability and congestion. You connect one end of the cable to your router and the other end to an AWS Direct Connect router. Using this connection, you can create virtual interfaces that are directly connected to Amazon VPCs. An AWS Direct Connect location provides access to AWS in the region with which it is associated. This uses a private connection between your data center and the AWS Direct Connect location, and from there, a private connection into AWS. Now, it is more expensive than having a VPN. The key takeaway is that Direct Connect is a private connection, which means that you get a consistent network experience, whereas a VPN is public, and even though it's encrypted, it's public. That's all for this session.
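The three Site-to-Site VPN components described above (customer gateway, virtual private gateway, and the VPN connection) can be created via the EC2 API. This is only a sketch under assumed values; the VPC ID, public IP, and ASN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
vpc_id = "vpc-0123456789abcdef0"   # hypothetical VPC ID

# Customer gateway: the AWS-side record describing the VPN device in your data center.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="198.51.100.2",       # example public IP of the on-premises VPN device
    BgpAsn=65000,
)["CustomerGateway"]

# Virtual private gateway: the VPN endpoint on the AWS/VPC side.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc_id)

# The Site-to-Site VPN connection ties the two gateways together (encrypted, but over the Internet).
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)
```

Direct Connect, by contrast, involves a physical circuit being provisioned, so it is not something you can fully stand up with a few API calls like this.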
32. Compute Services in AWS: Hello and welcome back. Compute is another one of the core offerings in cloud computing, and there are various cost-effective and flexible compute services available in AWS for different requirements. In this lesson, we will understand what exactly compute is and what the different compute services available in AWS are. Let's take an example. Imagine you have got your dream job as an architect at a space technology company, where your job is to work with the data science team and process the data to make some sense of it. The research team of your company has collected huge amounts of image data that might lead to the discovery of water on Mars. Now it is the job of your team to process that image data and come up with the findings. But when you started processing the data, you realized that you don't have any free servers to do the work. What does it mean to not have free servers? Well, we need powerful computers to process the huge data, and in this case, the processing power that you need to process the data is not available. This processing power is known as compute. For any program or application that you want to run, you need some memory to open the program and some CPU to process the commands that you give inside the program. Hence, the combination of memory and CPU is what we refer to as compute here. It could also include other resources like networking, storage, et cetera, depending upon the type of application you are running. Cloud computing has made it very easy to get compute power whenever you want, configure it based on your requirements, and pay only for what you use. AWS has various compute services for different requirements, but it is important to understand which compute resources best suit your workload and what the different compute options available in AWS are. First of all, let's understand what AWS compute is. AWS offers an on-demand computing service for running cloud-based applications. These are applications that can be deployed on a remote server and accessed over the Internet. AWS provides computing resources like instances and containers. Instances are nothing but virtual machines. AWS also provides serverless computing to run applications where you do not require infrastructure setup or configuration when you go for a serverless compute option. AWS compute resources are available on demand and can be created with just a click of a mouse. You will only pay for the resources you use and only for as long as you are using them. AWS compute resources are broadly classified into three categories: instances, which are virtual machines; containers; and serverless computing. In this module, we will discuss them all. It will help you to decide which is the best compute service for your requirement. If you want to learn more about AWS compute, check the link provided in the resources section. The document has different examples and scenarios of where you might use different compute services. Now we are aware of what compute is and the services that are available in AWS. With that, let's end this video.
33. Virtual Machine: Hello and welcome back. A couple of times so far in this training, we have used the term virtual machines. I hope you already have an idea about it, but before we dive deeper into things, let's understand virtual machines in detail. What is a virtual machine? A virtual machine is a virtual environment that works like a computer within a computer. Sounds complex, right? Let's break it down. Basically, a virtual machine is commonly known as a VM. It is no different than any other physical system like a laptop or smartphone. It has a CPU, memory, disks to store your files, and a network connection to connect to the Internet. Your laptop and smartphone feel real because they are physical. VMs can be thought of as virtual computers within a physical computer. Now you must be thinking about how it is possible to have a virtual computer within a physical computer. Let's say you have a laptop. It has the Windows operating system on it. Now you feel like learning Linux. For that, you need a Linux operating system. What would you do? Would you buy a new laptop and install the Linux operating system? No. A simpler and more efficient way is to use the virtualization technique to virtualize your existing laptop and install the Linux operating system on it. Basically, you can use both operating systems on the same laptop. Now, how do we use virtualization techniques to create a virtual machine? This animation here is your physical computer. This physical computer can be your personal computer, a remote server, or a server in a cloud provider's data center. We use software called a hypervisor, which is also known as a virtual machine manager. In your case, you can use Oracle VM VirtualBox. The hypervisor will create a software-based, or virtual, version of a computer with the desired amount of CPU, memory, and storage. The virtual machine always borrows CPU, memory, and storage from the physical host computer. That means a part of your computer's CPU, RAM, and storage will now work as a standalone computer. To understand it better, assume you have a physical computer with 8 GB RAM, two CPUs, and a 40 GB hard disk. You can use a virtualization tool to create two or more virtual machines and distribute CPU, memory, and hard disk between these virtual machines. Here we have two virtual machines, VM one and VM two. They have 4 GB RAM, one CPU, and a 20 GB hard disk each, and more importantly, two different operating systems. As you can see, a virtual machine acts like an actual computer and it has its own operating system. Despite using the same hardware, the virtual machine is completely isolated from the rest of the system. The software inside a VM cannot interact with the host computer or other virtual machines on the same physical computer. The virtual machines run like individual computers with separate operating systems. They remain completely independent from other virtual machines and the physical host machine. As VMs are independent of each other, they are also highly portable. That means you can instantly move a VM on one hypervisor to another hypervisor on a completely different machine. Virtual machines are flexible and portable, and they offer many benefits. Cost saving: you can run multiple virtual machines on a single physical host. This can drastically reduce your physical infrastructure cost, as you will have to buy fewer physical computers. Agility and speed: spinning up a virtual machine is a relatively easy and quick task. It is much simpler than provisioning an entirely new physical machine for your developers. That's how virtualization makes the process of running dev-test scenarios a lot quicker. Next is lower downtime: virtual machines are so portable and easy to move from one machine to another that they are a great solution for backup if the host machine goes down unexpectedly. Scalability: virtual machines allow you to easily scale your applications, as creating a virtual machine is easier than setting up a physical machine. You can simply add virtual servers and distribute the workload across multiple VMs. This is how you can increase the availability and performance of your apps. These are all the benefits of virtual machines.
34. EC2 Elastic Compute Cloud: Hello and welcome back. In this lesson, we are going to start with the most common compute service that AWS has to offer. It is the Elastic Compute Cloud, commonly known as EC2. Let's get started. EC2 stands for Elastic Compute Cloud. It provides scalable compute capacity in the Amazon cloud. But what is compute capacity? Well, we have already learned that it is nothing but virtual machines, and EC2 is a compute service in AWS. EC2 is scalable. That means you can increase or decrease the size of a virtual machine based on your requirements. If that doesn't work, you can create additional virtual machines as well. By using Amazon EC2, you can completely avoid setting up the hardware to run your application and can develop and deploy applications faster. Another benefit of using EC2 is that you do not need to invest in hardware. You can have virtual machines of the desired capacity that can be accessed using your normal computers. EC2 is IaaS, Infrastructure as a Service, because you rent a virtual machine from AWS, and AWS will take care of purchasing and maintaining the hardware. You can get the desired capacity without having to worry about the hardware. It is designed to make web-scale cloud computing easier for developers. To understand this, let's go back to the times when cloud computing wasn't there. To host an application, you had to set up a data center or rent a physical server. That's entirely possible for big organizations, but think about mid-sized companies, startups, or even individual developers like you and me. They cannot afford to get the data centers and hardware. But after cloud computing, it became significantly easier for developers to go and get virtual machines, all thanks to cloud computing. You can use Amazon EC2 to launch as many or as few virtual machines as you need. There is no limitation. If you want one virtual machine, you can get one. If you need 1,000 virtual machines, you can get 1,000 virtual machines. Amazon EC2 makes it easier to scale the virtual machines up or down to handle unexpected loads. You don't need to forecast traffic; you can scale virtual machines up and down based on the traffic and the popularity of your applications. Amazon EC2 has an auto scaling feature that we will understand later. We can configure auto scaling to increase or decrease the number of virtual machines based on the load on your application. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. As we know by now, EC2 virtual machines are located in Amazon data centers. Since security is very important for your applications, AWS gives you complete control of these servers so that you can secure your application as you wish. There are other ways as well to secure those servers, which we will cover in later videos. It supports macOS. Recently, Amazon Web Services has also started supporting Mac operating system based virtual machines. If you need macOS-based virtual machines, you can also get them on AWS. Students, these are some of the major benefits of Amazon EC2.
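One way to see the "scalable" part in practice: the instance type of an existing, stopped instance can be changed with a single API call. This is only a hedged boto3 sketch; the instance ID and the target type are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
instance_id = "i-0123456789abcdef0"   # hypothetical instance ID

# The instance must be stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Scale up to a bigger instance type (or down, the other way around).
ec2.modify_instance_attribute(InstanceId=instance_id, InstanceType={"Value": "t3.large"})

ec2.start_instances(InstanceIds=[instance_id])
```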
35. Component of Amazon EC2: Welcome back students. In the last video, we understood Elastic Compute Cloud, EC2, and its benefits. We will continue with the same discussion and understand the elements of Amazon EC2. But before we start, let's recap the definition of EC2. It is a virtual machine in Amazon Web Services, but AWS does not call them virtual machines. Instead, it uses some specific terms for EC2. One of these terms is the instance, which is nothing but an Amazon EC2 virtual machine. When we launch a virtual machine in Amazon, we say that we launched an EC2 instance. Going forward, instead of calling it an EC2 virtual machine, we will call it an EC2 instance. Another important element in Amazon EC2 is the AMI. It stands for Amazon Machine Image, which is basically a template that contains an operating system and additional software needed to launch EC2 instances. Let's make it simple. An EC2 instance is a virtual machine, right? And a virtual machine is similar to your laptop. Let's assume you bought a new laptop. You need to install the operating system and some general software like drivers, browsers, media players, et cetera, right? Because without the operating system and software, your laptop is just a box. The same is the case with an EC2 instance as well. It is also like your computer, and that's why it requires an operating system and additional software in order to use it in AWS. You can do so by using an AMI. Once you launch an EC2 instance, you will want to access it so that you can install your application or do some configuration. To access your EC2 instance, you need some credentials, and those credentials are nothing but the key pair that we are discussing right now. A key pair consists of a public key and a private key. It is a set of security credentials that you use to prove your identity when connecting to an Amazon EC2 instance. When you launch an instance, AWS asks for a key pair. You can choose an existing key pair or create a new one. When we create a key pair, AWS lets you download the private key, and it keeps the public key. When you launch an EC2 instance, by default, Amazon stores the public key on your instance. Basically, you will use your private key to log into your instance. We will see this when we do the demo of EC2. An important thing to note here is that since you use a private key to log into your EC2 instances, anyone who has this private key can also connect to your instances. Make sure that you store your private key in a secure place. Another essential element of EC2 is the security group. This is another service that you can use to further secure your instances. A security group lets you control who can access your EC2 instances. It acts as a virtual firewall for your EC2 instances, which controls the inbound and outbound traffic. Inbound means you can configure who can connect to your EC2 instance from outside, and outbound means what your EC2 instance can connect with. Another element of EC2 is tags. A tag is a label on your AWS resources. It can be assigned by you or AWS, and each tag consists of a key and a value. When you create an EC2 instance, you can tag your virtual machines with different names. It could be the environment, cost center, owner, or application name. These tags will help you identify your servers when you have thousands of servers running in your account. Each tag key must be unique for each resource, and each tag key can have only one value. You can use tags to organize the resources in your account and to track your AWS costs by using cost allocation tags. Let's understand cost allocation tags. Some organizations use cost centers. When you are running multiple applications in one AWS account, you can tag EC2 instances with a cost center key and value, which will help you track the cost per cost center. So far, these are the different terms and features you need to know and understand regarding Amazon EC2. Let's conclude this lesson here.
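The elements described above (AMI, instance type, key pair, security group, and tags) all appear as parameters of a single API call when an instance is launched programmatically. A rough boto3 sketch; every ID and name here is a placeholder, not a value from the course.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # AMI: operating system plus preinstalled software
    InstanceType="t2.micro",                    # instance type (the size of the virtual machine)
    KeyName="packtup",                          # key pair; you log in with the matching private key
    SecurityGroupIds=["sg-0123456789abcdef0"],  # virtual firewall for the instance
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-server"},
                 {"Key": "CostCenter", "Value": "training"}],   # cost-allocation style tag
    }],
)
print(response["Instances"][0]["InstanceId"])
```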
36. EC2 Instance Naming Convention: Hello students. Welcome back. In the last session, we understood the different elements of an EC2 instance, like the Amazon Machine Image or AMI, the security group, the key pair, and tags. There is one more important feature of the EC2 instance left that we need to understand. It is the type of EC2 instance. But before that, we will understand the EC2 instance naming convention. It will teach you how to read the names of EC2 instances. In this video, you will first learn the EC2 instance naming convention. You will not find a clear explanation about it over the Internet, not even in the AWS documentation. I request you to pay special attention to this video, as it will also help you a lot during your journey with the AWS cloud. Let's get started. An AWS instance name looks as shown on the screen: m6g.2xlarge. As you can see, the name contains characters and numbers. The complete name does not really have a meaning on its own. We will break it down and try to make sense of it. One thing is clear: if we put everything together, this is an instance type. If you look at an instance type, it will be something like this. Now let's break it down. The name of this instance type is made up of four components: instance family, instance generation, additional capability, and instance size. You can pause the video here or take a picture or a screenshot of it for future reference. Let's go over them one by one. The first letter of the instance type represents the EC2 instance family. Here, the M represents the instance family. This AWS EC2 instance belongs to the general purpose computing instance family type. AWS has four main categories of instance types. These are general purpose, compute optimized, memory optimized, and storage optimized. We are going to cover them in detail in the next lesson. There are also subclassifications within these four instance families, but that is not essential at this level, so we will skip them. Next, the number six represents the generation of the AWS EC2 instance. The latest or current generation of AWS EC2 instances is always better. It is also cheaper than the previous generation. That's why AWS always recommends using the newest generation instance type. You can relate it to a car. The company always launches new car models with some new additional features. It recommends the latest models to its customers, as they include new features for a better price. Then we have the letter after the generation, the g in the name, which represents the additional capability of the EC2 instance. This table has all these capabilities. AMD and Graviton2 are two of the processors inside these EC2 instances. If the processor is also in your requirements, you can select a processor-based EC2 instance as well. AWS also offers local NVMe SSD storage, which is directly attached to the EC2 instance, and enhanced networking for better network performance. Hence, high networking and extra storage capacity are two more capabilities you will see in instance type names. The final and most important component of this instance type is the instance size. The 2xlarge of m6g.2xlarge represents the instance size. Let's understand what the 2xlarge means. The 2xlarge denotes the shirt-size representation of the AWS EC2 instance. It represents the amount of CPU, memory, storage, and network performance of an EC2 instance. An instance that is 2xlarge will have twice the CPU, memory, and storage resources compared to the base size, that is, xlarge. How does it work? Let's understand this. With instance sizing, each instance size has twice the number of vCPUs and twice the memory of the previous size. For example, two xlarge instances equal one 2xlarge instance, and similarly, two 2xlarge instances equal one 4xlarge instance. Now let's look at it from the capacity perspective. If you go to the AWS documentation and navigate to the M6 instance types, you will see this image. xlarge has twice the number of vCPUs and twice the memory compared to large, and the on-demand hourly cost is also twice as much. Now you must be thinking, should I go for one 2xlarge instance or two xlarge instances? For that, let's look at the price and capacity of each instance. As you can see, there is no difference between them. Whether you select two xlarge instances or one 2xlarge instance, you will end up getting the same capacity for an equal cost. But it is always better to use a smaller instance size instead of a bigger one unless you have specific needs. There are far too many instance classifications in AWS EC2. We will cover the essential ones in the next video. Although this is not well documented anywhere, AWS EC2 has an excellent naming convention. I hope this video helped you understand it. In case you have any confusion, I will recommend watching it again. That's all for this lesson.
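The naming convention can be captured in a few lines of Python. This is just an illustration of the four components described above, not an official AWS parser, and it will not handle every exotic instance type name.

```python
import re

def parse_instance_type(name: str) -> dict:
    """Split an EC2 instance type such as 'm6g.2xlarge' into its four parts."""
    match = re.fullmatch(r"([a-z]+)(\d+)([a-z-]*)\.([a-z0-9]+)", name)
    if not match:
        raise ValueError(f"Unrecognised instance type: {name}")
    family, generation, capability, size = match.groups()
    return {
        "family": family,                   # e.g. 'm' = general purpose
        "generation": int(generation),      # e.g. 6
        "capability": capability or None,   # e.g. 'g' = Graviton, 'a' = AMD, 'd' = local NVMe SSD
        "size": size,                       # e.g. '2xlarge'
    }

print(parse_instance_type("m6g.2xlarge"))
# {'family': 'm', 'generation': 6, 'capability': 'g', 'size': '2xlarge'}
```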
37. Instance Types: Hello, students. Welcome back. In the last lesson, we learned the instance type naming convention. You learned about the EC2 instance size, family, generation, and additional capabilities. All these will help you while deciding the EC2 instance type for your application. In this video, we will dig a little deeper and understand the different EC2 instance types. Once you successfully complete this lesson, you will be able to decide which instance type best suits your requirements. Let's start with a quick recap of what the EC2 instance type is. It is a combination of CPU, memory, storage, and networking capacity. Having different combinations helps while choosing the appropriate mix for your applications. When you launch an EC2 instance, the instance type is one of the mandatory elements you need to decide. But why is it so important? Let's understand with an example. When you buy a laptop, you first define your requirements, and based on these requirements, you select a configuration for your laptop, like memory, storage, graphics, et cetera. The same goes for the EC2 instance as well. When you launch an EC2 instance in Amazon, you need to provide CPU, memory, storage, and networking configurations. To make it easy for you, AWS has instance types with different combinations of CPU, memory, storage, and networking. Each instance type has multiple size options for different workloads. We have already covered instance size in the last lesson. You should launch the instance types that are the best fit for your applications. Now let's understand what we mean by best fit here. Each application is designed differently, and each requires a specific hardware configuration. Some need more memory, others need more CPU, and so on. Therefore, you should use the appropriate compute capacity for your application. It is easy to select the best configuration for your applications in AWS, as Amazon instance types are grouped into families to meet different use cases. We are going to look at four basic and essential EC2 instance types. They include the general purpose instance, the compute optimized instance, the memory optimized instance, and the storage optimized instance. Let's start with the general purpose instance type. It provides a balance of compute, memory, and networking resources and can be used for a variety of workloads. As its name suggests, this is the most basic and all-rounder AWS EC2 instance type. We use general purpose instances where we are not sure whether the application needs more CPU or memory, because this type comes with a balance of memory, compute, and networking. If we find out later that the application is more memory intensive or compute intensive, then we can go for a specific type of instance. Now let's have a look at the best use cases of general purpose instances. They are suitable for web servers, caching fleets, and distributed data store applications, as these applications require a good balance of memory, CPU, and networking. They are also suitable for development and demo environment applications. The next instance type is the compute optimized instance. These are ideal for compute-bound applications that benefit from high performance processors. This instance type is optimized for CPU in comparison to the other compute resources like memory, storage, and networking. Therefore, the instance types which have more powerful CPUs come inside this category. Let's talk about their use cases. This is best for applications that need high compute power. The compute optimized instances are well suited for batch processing, media transcoding, high performance web servers, high performance computing, scientific modeling, dedicated gaming servers, ad server engines, machine learning inference, and other compute-intensive applications. Next is memory optimized instances. They are designed to deliver fast performance for workloads that require huge memory to process large datasets. These instances are optimized for memory. When we go and look at instance types in the memory optimized category and family, they have more memory than storage, CPU, and networking. In other words, memory optimized instance types are optimized for memory over other features. Let's look at their use cases. Memory optimized instances are used in applications such as open source databases, in-memory caches, and real-time big data analytics. The next is storage optimized instances. Storage optimized instances are designed for workloads that require high sequential read and write access to massive datasets on local storage. They are optimized to deliver tens of thousands of low-latency, random input/output operations per second to applications. This instance type is optimized on the storage and network side of the EC2 instance. It supports more read and write actions to the disk. When our application does a lot of reading and writing to the disk, database, or network, we should use a storage optimized instance type. This instance type maximizes the number of transactions processed per second for I/O-intensive and business-critical workloads. Transactions per second means the number of transactions the system can process in a second. If the EC2 instance is able to complete ten transactions to and from storage without any delay, you can say that the instance can process a maximum of ten transactions per second. Let's look at its use cases. The application has medium-sized datasets that need high compute performance and high network throughput. These include relational databases, such as MySQL, MariaDB, and PostgreSQL, and NoSQL databases, such as Cassandra. This instance type is also ideal for workloads that require very fast access to medium-sized data sets on local storage, such as search engines and data analytics workloads. It's a lot of information, so let me summarize it; it will help you make the best decision on the instance type. The most important thing is to understand these four main classifications. Once you understand these, they will serve your most common use cases. Also, while selecting any instance type for your application, ask yourself whether your application does memory-heavy work or compute-heavy work. Does it need heavy storage or network I/O, or is it just generic work it will do? Once you have an understanding of the main classification, the subclassification comes naturally; like in a memory optimized instance, you may need higher RAM or higher optimized memory. At last, once you know the instance type, you should always choose the smaller instance size to run experiments, and based on the experiments and tests, decide on the actual instance size. As a rule of thumb, always choose a smaller instance size, which saves money. That's all for this lesson.
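If you want to compare the four categories yourself, the EC2 API can report the vCPU and memory of any instance type. This is a small, hedged boto3 sketch; the four types chosen below are only examples of each family, not a recommendation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# One example from each of the four families discussed above:
# general purpose, compute optimized, memory optimized, storage optimized.
examples = ["m5.large", "c5.large", "r5.large", "i3.large"]

response = ec2.describe_instance_types(InstanceTypes=examples)
for it in response["InstanceTypes"]:
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    print(f'{it["InstanceType"]}: {vcpus} vCPUs, {mem_gib:.1f} GiB memory')
```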
38. Launch Your First EC2 Instance: Hello students. Welcome back. In this video, we will launch our first EC two instance
via the AWS console. We will also look at the
different steps and features AWS offers while launching
the EC two instances. By the end of this video, you will be able to launch EC two instances through
the management console. Watch the video till the end
and then try it yourself. Let's go to the AWS
management console type EC two in the search
bar and click on the EC two search result. Go directly into the
EC Two dashboard. As you can see, I have no
instances in my account. If we click on instances here, it will take you to
the instance page. This page contains the list of all instances in this region. Since we don't have
any instances, the list is empty. Let's launch our first
EC two instance. First thing, please
make sure you choose a region that is close
to you or your customer. Since I am in India, I will select the Mumbai region. Let's click on Launch Instances. First, we have to choose an AMI called Amazon
Machine Image. Here we have a list of
AMIs to choose from. It has Amazon, Linux, two, Max OS, Red Hat, and many more. If you look at the
dashboard on your left, you will see other options. Let's go to my AMI's here, you will see the list of
your own custom made AMI's. As of now, we do not have any AMI's created
in our account. We cannot see any
AMI's listed here. The next option is
AWS Marketplace, where we have thousands of AMI's created by
third party vendors. Then we have Community AMI's, which are developed
by AWS and its users. They are free for
everyone to use. Please note that you should always be careful
when using AMI's created by people other than
AWS or trusted AWS vendors. There could be security
holes in them. There is also a filter
to show free tire only. This will show us only
the free tire eligible AMI's if we check it for
the purpose of this demo, we will create a Linux to
instance with Amazon Linux two. Ami. This AMI is
free tire eligible, which means that
we will be able to launch the instance without
spending any money on this. Select Amazon, Linux two, it will take us to step two. Choose an instance type. Now as you can see, we have many instance
types available. Just scroll down and you will see a long
list of instances. We have an option to filter these instances based on
their family and generation. We have varieties of instances based on
different requirements. The list includes instances
with Ram optimized, CPU optimized, GPU
optimized, and so on. In this demo, we will use Two point Micro
because we don't have any special requirements and we also want to
stay in the free tire. As you can see, two point Micro is the only instance
available for free. In some regions you will see
three point Micro as well. If you are just practicing, make sure that you always choose the free tire eligible instance
type to avoid any code. We can now go ahead
to review and launch our instance immediately
just by clicking here. But before doing so, let's go to configure
instance details. This will open up a
whole lot of options. Let's explore some of the important
parameters one by one. The first option is the
number of instances. If you require more
than one instance, you can change this and create multiple C two instances
with the same configuration. We can also launch an C two
instance in launch into Autoscaling group by clicking the link Launch into
Autoscaling Group. For now, let's stick
to one instance. This is something we will
cover in the next module. Next we have the network
option to choose our VPC's. Let's choose the Pat Cup VPC that we created in
the previous lab. Select the subnet where we
want to launch the instance. Since we want our instance to be accessible
from the Internet, we should choose
a public subnet. The next option
asks us if we want AWS to assign a public
IP to this instance. Let's leave this
enabled for now. The next important
parameter is the role. Role is used to give permissions as to who can access
this EC two instance. Let's select the role we
created in our first lab. In case you want to
create a new role, you can navigate to IAM by clicking on the link
Create new IAM role, and then create a role. Click on Next to go to the
next page, Add Storage. As you can see, we already have one EBS volume selected for us. This is the root
volume that holds the operating system
and other software. For this instance, let's add a new storage
to our instance. Click on Add a new Volume, specify the parameters
of our volume. Under the device column, we can see the SDB path. It is the path where our
volume will be attached. Let's leave it as default
for now and move on. In the snapshot option, we can choose to
create a new EBS from a snapshot that
we created before. Since we have no
such requirements, we will leave it empty for now. Next is the volume side. Let's change our
volume to ten GBS. The next column is the volume. We have all SSD, HDDs, and magnetic
storage to choose from. If we look at the volume
type for root volume, you will see that we have
lesser options here. This is because only SSDs
can be a root volume. For an EC two instance, we do not have the HDD
option for the root volume. Coming back to our new volume, let's leave this as GP two, which is the general
The next option is IOPS. As you can see, we have no option to change this here. Let's go back and change our volume type: if we choose io1, io2, or gp3, we do have the option to provide IOPS. That is because IOPS are fixed for gp2 volumes at 3 IOPS per GB. Let's understand this. If we select gp2 again, you can see the IOPS value is 100, which is more than 3 IOPS per GB. Since our volume size is 10 GB, three times ten means we should only have 30 IOPS; the reason we see 100 is that gp2 has a minimum of 100 IOPS. If we change the volume size to, say, 100 GB, you can see that we get 300 IOPS. Now let's change the size back to 10 and choose the gp2 volume type. The next option is throughput, which tells us the throughput of our volume; I will cover throughput in the EBS lesson. We can change this for only one volume type, the gp3 volume type. The next option is the Delete on Termination checkbox. This is where we specify what to do with the EBS volume after the EC2 instance is terminated. If we have Delete on Termination checked, the volume will also be deleted when our instance gets terminated. Please remember, if you want your data to remain even after the instance is terminated, don't forget to uncheck this. For this demo, let's leave it as the default, which means that when the EC2 instance is terminated, the root volume will also be deleted. The next page is Add Tags. Let's skip this for now and move on to the next page: Configure Security Group.
The first option is to either create a new security group or use an existing one. Since we have not created any security group in our account, let us create one here. You can change the security group name and description as you want. The next and most important option is to configure the security group rules. As you can see, AWS has already added a default SSH, that is Secure Shell, rule. This rule allows us to SSH into our EC2 instance. But why do we still have a warning? That's because, if you look at the source column, this SSH rule is open to all IP addresses, which is an issue. Let's change this value to something like 123.123.123.123, and the warning disappears. Since we have no mission critical systems here, let us change it back to 0.0.0.0/0. It is not a best practice to allow SSH from any IP in the world; if you have production EC2 instances, make sure to change this value to your own IP address range. Let us add a new rule to allow HTTP connections to our instance. As you can see, choosing HTTP fixes the protocol to TCP and the port to 80, which are the default HTTP protocol and port. We want everyone to be able to make HTTP connections to this instance, so we will leave the source as the default here. The last column is to add a description to the rule; let's not add any description here.
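If you prefer the command line, a rough CLI equivalent of the security group we just configured would look something like this (the VPC ID, group ID, and IP address below are placeholders, not values from this lab):

    # Create the security group in our VPC
    aws ec2 create-security-group \
        --group-name demo-web-sg \
        --description "SSH and HTTP for the demo instance" \
        --vpc-id vpc-0123456789abcdef0

    # Allow SSH only from your own IP, and HTTP from anywhere
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr 203.0.113.10/32
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 80 --cidr 0.0.0.0/0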
Now that we have configured the security group for the EC2 instance, let us move on to Review and Launch. You can see all the parameters for our EC2 instance here; click on Launch. Now we need to specify a key pair. Since we don't have any existing key pair in our account, let us create one. Choose to create a new key pair and add a key pair name; for our example, we will name the key pair packtup. Click on Download Key Pair, and you will see that a .pem file is downloaded. It is important to keep this file safe, as it will be needed to access our EC2 instance. Also, remember that it cannot be downloaded again.
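As a side note, you can also create and protect a key pair from the CLI; a minimal sketch, assuming the same key pair name, would be:

    # Save the private key locally and restrict its permissions
    aws ec2 create-key-pair --key-name packtup \
        --query 'KeyMaterial' --output text > packtup.pem
    chmod 400 packtup.pem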
Click on Launch Instance, and that will take us to the final page. If you click on View Instances, you will see that AWS is creating an EC2 instance for you. The whole advantage of the cloud is that I could launch 100 instances like this in just a few clicks. Cloud computing is very flexible and gives you quick access to computing power whenever you need it.
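To give you a feel for that flexibility, here is a hedged sketch of launching several identical instances from the CLI (the AMI, subnet, and security group IDs are placeholders):

    # Launch three t2.micro instances with the same configuration
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.micro \
        --count 3 \
        --key-name packtup \
        --subnet-id subnet-0123456789abcdef0 \
        --security-group-ids sg-0123456789abcdef0 \
        --associate-public-ip-address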
As you can see, the instance is running. Now if you click on the checkbox, you will find all the instance details: the public IP, security groups, health checks, and so on. Whenever you need any details about EC2 instances, this is where you will look for them. If you click on the Volumes option in the left dashboard, you can also see the two EBS volumes that we created: an 8 GB root volume and a 10 GB extra volume that we created separately. With this, we have come to the end of this lab. In the next lab, we will install httpd on our EC2 instance and see a small httpd website. Students, that's how we launch EC2 instances. In the next lesson, we will continue this discussion and see how we can SSH into our instance, and so on. Thanks.
39. SSH EC2 Instance, Install Httpd: In the previous video, we created our first EC2 instance and looked at multiple parameters and options while creating the instance. In this video, we will SSH into the EC2 instance using a Linux or Mac operating system, install the httpd service, and start a test application on EC2. We will also see how to terminate the instance to avoid any unexpected costs. If you are wondering what SSH is, SSH is a way to control a remote machine using the command line. By using SSH, we can log into the EC2 instance and perform any kind of operation, just as we do on our laptops.
Let's start by going to the AWS console. Now, let's type EC2 in the search bar and click EC2 to go directly into the EC2 dashboard. As you can see, we have our instance in the running state. If you remember the previous lab, I told you to preserve the .pem file that was downloaded while creating the instance. Note that without the .pem file we cannot SSH into the instance; in case you have lost or deleted the file, you will need to recreate the instance with a new key pair. The next important thing that you should ensure is that your security group has rules to allow SSH on port 22 and HTTP on port 80. Let us find the public IP of this instance. Click on the checkbox here, and now you can see all the details about the selected EC2 instance. Here is my public IP; let us copy this value. Now let us go ahead and SSH into our instance. For this you need to open a terminal and type in the command ssh ec2-user@ followed by the public IP address. ec2-user is the default username for all the Amazon Linux AMIs.
Let us run this command. As you can see, the command failed with permission denied. This is expected, as we don't want anyone to be able to access our EC2 instance just from its public IP. This is where we will use the .pem file we downloaded. We will go to the folder where the file is stored; as you can see, I have my .pem file in this folder. Now we will run the same command, but we will also add the key file with the -i option, so the command is ssh -i with the key file, then ec2-user@ the IP address. We are doing the same thing as before, but here we are also telling AWS that we have a key file to get into the instance. Press Enter. Now we get another permission denied error. This is an important stage and a popular question in AWS exams: why do we get this access denied? It says warning, unprotected private key file. When we first download the file from AWS to a Linux machine, it has the permissions 0644. As you can see here, these permissions are too open for a key file, and since the permissions are inappropriate, AWS does not allow you to SSH into the machine. The fix is to run chmod 0400 on the key file, packtup.pem. This will change the permissions of the file from 0644 to 0400, which is the appropriate permission. Let us run the same command again. As you can see, the command now runs successfully and we are logged in to the EC2 instance.
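Putting those two steps together, the full sequence looks roughly like this (the IP address is a placeholder for your instance's public IP):

    # Restrict the key file, then SSH in as the default Amazon Linux user
    chmod 0400 packtup.pem
    ssh -i packtup.pem ec2-user@203.0.113.25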
The first thing to do when we log into the EC2 instance is to run the command sudo yum update. This command will update any outdated packages in the instance, if there are any. To install httpd, run the command sudo yum install httpd -y. Now we have httpd installed on our EC2 machine, but we also need to start the httpd service. To start it, we need to run another command: sudo systemctl start httpd. This command has no output. Let us check the status of httpd by running sudo systemctl status httpd. As you can see, we have a process running.
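For reference, here are those commands in one place as you would type them on the instance:

    sudo yum update -y              # refresh outdated packages
    sudo yum install -y httpd       # install the Apache httpd web server
    sudo systemctl start httpd      # start the service
    sudo systemctl status httpd     # confirm it is running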
Let's now go to the EC2 dashboard and select the EC2 instance. We need to copy the public IP address and paste it into our browser. You can now see the default httpd page served from your EC2 instance. Let us now terminate the EC2 instance. You can do this by clicking on the instance, going to Actions, clicking on Manage Instance State, and terminating the instance. This will terminate your EC2 instance. If you remember, the extra EBS volume we attached to our instance in the previous lab had the Delete on Termination flag unchecked. This means that even though our instance is being terminated, one of the EBS volumes will remain. Let us have a look at the Volumes dashboard. You can see both the root and the additional volume. If you refresh after some time, the root volume, which was 8 GB, disappears from the list, but we still have the 10 GB volume in the dashboard. Let us go ahead and delete that EBS volume as well. Let us now check the security group. As you can see, the security group we created has also not been deleted. Let us not delete it; security groups are free of charge, and there is no harm in having a security group handy for our upcoming labs. Similarly, the key pair packtup has not been deleted, and you can keep it in your account without any charges. In this session, we were able to successfully SSH into an EC2 instance, install httpd, and access it from a browser. That's all for this session.
40. Creating a Windows EC2 Instance: In the last two videos, we launched a Linux EC2 instance, installed httpd, and accessed it from our web browser. In this video, we will walk you through the steps to launch a Windows EC2 instance and how to access it. This is going to be quite similar to the Linux EC2 hands-on demo; only a couple of configurations will be different. Let's get started.
Let's go to the AWS management console, type EC2 in the search bar, and click on the EC2 search result to go directly into the EC2 dashboard. As always, it's important to make sure you choose a region close to you or your customer; I will select the Mumbai region. Let's click on Launch Instances. First, we have to choose an AMI, or Amazon Machine Image. This time we are going to select a Windows AMI. Let's type Windows in the search option and press Enter. We can see a Windows AMI here, and this AMI is free tier eligible, which means that we will be able to launch it without spending any money. Let's select it, and it will take us to step two: Choose an Instance Type. The AMI is the only difference between launching Linux and Windows based EC2 instances; the rest of the configuration will be the same, or you can change other things if you want. As you can see, we have many instance types available. Again, we will use t2.micro because we do not have any special requirements and we also want to stay in the free tier. Let's click Configure Instance Details. As you know, here we provide all instance related configurations. Let's go with the default configuration and just double check that Auto-assign Public IP is enabled, as we need a public IP to access the EC2 instance. Click on Next to go to the next page, Add Storage. As you can see, we already have one EBS volume selected for us; this is the root volume that holds the operating system for this instance. The next page is Add Tags. Let's skip this for now and move on to the next page, Configure Security Group. Let's change the security group name and description as you want. You can see we have an RDP type rule instead of SSH, and the port range is 3389 in place of 22. This is because Linux and Windows use different technologies to establish a remote connection. We will not have to do anything else here, so let us move on to Review and Launch. You can see all the parameters for our EC2 instance here; click on Launch. Now we need to specify a key pair. I will use the same key I created last time and acknowledge that I have access to the key. Click on Launch Instance, and that will take us to the final page.
If you click on the instance, you will see AWS is creating an EC2 instance for you. The instance state is pending, which means that the instance is being created; it will take a couple of minutes to change state to running. As you can see, the instance is running now, and if you click on the checkbox, you will find all the instance details: the public IP, security groups, health checks, and so on. Whenever you need any details about EC2 instances, this is where you will look for them. Let's select the EC2 instance and, at the top, click Connect. Here we have three options to connect to this EC2 instance. I will select the second tab, RDP client. Here you can download the remote desktop file that we will use to RDP into the instance. Now click on Get Password. You can either copy and paste the .pem key content here or upload the file by clicking Browse. I will upload it and click Decrypt Password. This is the password I will use to connect to the EC2 instance, so copy the password from here. Now let's open the RDP client; depending on your laptop or computer operating system, it will have a different interface. I am using a Mac. I will paste the password here; it has already selected a user for me, and I will continue. I will again click on Continue, and now I am in my EC2 Windows machine. That is pretty much what you do to launch a Windows instance. With this, we have come to the end of this lab. That's all for this session.
41. ECS Elastic Container Service: Hello, students. Welcome back. In this video, we will learn about one of the compute services of AWS, which is Elastic Container Service, also known as ECS. We will understand what ECS is, its types, and most importantly, its use cases. Let's get started.
Before we understand Elastic Container Service, let us quickly understand what a container is, because ECS is built on container technology. When we develop applications, they work on our computer, but they break as soon as we move these applications to another machine. This can be due to multiple reasons: it may be a different operating system, a different version of a dependency, et cetera. How do we resolve this issue? Well, containers are the solution. Application containers are standard units that hold application code and dependencies, configuration, networking, and sometimes even a part of the operating system. Essentially, all the software required to run your application is put inside a container. Containers make it possible for you to run your applications on any machine; you can rest assured that if it works on one system, it will work on all systems. Docker is the most widely used software platform that allows its users to create applications and containers. You can install Docker on your machine and make a container for any application. We don't need to discuss Docker in detail at this point.
Let's move on and understand what ECS is. Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. It means ECS makes it easier to use and manage Docker containers. When you use ECS, you do not need to install, operate, or scale your container orchestration yourself; ECS takes care of all these tasks for you. It deeply integrates with the rest of the AWS platform to provide a secure, easy to use solution for running container workloads in the cloud. To understand ECS, you should know some terms frequently used in ECS. The first is the task definition. A task definition is a JSON script that holds multiple parameters to run your application; in simple words, a task definition tells ECS how to run a container. For example, you can define exactly how much RAM and CPU a container will need, or on which port your application should start. ECS will then ensure that all your requirements are met when it runs a container based on your task definition; a running container is known as a task. The next term is the task role. Your containers can be performing multiple operations on AWS. For example, a container might need to read messages from an SQS queue or interact with S3 or any other AWS service; therefore, you need to give your tasks permission to do this. This is done using the task role. A task role is an AWS IAM role that is defined in the task definition, and it is used to give ECS tasks access to AWS.
Now let's look at how this all looks in ECS. You need an ECS cluster to run your container application. This ECS cluster is made up of two or more EC2 instances called container instances. Then we have services that span across these EC2 instances; that's where ECS creates tasks, or Docker containers. One of the first things to do when using ECS is to provision and maintain the infrastructure; ECS then takes care of starting and stopping containers for you. You can do this in two ways, using the ECS launch types, which are the Fargate launch type and the EC2 launch type. In the EC2 launch type, you configure and deploy EC2 instances in your cluster to run your containers, and ECS manages the containers on these EC2 instances. To use the EC2 launch type in ECS, you have to create EC2 instances manually or use Auto Scaling groups. The Fargate launch type is a serverless, pay as you go option. You can run containers without needing to manage your infrastructure. We can use the Fargate launch type and forget all about creating EC2 instances; we tell ECS the number of containers to run and how much RAM and CPU each container should have, and ECS Fargate ensures that we always have enough resources to launch containers. That's all for this session.
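Before we move on, here is a hedged sketch of what these pieces can look like in practice. All names, the account ID, the role ARN, and the subnet and security group IDs are placeholders, and this is a minimal Fargate-compatible example rather than a recommended production setup. Save the following as task-definition.json:

    {
      "family": "demo-web-app",
      "requiresCompatibilities": ["FARGATE"],
      "networkMode": "awsvpc",
      "cpu": "256",
      "memory": "512",
      "taskRoleArn": "arn:aws:iam::123456789012:role/demo-task-role",
      "containerDefinitions": [
        {
          "name": "web",
          "image": "nginx:latest",
          "portMappings": [{ "containerPort": 80 }]
        }
      ]
    }

Then register it and run one task on Fargate:

    aws ecs register-task-definition --cli-input-json file://task-definition.json
    aws ecs create-cluster --cluster-name demo-cluster
    aws ecs run-task \
        --cluster demo-cluster \
        --launch-type FARGATE \
        --task-definition demo-web-app \
        --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'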
42. AWS Elastic Beanstalk: Hello, students. Welcome back. In this video, we will learn about one of the compute services of AWS, Elastic Beanstalk. I will cover what Elastic Beanstalk is, its features, and most importantly, its applications. Let's get started.
AWS is a cloud computing leader with close to 200 services, and to deploy applications on AWS and manage them efficiently, you first have to know about multiple AWS services and how to use them in a cost effective manner. This is fine when you have multiple applications or one big application. But what if you only have a web application? Spending days learning AWS only to deploy a web application is not a very efficient approach, and even after you learn AWS, you have to go through the effort of maintaining your infrastructure on AWS. As a developer, you don't want to worry about infrastructure, scalability, configuring connections, databases, and more; you want to quickly deploy and test applications. This is the problem that Elastic Beanstalk solves.
AWS Elastic Beanstalk is an easy to use service for deploying and scaling web applications; it is also known as EB, and it is a platform as a service offering from AWS. To use Elastic Beanstalk, you create an application, upload an application version in the form of an application source bundle, for example a Java WAR file, to Elastic Beanstalk, and then provide some information about the application. Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After your environment is launched, you can then manage your environment and deploy new application versions. After you create and deploy your application, information about the application, including metrics, events, and environment status, is available through the Elastic Beanstalk console, APIs, or command line interfaces, including the unified AWS CLI. As a developer, if you use EB, you can deploy applications without provisioning the underlying infrastructure; AWS will ensure that your infrastructure is ready and highly available. Internally, EB reuses all the components we have seen before, like RDS, load balancers, and EC2, to run applications, but you can use EB without any knowledge of these internal components. It is a free service on its own, but you pay for the underlying infrastructure provisioned by Beanstalk.
Now let's have a look at how Elastic Beanstalk works in AWS. We have an AWS account and a VPC within it, and here we will create an Elastic Beanstalk environment. As we know, Beanstalk will create all the underlying infrastructure, like EC2 instances, load balancers, databases, security groups, et cetera. As a developer, you only care about your code; developers want to make sure that they spend their time writing great code and testing it, and have all that underlying infrastructure managed for them. In this case, we have a developer who has a WAR file with their code. It does not really need to be a WAR file; it could be a ZIP file or GitHub code as well. They can deploy this WAR file using the EB management console or the EB CLI. Beanstalk will then deploy the application and do the rest of the configuration, like the elastic load balancing, the auto scaling group, the instances, and even a database. That's all. Now application scaling, OS upgrades, patching, logging, metrics, and everything else is taken care of by Elastic Beanstalk, and as a developer you can focus on your application and business features.
Now let's have a look at Elastic Beanstalk features. It supports different programming languages like Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. It integrates with Amazon VPC and launches AWS resources, such as EC2 instances, into the VPC. It integrates with AWS Identity and Access Management and helps you securely control access to your AWS resources. You can also integrate CloudFront: CloudFront can be used to distribute the content in S3 after an Elastic Beanstalk application is created and deployed. It supports running RDS instances in the Elastic Beanstalk environment, which is ideal for development and testing. Elastic Beanstalk is an excellent choice for deploying your application to the AWS cloud within minutes. You do not need any experience or knowledge of cloud computing to get started with Elastic Beanstalk: you create EB application environments, specify some parameters, and you are done. That's all for this video.
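As a small add-on to this lesson: the EB CLI mentioned above is typically used from your application folder. A hedged sketch of the workflow, with placeholder application, environment, and platform names, might look like this:

    # Initialize the app, create an environment, and deploy updates
    eb init -p python-3.9 my-demo-app --region ap-south-1
    eb create my-demo-env
    eb deploy          # push a new application version after code changes
    eb status          # check environment health
    eb terminate my-demo-env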
43. Serverless Computing: Hello students. Welcome back. In this video, we will understand the serverless concept and why it is so popular in cloud computing. Let's get started.
In most of the services we have seen so far, the development and deployment process has been the same: develop your application, allocate infrastructure, and then deploy and run the application on this allocated infrastructure. That is how it should be; you always need an underlying server to run your application code, and there has always been a need to provision and maintain that infrastructure. This is where serverless comes into the picture. Serverless has been the catchword in the cloud computing world, and many applications now follow a serverless architecture. The term comes from the idea that the infrastructure used to run your applications no longer needs to be provisioned or maintained by you or your team. Serverless can also be a confusing term, because in serverless architectures servers do exist. It does not mean that your application has no underlying server; it is just that there is no need to worry about managing the server. If you are an end user, it might as well not exist, because you never have to care about the servers, hence the term serverless. But there is a server, and its provisioning and maintenance are entirely taken care of by the provider. If we put it in AWS terminology, you no longer need to create an EC2 instance and configure and maintain an operating system for your application. That means we delegate all of the responsibility for managing the underlying server, and it is all taken care of. The server can scale automatically and charge you according to your usage. Serverless services have become extremely popular with many modern cloud applications.
Now let's look at the benefits of serverless. With serverless, there are no instances to manage, you don't need to provision any hardware, and there is no management of operating systems or software; the capacity provisioning and patching are all handled for you automatically. It can also be very inexpensive to run serverless services: you are only charged for what you use, and code only runs when back end functions are needed by the serverless application. Serverless scales automatically and elastically, and it also has built in high availability. You just deploy the code as a developer, and it automatically scales up as needed. Suppose a function needs to be run in multiple instances; in that case, the servers will start, run, and terminate as required, often using containers. As a result, a serverless application can handle an unusually high number of requests, whereas a traditionally structured application with a limited amount of server space gets overwhelmed when there is a sudden increase in usage. That's all for this video.
44. AWS Lambda: Hello, students. Welcome back. In this video, we will understand AWS Lambda. Let's get started.
Lambda is a serverless service that lets you run code without provisioning or managing servers. Unlike EC2 instances, where our application needs to be running continuously, AWS Lambda allows us to write functions that run on demand. AWS Lambda functions can be triggered by events within or outside of the AWS account, and they do not have to keep running. For example, you can write a function to do a file processing task, and it will be triggered when data is uploaded into the S3 service. Every time a file is uploaded to S3, you now have a function that will automatically run and process this file. Or you can have a Lambda function that fetches values from a database for your front end application. There are multiple ways to use Lambda functions; they can be used for anything from running small individual tasks to replacing entire back end applications. You should also note that AWS Lambda functions are only meant for short executions: you can only run code in Lambda for a maximum of 15 minutes, which is a reasonable amount of time for most use cases. Please note that these limits keep changing over time, so I would recommend checking the AWS documentation for recent updates. But if you have a single process that takes more than 15 minutes to run, you will have to deploy it to a server like an EC2 instance or a container.
Let us have a look at some other features of AWS Lambda. The first feature is automatic scaling. When we use EC2 instances, even though we have Auto Scaling groups to scale our infrastructure, we still have to create scaling policies manually; we have to tell AWS to add or remove instances based on CloudWatch alarms. In comparison, AWS Lambda scales automatically. Let's take the same example of the file processing Lambda that runs every time a file is uploaded to S3. What happens when you upload, let's say, 500 files? You will have 500 Lambda functions running your file processing task. This is the power of serverless: you need not worry about scaling your infrastructure, and AWS ensures you always have enough resources to run your code. Note here that AWS has a service quota of 1,000 concurrently running Lambdas per region, so in our example you might see some Lambda failures if you upload more than 1,000 files to S3. This limit has been placed to avoid misuse of your account and prevent any unwanted charges. If your use case requires more than 1,000 Lambda functions running concurrently, you can get this limit increased through AWS Support. The next feature is pay per use pricing. Lambda's next and most lucrative feature is that it is pay per use. This means you pay for the time your code runs, and there are no charges when your code is not running. Suppose your file processing Lambda runs for 1 minute: you only pay the compute cost of 1 minute. If you have ten Lambdas running for 1 minute, you pay for 10 minutes of compute time, and so on. The next feature is RAM. Lambda functions can get resources of up to 10 GB of RAM. It is also important to note that when you increase the RAM of your Lambda function, the CPU and networking capability of the Lambda will also improve. This is a trending exam topic: how will you increase the CPU and network of a Lambda? The answer is that you need to increase the RAM.
The next feature is integration. As I said before, AWS Lambda has a lot of use cases, and it has been integrated with a lot of AWS services. Let us have a look at some of the main integrations and their use cases. The first is S3: as in the example we have taken so far, you can have a Lambda run on different types of events in S3. The next is DynamoDB: you can create a trigger so that a Lambda runs whenever an event happens in our DynamoDB table, which is a database service. The next is API Gateway. Lambda and API Gateway integration is one of Lambda's most widely used use cases. API Gateway is an AWS service that is used to create REST APIs in AWS; you can put a Lambda behind an API Gateway and create a REST API endpoint to call your Lambda function. The next is SNS: you can have a Lambda function that reacts to notifications in an SNS topic. SNS is further integrated with possibly all kinds of events in the AWS console, so with this integration, all of these events can be used to trigger Lambda functions. The next one is SQS: Lambda can be used to process messages in SQS queues. The last one is CloudWatch Events: you can also run Lambda functions with CloudWatch Events.
Let us finally look at some small examples of Lambda functions. As you can see, we have our front end application running on EC2 instances. Then we have an API Gateway that calls our Lambda functions for the back end. This Lambda function has also been integrated with RDS, and it fetches and stores data in the RDS database. On the other side, we have the front end application pushing tasks to SQS queues, which have been integrated with another Lambda for message processing. You can also create a CloudWatch event that triggers at regular intervals; since you can trigger a Lambda from a CloudWatch event, you can have a Lambda function that runs regularly. This is a popular use case and exam question: how do you run a Lambda as a cron-like scheduled job? The answer is CloudWatch Event triggers. That's all for this session.
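As a quick add-on: here is a hedged sketch of creating and invoking a function from the AWS CLI. The function name, role ARN, and handler are placeholders, and function.zip is assumed to contain your handler code:

    # Create the function from a local zip package
    aws lambda create-function \
        --function-name file-processor \
        --runtime python3.12 \
        --handler app.handler \
        --zip-file fileb://function.zip \
        --role arn:aws:iam::123456789012:role/lambda-exec-role

    # Invoke it on demand and inspect the response
    aws lambda invoke \
        --function-name file-processor \
        --payload '{"bucket": "demo-bucket", "key": "report.csv"}' \
        --cli-binary-format raw-in-base64-out \
        out.json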
45. Data Types: Hello, students. Welcome back. As you know, this module is all about databases. But before we get into databases, we will quickly understand data and its types. We decide on an appropriate database based on the nature of our data; therefore, it is very important to understand data and its types. Let's begin with data source types. There are three types of data sources: structured, unstructured, and semi-structured.
Let's understand structured data first. It is data that has predefined data types and is stored in a specified format. Let's understand the predefined data type and specific format with a phone book example. In this table, you can see that the top row has different names which define what kind of data is present in the different columns. For example, the serial number represents the number of data sets we have, and the first name column will have the first name of every entry in our data. This is the format: if we want to add any other personal detail to this data, then we will have to create a new column for it. Let's say you want to add an address as well; in that case, you need to add an additional header named address, and only then can you add addresses to this data. Basically, this is a specific format. Now let's understand the data types. You can see that we have numbers in the serial number and phone number columns, and the first name and last name columns have characters. As per the defined columns, you are expected to enter numbers in the serial number and phone number columns and characters in the first name and last name columns. That means it is defined what kind of data we can enter into the different columns. Structured data is stored as a series of data values in defined tables managed by a database engine such as MySQL. This data is highly structured; that's why it is very easy to analyze and can be used in highly complex queries.
The next data type is unstructured data. Unstructured data has an internal structure but does not follow a predefined data model. Consider a video, for example. It is also data, but the way it is stored is different from structured data. This type of data is usually in the form of files that are stored in a storage service such as Dropbox. Dropbox is like a storage unit where you can store text files, photos, videos, et cetera. Unstructured data as a whole does not directly give you meaningful information. Examples of unstructured data are text messages, documents, videos, photos, and other images. These files are not organized; they can only be placed into a file system or object store. If you want to get meaningful information out of this data, you need to process it, and you require special tools to query and process unstructured data.
The last data type is semi-structured data. Semi-structured data is a form of structured data; the only difference is that semi-structured data is flexible and can be updated without changing the schema for every single record in a table. This is an example of semi-structured data. As you can see here, there are two sets of data and they look the same, right? But if you notice, the second data set has one additional piece of information, the address. Semi-structured data allows a user to capture data in any structure as the data evolves and changes over time. Semi-structured data is stored in JSON documents that are loaded into a database engine such as MongoDB. This data is highly flexible because the structure is not strict and can be changed as needed within the table. You can also analyze semi-structured data. That's all I wanted to cover in this session.
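To make the semi-structured example concrete, here are two illustrative JSON records of the kind described above; the second one simply carries an extra address field, and no schema change is needed for that (all names and values are made up for illustration):

    { "id": 1, "firstName": "Shyam", "lastName": "Singh", "phone": "9800000001" }
    { "id": 2, "firstName": "Kishore", "lastName": "Rao", "phone": "9800000002",
      "address": "12 MG Road, Mumbai" }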
46. Relational Databases: Hello students. Welcome back. In the last video, you learned about three types of data: structured, unstructured, and semi-structured. As the nature of these data types is different, they are stored differently. In this video, we are going to understand where and how these data types are stored. By the end of this video, you will know why we store structured data in a relational database.
Let's start with an example. We often use an Excel sheet to store data, right? That's because it is easier to organize and manage the data in Excel than in many other tools. This type of data is known as structured data; we discussed this in the last session. Excel sheets are fine for personal data, but if you think about big businesses, that's a different world altogether. They deal with huge amounts of data, which is very challenging to store and manage. When data grows, it often leads to data inconsistency and redundancy as well. Suppose we have a spreadsheet that stores student course registration data. Now take a look at Shyam Singh: the student with student ID 101 is stored more than once. This is called data redundancy. Another problem could be, let's say, Shyam changes his last name. In that case, his last name must be changed everywhere in our data; otherwise, it could lead to data inconsistency. For small data systems, this could be easy to solve, but when the data system is huge, it is difficult to manage the data. In order to avoid these problems, we use database technology. A database is an organized collection of data, so that it can be easily accessed and managed. Now let's say you want to send an email to Kishore. For that, there is another sheet that holds student details like email ID, phone number, et cetera. From the first sheet, we know that Kishore is registered in the HR course, but we cannot find Kishore in the student details sheet. This means our data is inconsistent. We can solve this problem with some rules, like: a student cannot register for a course without completing student registration. This way we will have all the sheets populated for each student. The point here is that the two Excel sheets have a relation and follow certain rules to organize data. This is known as relational data, which brings us to the concept of a relational database.
A database is an organized collection of related data; it is an organized collection because in a database all datasets are described and related to one another. Data is organized into relations, like tables in a relational database. Each table has a set of fields that define the data structure, or schema, stored in the table. A record is one set of fields in a table: you can think of records as the rows, or tuples, of the table and fields as the columns. In this example, you have a table of student data, with each row representing a student record and each column representing one field of the student record. A special field, or a combination of fields, that determines a unique record is called the primary key. For example, the student ID: it is unique for each student. This key is usually the unique identification number of the records. It uses a structure that allows us to identify and access data in relation to another piece of data in the database. As you can see, each student has a unique ID that can be used to find the record of the same student in the student registration table. As you see, these data have a certain structure, which is known as the schema. Relational databases store structured data; they allow you to rapidly collect and query data using a structured data model. The use of tables to store data in columns and rows makes it easy to access and manage data. AWS offers a database solution for relational databases that we will cover in the Introduction to RDS module. That's all for this session.
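If you want to see the student example above as an actual relational table, here is a minimal sketch in MySQL; the database, table, and column names are just for illustration, not part of any lab:

    -- save as create_students.sql and run: mysql -u root -p < create_students.sql
    CREATE DATABASE IF NOT EXISTS college;
    USE college;
    CREATE TABLE students (
      student_id INT PRIMARY KEY,        -- unique for every student
      first_name VARCHAR(50) NOT NULL,
      last_name  VARCHAR(50) NOT NULL,
      email      VARCHAR(100)
    );
    INSERT INTO students VALUES (101, 'Shyam', 'Singh', 'shyam@example.com');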
47. NoSQL Databases: Hello, students. Welcome back. In this video, we will understand NoSQL databases. Let's start with a quick review of the previous video. We understood that the data within a relational database is organized in a way that the values in one table can define or change the values in another table; in other words, a relationship exists between the different tables of data. Let's take the same example of a relational database storing student course registration details. One table contains the details of students, and another defines the courses which students registered for. In a relational database, you can run a single SQL query that returns a result showing all of the students who registered for accounting, even though the data is in two tables. This is the superpower of relational databases.
Now let's talk about non-relational databases. This type of database stores data differently. Due to the nature of the data, we store semi-structured data in a non-relational database. Now let's take the same student course registration database example. If you have to store student course registration data in a non-relational database, every piece of information about a student, including the associated course, would be stored in a single item within the database table. Now let's deep dive and understand how data is stored in a non-relational database. Let's say after one year, the college decides to add the 12th-standard grade to the student registration table in the database. How would each database type need to change? Well, in the case of relational databases, you need to update the schema to add the grades of students. This means adding a new column for the grade. Once you add the new column, it will be empty for the existing data, right? This can cause problems, because as a rule, relational databases require you to enter values in every column if that column is marked as required for the database. You need to create a process to add the grade for every existing record in the table, and once this process is completed, you can begin to add new records with the grade. All right, any guesses on what you need to do in a non-relational database? Well, no need to guess: all you need to do is create new items with the new attribute. You might go back and look at the existing records, but you are not required to do anything with them. This is one of the advantages that non-relational databases have over relational databases. Non-relational schemas do not require you to define new attributes of data before you use them, while in a relational database you have to update the schema before you can add new fields of data. When it comes to storing and dealing with semi-structured or unstructured data, the choice is often a non-relational database. That's all for this lesson on non-relational databases.
48. On Premise vs Cloud Database Hosting: Hello, students. Welcome back. In this video, you will learn how we used to manage databases in the on premises scenario, and what the different database options are in the AWS cloud. Let's start with on premises databases. They operate on hardware that your organization owns and maintains. These systems are handled by a team of database administrators, DBAs, who are responsible for the entire setup and working of the database. In the on premises case, the company deploys several servers and networking devices to handle the system's peak load. Once they get the server from the infrastructure team, they install the operating system and prepare an OS patching schedule. Next is the database software installation; again, they need to plan and apply database patches. Once the database is ready, it can be used by the application. Now, these databases need to be managed, updated, upgraded, and optimized over time. Considerable effort also goes into database backups and the high availability setup for business critical applications. In the case of a high availability configuration, the DBAs in your team need to follow the same steps for another DB server that will act as a standby for failover scenarios. On top of that, you are also responsible for scaling your database server now and then. That's a lot of work.
Now, when it comes to the cloud, we have two options. The first one is like an on premises database, where you rent a virtual machine that comes with networking and a server that is managed and taken care of by AWS; however, the rest of the things will be managed by yourself. Again, it requires effort and a team of DBAs to take care of OS patches, DB software installation and patching, DB backups, high availability, scaling, et cetera. The second one is Amazon RDS. Amazon RDS is an AWS managed database where you get the database out of the box in a few clicks, and that is what we will cover in the next lesson. That's all for this session.
49. Amazon Relational Database Service: Hello students. Welcome back. In this video, you will learn about the Amazon Relational Database Service, commonly known as Amazon RDS. Let's get started with an overview of Amazon RDS. It is a relational database service that lets you set up, operate, and scale a relational database in the cloud. As you know, managing a database either on premises or on an EC2 instance is not an easy task. Amazon RDS is designed to minimize the effort involved in managing relational database tasks such as hardware provisioning, database setup, patching, backups, and so on. You just have to select the database that you want to launch, and you will have a database ready in just a few clicks.
Let's understand this with an example. Suppose you have a team of four people and you want to launch an application that will be backed by a MySQL database. You install MySQL on an EC2 instance. Since we know that it requires a lot of work to set up the OS, database, backups, and patching, it is likely that the development work might fall behind, which could further delay the application launch, as only two people can focus on development. Imagine the same example again, but this time with Amazon RDS. As RDS is a managed service, AWS will take care of all the database related work for you, and your developers can focus on application development. RDS was designed to help you reduce database management costs. It is a managed service that automates the provisioning of databases, and AWS will do the patching of the operating system. You will have continuous backups and restore options with point in time restore. You will also have monitoring dashboards to see if your database is doing well. You can scale reads by creating a read replica and improving the read performance. You have a way to set up Multi-AZ deployments to make sure that your application has a plan for disaster recovery in case an entire availability zone goes down. Finally, you can also set up maintenance windows for upgrades, and you can scale vertically or horizontally. The only thing that you cannot do with an RDS database is SSH into your RDS database instance.
Now let's look at some features and elements of Amazon RDS. Let's first have a look at RDS availability. Amazon RDS provides high availability and durability through the use of Multi-AZ deployments. This means that Amazon RDS creates multiple instances of the database in different availability zones. In case of any infrastructure failure, Amazon RDS automatically switches to the standby in another availability zone, and database operations resume as soon as the failover is complete. Since Amazon RDS uses a DNS service to identify the new master instance, you do not need to update your database connection endpoint. That's a very good feature. The second one is the RDS instance type. When you build your first Amazon RDS database, you have to make a few key decisions. First, you need to decide your database instance type, which determines your database resources like CPU and memory; it's somewhat similar to the AWS EC2 instance type. The next is the RDS database engine: the kind of database engine you want to run. You can choose from PostgreSQL, MySQL, MariaDB, Oracle Database, Microsoft SQL Server, and Amazon Aurora. Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud; it is designed and developed by AWS to provide enterprise class database performance. Each database engine has its own unique characteristics and features, and you can select one based on your application and business requirements.
Now let's have a look at RDS pricing. One of the biggest benefits of Amazon RDS is that you pay as you go. The Amazon RDS billing process consists of two parts. The first is the instance cost: you pay for the instance that hosts the databases. There are two pricing models, On-Demand and Reserved. On-Demand instance pricing lets you pay for compute capacity on an hourly basis; this suits a database that runs intermittently or is a little unpredictable. Reserved instance pricing is great when you have a good understanding of the resource consumption of your database; with this type of instance, you can commit to a one or three year term and receive a significant discount over On-Demand pricing. The second part is the storage cost: you pay for the storage and I/O consumed by your database. Storage is billed per gigabyte per month, and I/O is billed per million requests. I hope you now have a good understanding of Amazon RDS; watch the video again if you have any doubts. That's all for this video.
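As a small add-on before the hands-on lesson: creating a basic RDS instance can also be done from the CLI. This is only a hedged sketch with placeholder names and a throwaway password, using the account's default VPC and subnet group:

    aws rds create-db-instance \
        --db-instance-identifier demo-mysql-db \
        --engine mysql \
        --db-instance-class db.t3.micro \
        --master-username rdsmaster \
        --master-user-password 'ChangeMe123!' \
        --allocated-storage 20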
50. Amazon DynamoDB: Hello, students. Welcome back. In this session, we are going to understand the AWS NoSQL database offering, DynamoDB. Let's get started with a DynamoDB overview. It is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB is one of the best options to store NoSQL, semi-structured data. As this is a fully managed database, you don't have to deal with hardware provisioning, setup and configuration, replication, software patches, or cluster scaling when you use DynamoDB. Another great thing about DynamoDB is that it offers encryption at rest by default, which eliminates the operational burden and complexity involved in managing encryption keys to protect sensitive data. If that does not make sense yet, don't worry; just remember that it supports encryption by default. I will cover encryption in detail in the AWS management module.
We create a database table in DynamoDB to store and retrieve our data. It's important to understand this, so let me simplify it. When you create an Amazon RDS instance, you create a database inside it, and then inside the database you create a table. But in the case of DynamoDB, you create the table directly inside DynamoDB: when you go to the DynamoDB dashboard, you simply create a table. A DynamoDB table can scale horizontally; it distributes the data across different backend servers to make sure that you can scale whenever required and get excellent performance. Now let's understand how we store data inside DynamoDB. A DynamoDB table is made up of three different components: tables, items, and attributes. First, you have the table itself, and that's where all of your data is stored. Within the table, a table is a collection of items, and each item is a collection of attributes. Items are like rows in a relational database; for instance, the student with student ID 101 is an item. Each item has a series of attributes associated with it: that is the data we see in these fields, such as the last name, first name, and email. DynamoDB uses primary keys to uniquely identify items in a table, and secondary indexes to provide more query flexibility. You can use DynamoDB Streams to capture data modification events in DynamoDB tables. DynamoDB table performance is based on throughput capacity, and you can scale the throughput capacity of your tables up or down without any downtime or performance degradation. You can use the AWS management console to monitor resource utilization and performance metrics. DynamoDB provides an on demand backup capability, which allows you to create full backups of your tables for long term retention and archival for regulatory compliance needs. You can also create on demand backups and enable point in time recovery for your Amazon DynamoDB tables; point in time recovery helps protect your tables from accidental write or delete operations, and with it you can restore a table to any point in time during the last 35 days. DynamoDB also allows you to automatically delete expired items from tables, which helps you reduce storage usage and the cost of storing data that is no longer relevant.
Now let's look at some use cases for DynamoDB. Due to its flexible data model and reliable performance, it is a great fit for mobile applications, web applications, gaming applications, and ad tech and IoT applications, and you could use DynamoDB for other applications as well. Students, I hope you now understand Amazon DynamoDB's features and applications.
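As a quick add-on, here is a hedged sketch of creating the kind of table described above from the CLI and writing one item to it (table and attribute names are illustrative):

    # Create a table keyed on StudentId, using on-demand capacity
    aws dynamodb create-table \
        --table-name Students \
        --attribute-definitions AttributeName=StudentId,AttributeType=N \
        --key-schema AttributeName=StudentId,KeyType=HASH \
        --billing-mode PAY_PER_REQUEST

    # Put a single item into the table
    aws dynamodb put-item \
        --table-name Students \
        --item '{"StudentId": {"N": "101"}, "FirstName": {"S": "Shyam"}, "Email": {"S": "shyam@example.com"}}'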
51. Create RDS Instance: Hello, students. Welcome back. In this session, we will be
creating a RDS instance. We will see what other resources
and configurations are needed to spin up an
RDS SQL instance. Let's go to the AWS console now. Type RDS in the search bar. Click on the RDS option and it will take you to
the RDS dashboard. The first thing you need
to do when creating RDS instances is to
create a submit group. A subnet group is simply
a logical grouping of subnets where you
provision the Dias instance. Let us go to the subnet
groups page by clicking here. As you can see, I have no
subnet group in my account. Click on Create DB Subnet Group assign a name to
the subnet group. You will need to add a description here as
it is a required field. I will add Submit
Group for my SQL DB. Next, select BPC. I have only one and I
will choose this one. Here, you need to select
the availability zone. Select the availability
zones where you want the Ardis instances
to be deployed. For me, I only have submits
in P one and P South one B. If you have submits in the
other availability zone, you can select AP
south one as well. Next we select the subnets
in these availability zones. As you can see, I
have a public submit and a private submit in each
of the availability zones. Usually, you would
want to create RDS instances in
private submits, select the private subnets. Here, RDS instances are usually created in private subnets and accessed from within VPC's. In production applications,
you do not want your databases to be accessible over the Internet.
Now click Create. As you can see, we have now successfully created a subnet
group for our database. Let us now create
the RDS database. Click on databases. As of now, I have no RDS DBs in my account. Before we move on to the
Create Database button here, if you see next to this button, we have a restore
from S three button. When we take up backups
of our RDS DBs, they are restored
in AWS S three. By clicking on this button Here, we can create a new instance from one of our
backups in S three. Moving back to
creating the database, click on Create Database. The first option you can see is Standard Create or As Create. If you choose the
As Create option, AWS will use some default values for creating your DB.
Let us have a look. I have selected As Create. Now the only options I have are the database types and
some other options. Here, if we scroll down and
open the drop down view, default settings for As Create, we can see that AWS has hard
coded some values for us. Don't want to create
our B like this. Let us move back to
Standard Creation and choose the
parameters ourselves. Click Standard
Create. At the top, choose the My SQL Engine type. Next we have the option to
choose the My SQL version. I will choose the latest version available for me right now, which is my 8.0 0.28 The next option we have is to choose the
template as per our use case. If you choose production here, AWS will choose some
default options for us. For simplicity, like when I have a production template
selected and I scroll down to
availability and durability, You can see the default
option selected for me is to create a
standby instance. Now when I change the template to Dave
and scroll down again, you will notice that the
default value chosen for me now is not to
create a standby DB. Selecting a template will simply set some default values for us. Let us stick to the free tire and select the free
tire template. The next option we have
is the DB identifier. This is the name of
our DB instance. You should note here that
this is not the name of the database but the name
of the RDS instance. Let me change it to the
cloud advisory, my SQL DB. Next, we have to create the master username and
password for the RDS instance. Changing the username to RDS master and I will
type my password here. We can have AWS autogenerate password
for this username here. For now, let me add
my own password. The next option we have is to select the instance
class for our DB. Since we selected the
free tire template, we cannot change the
instance types here. As you can see, we only have
one option, two point micro. If we select Dave
or Prod templates, we will get all the
instance options here. Next we have to select the storage type for
our RDS instance. The current option we have is the general purpose SSD GP two. We can change this value to I 01 for more complex work loads. Now let us stick to
the GP two SSD type. We can also enable auto
scaling for our RDS storage. If this checkbox is selected, we can add the maximum storage. We want our RDS
instance to scale up to the default value for
this field is 1,000 ZB. If my database storage
gets filled up, AWS will automatically scale the storage to up to 1,000 GBs. The next option is
availability and durability. We saw earlier that we
had this option when we had selected the Dave
and production templates. However, this option is not
supported in free tire. Next, we have the
Next, we have the option to select the connectivity for our RDS instance. We can select the VPC in which we created our subnet group, and then select the subnet group. In the next option, we choose whether we want our RDS instance to be accessible outside the VPC. AWS gives us the option here to not allow public access to our DB. Next, we have to specify the security group for our RDS instance. Let us choose to create a new SG and give it the name RDS MySQL SG. We can see the default port for the MySQL database here. Let us leave this as it is. The next option is database authentication. We have three options here. We will be choosing password authentication. If you want more security and control over RDS access, you can choose the other options here. Let us also see the additional options for RDS by clicking this drop down here. The first option is database name. If you remember, earlier we had the option to add the DB identifier, which was the name of the RDS database instance. Here we can ask AWS to create a database, like users, employees, et cetera, inside the cloud advisory MySQL DB instance. If we leave this value empty, AWS will not create a database for us, and we will have to do that using MySQL commands when we connect to this instance. Let us create a users database for our example.
I am leaving the other values as default. Next we have the automatic backup option. We can choose here how frequently we want to take backups of our DB and even specify the time window for the backup. The next option is enhanced monitoring. This is a popular question in AWS exams. The difference between normal CloudWatch monitoring and enhanced monitoring is that in the case of normal monitoring, RDS gathers the metrics from the hypervisor for a DB instance, whereas enhanced monitoring gathers its metrics from an agent on the instance. If you want OS level metrics, you will need to enable enhanced monitoring. For our use case, we will not select the enhanced monitoring option. Now we can choose the log exports. You can have AWS send logs to CloudWatch Logs by selecting the options here. We have the next option to allow AWS to do minor version upgrades on our DB and select the time windows for these upgrades. Let us leave these values as default. The final option we have is to enable delete protection. This simple option will not allow us to delete our RDS instance. Click Create Database.
Now on the database page, we can see our MySQL database being created. Let me skip the video until this instance has been created. As you can see, my RDS instance is now created. Let us click on the instance and see the connection details. In the Connectivity and security tab, you can see the endpoint and the port of this RDS instance. These values will be needed to access the RDS instance.
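For reference, roughly the same database could also be provisioned from code instead of the console. This is only a sketch using boto3, the AWS SDK for Python; the identifier, subnet group, security group ID, and password below are placeholders, not values from this lab.

import boto3

rds = boto3.client('rds')

# Mirrors the console choices: MySQL 8.0.28, free tier class, gp2 storage
# with storage autoscaling up to 1,000 GB, and an initial "users" database.
rds.create_db_instance(
    DBInstanceIdentifier='thecloudadvisory-mysql-db',     # placeholder name
    Engine='mysql',
    EngineVersion='8.0.28',
    DBInstanceClass='db.t2.micro',
    StorageType='gp2',
    AllocatedStorage=20,
    MaxAllocatedStorage=1000,
    MasterUsername='rdsmaster',
    MasterUserPassword='REPLACE_WITH_A_STRONG_PASSWORD',
    DBName='users',
    DBSubnetGroupName='my-rds-subnet-group',              # placeholder subnet group
    VpcSecurityGroupIds=['sg-0123456789abcdef0'],         # placeholder security group
    PubliclyAccessible=False,
    BackupRetentionPeriod=7,
    DeletionProtection=False,
)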
With this, we come to the end of the session. We looked at all the parameters we have when creating RDS subnet groups and databases, and we created our own RDS database. That's all for this session. I will see you in the next lesson.
52. Amazon Elasticache: Hello students. Welcome back. In this video, we will learn about one of the core AWS services, Amazon ElastiCache. Let us start with the basic terms involved in ElastiCache. Caching is the technique of storing data in a cache. It allows you to reuse data that has been previously retrieved. A future request for that data is served faster instead of retrieving it from the primary location. Now you must be wondering what a cache is. A cache is a reserved storage location that collects temporary data. It could be computer memory or your application itself. A simple example would be storing documents on a computer desktop. More often than not, we keep a copy of a frequently accessed document on the desktop so we do not need to go to the document's location each time. Another good example is a bookmark. We bookmark links that we use more often, so there is no need to search for them each time. Now let's understand what ElastiCache is. ElastiCache is a fully managed in-memory database caching solution from AWS. It supports a variety of customizable and real-time use cases. ElastiCache is suitable for caching, which improves the performance of applications and databases. We put it in front of other databases, maybe in front of RDS or in front of DynamoDB. It can also be used as the main data store for use cases like session stores, gaming leaderboards, streaming, and analytics that don't require durability. ElastiCache is the implementation of the open source database engines known as Redis and Memcached.
Now let's look at some use cases of ElastiCache. First, it accelerates application performance. As I have mentioned, ElastiCache works as an in-memory data store and cache to support the most demanding applications requiring sub-millisecond response times. The next use case is that it reduces back end database load. Apart from speeding up the application, it also helps to reduce the load on the database. Once you cache frequently accessed data in ElastiCache, it will drastically decrease database calls and reduce pressure on your back end database. The next is that it builds low latency data stores. It is an excellent solution when you want to store non-durable datasets in memory and support real-time applications with microsecond latency. Now let's look at how ElastiCache works. You have an application configured and installed on an EC2 instance in your AWS account. This application uses an RDS database to store back end data. The application writes some data to RDS and can read data from the RDS database as well. Now what can you do? Put ElastiCache in between the application and RDS, and store session data or frequently accessed data in ElastiCache. That data then gets loaded into ElastiCache, which means that the next time the application needs to read the data, it gets what's called a cache hit. That means the data is found in the cache, and it's going to be retrieved a lot faster than coming from RDS. That's all for this session.
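As a quick reference before we move on, the read path described above is often implemented as a cache-aside pattern. The sketch below assumes Python with the redis client library and a placeholder fetch_user_from_rds helper; the endpoint and key names are made up for illustration.

import json
import redis

# Placeholder ElastiCache (Redis) endpoint taken from the cluster's details page.
cache = redis.Redis(host='my-cache.xxxxxx.0001.use1.cache.amazonaws.com', port=6379)

def get_user(user_id, fetch_user_from_rds):
    key = f'user:{user_id}'
    cached = cache.get(key)
    if cached is not None:
        # Cache hit: served from memory, much faster than a database query.
        return json.loads(cached)
    # Cache miss: read from RDS, then store the result for the next request.
    user = fetch_user_from_rds(user_id)
    cache.setex(key, 300, json.dumps(user))   # keep it for 5 minutes
    return user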
53. Amazon Redshift: Hello, students. Welcome back. In this video, we will understand what a data warehouse is, why we need it, and AWS's data warehouse service, Redshift. Let's get started. You have likely heard about data warehouses but are not sure exactly what they are. Let me explain. As the name suggests, it is a warehouse of data. It allows you to store large amounts of data from multiple sources, which could be application log files, transactional systems, relational databases, marketing, sales, finance, customer facing applications, external partner systems, and other sources. Basically, a data warehouse is a large collection of business data used to help organizations make decisions. Now you must be thinking, how will this data help make a business decision? The goal of every business is to make better business decisions than their competitors, right? How do you make a better decision? Either it comes from experience, or you can do some mathematics. Agreed? To do mathematics, you need data. When it comes to big enterprises and organizations, they use business intelligence tools. BI tools analyze data and generate reports. Nowadays, most business users depend on reports, dashboards, and analytics tools to extract insights from their data. Data warehouses power these reports, dashboards, and analytics tools by storing data efficiently. The data warehouse is not a new concept and has existed since the '80s and '90s, but it was not widely used due to the huge cost of storing the data on premises. Cloud computing revolutionized this, as storing data in the cloud is easy and cheap. That's where AWS has a service called Amazon Redshift. What is Amazon Redshift? Amazon Redshift is a fast, fully managed, petabyte scale data warehouse offering from AWS. Petabyte scale means it can satisfy the scalability needs of most enterprises. It is not just limited to terabytes, but 1,000 multiples of that. This level of scalability is difficult to achieve with an on-premises implementation. It is a simple and cost effective service to analyze data efficiently using business intelligence tools. It's an SQL based data warehouse, and its primary use case is analytics workloads. Redshift uses EC2 instances, so you must choose an instance type and family. It is pay as you go, based on the instances you have provisioned. Redshift will always keep three copies of your data, and it provides continuous and incremental backups. Anytime you see that data needs to be in a warehouse so you can do analytics on it, Redshift will be the solution. That's all for this lesson.
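For reference, a small Redshift cluster could be provisioned with boto3 along the lines of the sketch below; the identifier, node type, and credentials are placeholders and are not part of this lesson.

import boto3

redshift = boto3.client('redshift')

# A minimal two-node cluster; the node type and sizes are illustrative only.
redshift.create_cluster(
    ClusterIdentifier='demo-warehouse',
    NodeType='dc2.large',
    ClusterType='multi-node',
    NumberOfNodes=2,
    MasterUsername='awsuser',
    MasterUserPassword='REPLACE_WITH_A_STRONG_PASSWORD',
    DBName='analytics',
)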
54. Autoscaling: Hello, students. Welcome back. In this video, you will learn what auto scaling is, what auto scaling is in AWS, and what the benefits of using the auto scaling service are. But before we get started, let's see how scaling works in a traditional data center. Well, scaling there is reactive and manual. In traditional data centers, reactive scaling means that servers are manually brought up and down as per the changes in workload. That means as the workload increases, we add more servers, and when the workload decreases, we remove servers. Let's understand this with an example. Suppose there are application servers which are accessed by the users of the application. As the users of this application increase over time, the load on the servers will also increase. If the number of users keeps rising, there will be a time when your server reaches its full capacity and there is not enough memory and CPU to serve new users. The administrator of the application needs to monitor server capacity. He can monitor it manually or with the help of a tool. When he sees that the application server has reached its full capacity, he will add a new server and configure the application on the new server. Again, this process can take some time, based on the availability of servers or the availability of server engineers to add the server. Sometimes scaling means buying new servers. To do so, you will have to get approval from management, and it can take several months for the servers to arrive. As you can see, this is a time consuming process and is not sustainable. Another issue with this process is that once the load on the server goes down, you will have to remove the extra server, otherwise it will be running idle. Cloud computing solves all these problems, as it is a lot easier to add and remove compute resources depending upon the load on the application.
Now the main question is, how will this happen automatically? Well, it is accomplished through auto scaling. It is a feature of cloud computing that automatically adds and removes compute resources depending on actual usage and demand. This feature is available only in a cloud environment. Auto scaling provides flexibility and elasticity to your compute demands; hence, it is sometimes referred to as elasticity as well. Elasticity means adding or removing compute resources based on usage and demand. If the demand goes up, it will increase the number of resources, and if it goes down, it will remove resources. Auto scaling ensures a seamless increase in resources when the demand spikes and a seamless decrease when the demand drops. Therefore, it ensures consistent application performance at a lower cost. Auto scaling is so efficient that it will only add resources when demand goes up, and it immediately removes them when demand goes down. This way, you do not need to keep running additional resources all the time to match peak-hour demand. Auto scaling is a general term in cloud computing. Now let's understand what auto scaling is in AWS. AWS Auto Scaling is a service that monitors your application and automatically adjusts the capacity to maintain steady, predictable application performance at the lowest cost. The key points are that it monitors your application and, based on the need, it adds or removes capacity to maintain application performance at the lowest possible cost.
How exactly does it monitor your application? AWS Auto Scaling uses an AWS service called CloudWatch to monitor the application demand, and it raises alarms to scale the resources up or down depending upon your application needs. We will cover CloudWatch in detail in an upcoming module. For now, CloudWatch is a monitoring service that monitors application metrics like CPU usage, memory, network, and many more. You can set a threshold value, and the service will trigger an alarm when a metric reaches that threshold. For example, if CPU utilization goes beyond 80%, it will add a server, and it will remove one if CPU utilization goes below 30%. Auto scaling uses CloudWatch to monitor the application and, based on the requirements, it either adds or removes servers to meet the demand. Now let's look at some benefits of AWS auto scaling. With the help of an AWS Auto Scaling group, we can automate adding and removing EC2 instances to meet the actual demand. This is one of the most important achievements of the cloud. As I mentioned earlier, it brings agility to adding and removing resources to meet customer demand. Hence, it removes all the reactive work of traditional data centers. Another significant benefit is cost optimization. This is again a big win in the cloud. You do not need to run extra resources to meet the peak demand. It ensures that you pay only for what is required to serve your customers. You no longer need to pay for unused resources. Last and most important is application performance. Because it helps you add and remove resources to meet the demand, the application always performs better. Your customers will not face any latency or slowness in the application. That's all for this session.
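To make the 80% example above concrete, the sketch below shows how such a CloudWatch alarm might be created with boto3. The group name, alarm name, and scaling policy ARN are placeholders for illustration.

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when the group's average CPU stays above 80% for one 5-minute period.
cloudwatch.put_metric_alarm(
    AlarmName='demo-asg-cpu-above-80',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'demo-asg'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    # The alarm action would point at a scale-out policy ARN (placeholder).
    AlarmActions=['arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example'],
)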
55. EC2 Auto Scaling Group: Hello students. Welcome back. We will continue with auto scaling in this video and understand the elements of an EC2 Auto Scaling group. Let's get started. So far you have understood that auto scaling adds and removes EC2 instances. Now let's understand how this works. You need to create an Auto Scaling group, which will scale Amazon EC2 instances. This is where auto scaling adds and removes EC2 instances. This means that the EC2 Auto Scaling group ensures that you have enough Amazon EC2 instances available to handle the load on your application. It is done with the EC2 Auto Scaling group configuration. There are three configurations: minimum size, desired capacity, and maximum size. Based on these configurations, auto scaling will adjust the number of instances within the minimum and maximum number of EC2 instances. The desired capacity configuration determines the size of an Auto Scaling group. By definition, an Auto Scaling group is a collection of Amazon EC2 instances grouped logically for automatic scaling and management. That's pretty much clear; this is what we have been discussing so far. An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies. It maintains the number of instances by performing periodic health checks on the instances. If any instance becomes unhealthy, the Auto Scaling group will terminate the unhealthy instance and launch another instance to replace it. Another feature is the scaling policy, which I will cover in the next session. Now let's understand the Auto Scaling group size configuration, as it's crucial to understand from the cloud practitioner certification and interview perspective. Minimum size defines the minimum number of instances that should be running in an Auto Scaling group; in this case we have one. The minimum size ensures that you always have at least this number of instances running. The Auto Scaling group will never terminate instances below this number. Desired capacity determines how many EC2 instances you ideally want to run; in this case we have two. Auto scaling will try to maintain two EC2 instances all the time. The desired capacity is resizable between the minimum and maximum size limits. It must be greater than or equal to the minimum size of the group and less than or equal to the maximum size of the group. Maximum size is the maximum number of EC2 instances allowed to run in the Auto Scaling group. Auto Scaling groups will never create more than the maximum number of instances specified. To summarize, an Amazon EC2 Auto Scaling group maintains the number of instances and scales automatically. That's all I wanted to cover in this video.
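As a reference for how these three settings fit together, here is a minimal boto3 sketch that creates a group with minimum size 1, desired capacity 2, and an example maximum size of 4; the group name, launch template, and subnet IDs are placeholders.

import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='demo-asg',                        # placeholder name
    LaunchTemplate={'LaunchTemplateName': 'demo-template',  # covered in lesson 57
                    'Version': '$Latest'},
    MinSize=1,            # never fewer than one instance
    DesiredCapacity=2,    # try to keep two instances running
    MaxSize=4,            # never more than four instances (example value)
    VPCZoneIdentifier='subnet-aaaa1111,subnet-bbbb2222',    # placeholder subnets
)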
56. EC2 Autoscaling Policies: Students, welcome back. So far we have understood that auto scaling needs an Auto Scaling group to maintain and scale EC2 instances, right? This pretty much depends on the Auto Scaling group configuration, like minimum size, maximum size, and desired capacity. We also understood in the last video that the desired capacity should change based on the load and traffic on your application. Now the question comes, how will this number change? How does the number of instances scale up and scale down? Well, scaling starts with an event, or scaling action, which instructs the Auto Scaling group to either launch or terminate Amazon EC2 instances. That's where auto scaling policies come into existence. There are different types of auto scaling policies that are designed for various purposes. Let's understand these. As a note, this is a very important topic from the exam and interview point of view. I recommend paying special attention and making notes. I will also add an additional learning link in the resources section. The first auto scaling policy is manual scaling, and this is the most basic way to scale your resources. Here you change your Auto Scaling group's maximum, minimum, or desired capacity, and Amazon EC2 Auto Scaling manages the process of creating or terminating instances according to these updates. If you know when more traffic is likely to flow to your application, you can manually change the size of an existing Auto Scaling group. You can either update the desired capacity of the Auto Scaling group or update the instances that are attached to the Auto Scaling group. Manual scaling can be useful when you need to maintain a fixed number of instances.
The next is dynamic scaling. In dynamic scaling, we define how to scale the capacity of the Auto Scaling group in response to changing demand. It means you can increase the number of instances during peak demand, also known as scaling out, and decrease the instances during low demand, also known as scaling in. Now let's understand this with an example. Let's say you have a web application that currently runs on two EC2 instances. Your goal is to maintain the CPU utilization of these instances at around 50%, regardless of application load. It means you need to add an additional EC2 instance whenever CPU utilization crosses 50%, but you don't know when it will reach 50%. Don't worry. You can configure your Auto Scaling group to scale dynamically to meet this need by creating a target tracking, step, or simple scaling policy. Now let's understand these dynamic scaling policies. Step and simple scaling allow you to choose the scaling metrics and thresholds for the CloudWatch alarm that initiates scaling. We will learn CloudWatch in the AWS management module. You define how your Auto Scaling group should be scaled when a threshold is reached for a specified number of periods. Let's take the same example. Whenever the average CPU utilization of all EC2 instances goes over 50% for 5 minutes, add one EC2 instance to the Auto Scaling group. Alternatively, whenever the CPU utilization is less than 30% for 10 minutes, remove one EC2 instance from the Auto Scaling group. This is simple or step scaling, because we define the trigger point to take action and how many instances to add or remove. In target tracking scaling policies, you select a scaling metric and set a target value, and that's it. It is a very easy way of defining a dynamic scaling policy. Amazon EC2 Auto Scaling will create and manage the CloudWatch alarms that trigger the scaling policy and calculate the scaling adjustment based on the metric and target value. The scaling policy adds or removes EC2 instances to keep the metric at or close to the specified target value. For example, you can configure a target tracking scaling policy to keep your Auto Scaling group's average aggregate CPU utilization at 50%, and then the ASG will scale automatically to ensure that it stays around that target of 50%.
helps to set up your scaling schedule according to predictable load changes. For example, let's
say that every week the traffic to your
wave application starts to increase on Wednesday, remains high on Thursday, and starts to
decrease on Friday. You can configure a
schedule for Amazon EC two, auto scaling to increase
capacity on Wednesday, decrease capacity on Friday. This is very useful when we know that changes are going
to happen ahead of time. We anticipate scaling based
on known users pattern. The next is predictive scaling. It helps to scale
faster by launching capacity in advance of
the forecasted load. The difference is that
dynamic scaling is reactive and it scales
when the demand arises. On the other hand, predictive
scaling scales the capacity based on both real time
metrics and historical data. For example, consider
an application that has high usage during business
hours and low usage. Overnight predictive
scaling can add capacity before the first inflow of traffic at the start
of each business day. It helps your application maintain high availability
and performance when going from a lower
utilization period to a higher utilization period. You don't have to wait for dynamic scaling to react
to changing traffic. You also don't have to spend time reviewing your
applications load patterns and trying to schedule the right amount of capacity
using scheduled scaling. So predictive
scaling uses machine learning to predict future
traffic ahead of time. It will look at the
past traffic patterns and forecast what will happen
to traffic in the future. It will automatically provide
the correct number of EC two instances in advance to match that
predicted period. It is beneficial when
you have recurring on and off workload patterns or applications that
take a long time to initialize that cause
noticeable latency. Now let's go back and understand how the auto
scaling policy completes. The auto scaling process. As we know,
autoscaling depends on various factors like events,
metrics, and thresholds. And based on our requirement, we configured an
auto scaling policy. This policy instructs
auto scaling to add and remove
C two instances. That's it for this session.
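For reference, here is a rough boto3 sketch of two of the policy types discussed above: a target tracking policy that keeps average CPU near 50%, and scheduled actions for the Wednesday/Friday example. Group names, times, and capacities are placeholders.

import boto3

autoscaling = boto3.client('autoscaling')

# Target tracking: keep the group's average CPU utilization around 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName='demo-asg',
    PolicyName='keep-cpu-near-50',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ASGAverageCPUUtilization'},
        'TargetValue': 50.0,
    },
)

# Scheduled scaling: raise capacity on Wednesday morning, lower it on Friday evening.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='demo-asg',
    ScheduledActionName='scale-up-wednesday',
    Recurrence='0 8 * * 3',    # cron: 08:00 UTC every Wednesday
    DesiredCapacity=4,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='demo-asg',
    ScheduledActionName='scale-down-friday',
    Recurrence='0 20 * * 5',   # cron: 20:00 UTC every Friday
    DesiredCapacity=2,
)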
57. Launch Configuration & Launch Template: Hello, students. Welcome back. So far we have understood that auto scaling increases and decreases the number of EC2 instances to meet the demand. As you know, when we launch an EC2 instance, we need to provide some configuration. We need to predefine this configuration for auto scaling as well. This is what we do with the launch configuration and the launch template. Let's go ahead and understand this in more detail. Both the launch configuration and the launch template define the characteristics of the EC2 instances that we want to launch when the demand arises. AWS introduced the launch configuration in 2014, and since then we have been using launch configurations with EC2 Auto Scaling groups. On the other hand, the launch template was introduced more recently and does a similar task to the launch configuration. Since the launch template is the newer service, it has some extra features and capabilities to make the job a little easier for us. Let's get started with the launch configuration, then we will move forward to the launch template. A launch configuration is an EC2 instance configuration template where we define the configuration of the EC2 instances that we want auto scaling to launch when the demand comes. In other words, a launch configuration is the configuration template of EC2 instances that auto scaling will use to launch new EC2 instances. When we create an EC2 instance, either from the AWS Management Console or by using the AWS CLI, we provide different EC2 instance configurations, like the AMI ID, the instance type and size, the configuration of the storage the instance uses, and the key pair which is used to connect to that instance. We also define the networking configuration and security groups. These decide the IP address and ports from which the instance will be accessible. We can define user data, which is a set of commands or scripts that run when an instance is launched. We can also define the IAM roles which will be attached to the instance and give it permission to access services. Essentially, this includes everything that we usually define while launching an instance. We can define and save all these settings in the launch configuration, which will act as a template for auto scaling while launching new instances. A launch configuration is not editable. You define it once and that configuration is locked. If we wish to adjust the configuration of a launch configuration, we need to create a new one and use that new launch configuration. There are some challenges with the launch configuration that are improved in the launch template.
As we have seen, the launch template is similar to the launch configuration. It provides the EC2 configuration which is used by the EC2 Auto Scaling group while launching EC2 instances. Moreover, the launch template can also be used to launch EC2 instances, or a fleet of EC2 instances, directly from the AWS Management Console or the AWS CLI. Since the launch template is the newer service, it includes several additional features on top of the configuration features; let's have a look at them one by one. It lets us create multiple versions of the launch template with different configurations. We can specify multiple instance types, even instance types based on attributes like memory, CPU, storage, et cetera. It has the capability to use both on demand and spot instances, which further leads to huge savings. It also covers T2/T3 Unlimited features. Placement groups, capacity reservations, elastic graphics, and dedicated host features are available in the launch template, but these features were missing in the launch configuration. It further allows EBS volume tagging and elastic interfaces. AWS recommends using the launch template instead of the launch configuration, as it offers more architectural benefits. Now let's understand where the launch configuration and launch template fit into the auto scaling process. If you remember from the previous session, auto scaling depends on various factors like events, metrics, and thresholds. Based on our requirements, we configure an auto scaling policy. This policy instructs auto scaling to add and remove EC2 instances. Now auto scaling needs the instance launch configuration to launch the EC2 instance, right? To launch EC2 instances, we use the launch configuration or launch template to provide EC2 configurations to the Auto Scaling group. Every time auto scaling launches an EC2 instance, it refers to the launch configuration or launch template which we configured with auto scaling. That's all for this lesson about launch configuration and launch template.
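For reference, here is a minimal boto3 sketch of creating a launch template like the one described above; the AMI ID, security group, and user data script are placeholders for illustration.

import base64
import boto3

ec2 = boto3.client('ec2')

# Placeholder user data: install and start a web server on boot.
user_data = """#!/bin/bash
yum update -y
yum install httpd -y
systemctl start httpd
"""

ec2.create_launch_template(
    LaunchTemplateName='demo-template',
    LaunchTemplateData={
        'ImageId': 'ami-0123456789abcdef0',            # placeholder AMI ID
        'InstanceType': 't2.micro',
        'SecurityGroupIds': ['sg-0123456789abcdef0'],  # placeholder security group
        'UserData': base64.b64encode(user_data.encode()).decode(),
    },
)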
58. Load balancer: Hello, students. Welcome back. In the last video, we understood how EC2 auto scaling automatically scales your EC2 instances up and down based on demand. In this lesson, we will learn the general concept of load balancing, as this will be the foundation for our next video, Elastic Load Balancer. Let's get started. Let's assume you enter a five star hotel for check-in. There are ten receptionists checking bookings, checking guests in and checking them out. Most customers are standing in a few lines, which results in an uneven distribution of customers per line, while other receptionists are standing around doing nothing but waiting for customers. Customers are walking in and they have no idea where to go. But if you think about the situation, it would be helpful if we had a host at the entrance. Any customer who walks in needs to know which line has fewer people and which receptionist is efficient. Now, the customers will be greeted by a host at the door. He will direct them to the appropriate line in the hotel for check-in or check-out. The host keeps an eye on the receptionists and counts the number of people in line each receptionist is serving. Now, any new customer will be directed to the receptionist with the shortest line, which is the least backed up. Consequently, there will be even waiting lines among receptionists, allowing customers to be served as efficiently as possible. Now think about an AWS Auto Scaling group. You have multiple EC2 instances running the same application. When a request comes in, how does the request know which EC2 instance will handle it? How can you ensure an even distribution of workload across EC2 instances? You need a way to route requests equally to different instances to process these requests. That's where a load balancer comes into the picture. It is a networking solution that distributes traffic across multiple servers to improve application performance. Again, this is a piece of software you can install, manage, update, and scale. A load balancer can be installed in front of two or more EC2 instances, but the problem is scalability. Let's understand this. You have one load balancer in front of two EC2 instances, and your number of EC2 instances will increase based on load. Especially if you have auto scaling, there is no limit on the number of EC2 instances; it can scale from nothing to 100 or even more EC2 instances, right? Now, how will one load balancer handle all the requests? It also needs to be scaled, right? That's where AWS provides another service called Elastic Load Balancer.
59. What is an Elastic Load Balancer (ELB)?: Hello, students. Welcome back. In this video, we will understand another AWS service, which is the Elastic Load Balancer. This service works very closely with auto scaling to distribute traffic equally to EC2 instances. Let's get started. What is the Elastic Load Balancer? This is an AWS service that is used to distribute the load. It's designed to address the undifferentiated heavy lifting of load balancing. In the previous lesson, you learned about a load balancer that is responsible for the distribution of incoming traffic between available EC2 instances. ELB scales automatically as traffic to your servers changes; if the traffic grows, ELB enables the load balancer to handle the additional load by distributing it. Whenever your EC2 Auto Scaling group scales up and adds a new EC2 instance, the auto scaling service sends a notification to the Elastic Load Balancer that a new instance is ready to handle the traffic. Similarly, when the EC2 Auto Scaling group scales in, which means it initiates the process to terminate an instance, it sends a notification to the Elastic Load Balancer. Now it's ELB's job to stop sending traffic to the selected instance. After that, it waits for the existing requests to complete. Once all the existing requests are fulfilled, auto scaling terminates the instance without disturbing the existing customers. The Elastic Load Balancer automatically routes the incoming application traffic across different instances. It acts as the point of contact for the incoming traffic. Any traffic coming to your application first meets the load balancer and is then sent to an available instance. It also monitors the health of the running EC2 instances, as it sends the traffic requests to healthy instances only. All right, as of now, you have learned how the Elastic Load Balancer routes external traffic to your EC2 instances. Not only that, but it is also used for routing internal traffic. Let's understand this with the example of email. Suppose a lot of users are using Gmail. They open the inbox, and that page is served by a web server. A user composes a message and clicks on the send button. If we talk about what's happening behind the scenes, an app server is processing your emails, including storing them in a database, sending them to the respective users, et cetera. If more and more users start using Gmail, the number of web servers will also increase. Hence, EC2 auto scaling will have to increase the number of app servers as well. Every web server would then need to know that a new app server is available to accept the traffic. Now imagine you have potentially hundreds of server instances on both tiers. We can solve the app server traffic chaos with an internal ELB as well. The ELB will direct traffic to the app server which has the least outstanding requests. The web server does not know and doesn't care how many app server instances are running; only the ELB handles directing the traffic to available app servers. This is known as a decoupled architecture. In this architecture, all the computing components remain completely autonomous and unaware of each other, doing their instructed tasks independently. The Elastic Load Balancer is the service we can use in front of a web server to distribute the incoming traffic; this is also called a public load balancer. We can also use an Elastic Load Balancer behind the web servers to distribute traffic equally to the app servers. As this one works internally, it is called an internal, or private, load balancer. Students, that's all for this video.
60. AWS Elastic Load Balancer (ELB): Hello, students. Welcome back. In this video, we are going to understand the different types of ELB. Let's get started. AWS Elastic Load Balancing supports four load balancers: Application Load Balancers, Network Load Balancers, Classic Load Balancers, and Gateway Load Balancers. Let's understand them one by one. These load balancers work at different layers of the OSI model. OSI stands for the Open System Interconnection reference model. It describes how information is transferred from software on one computer to another by using a network. This is achieved by dividing the data communication into several layers, giving control over sending data from one layer to another. As its name suggests, the Application Load Balancer works at the seventh layer of the OSI model, the application layer. The Network Load Balancer works at layer four, the transport layer. The Classic Load Balancer is the oldest and first Elastic Load Balancer service. It is designed to work at both the application and network layers. The Gateway Load Balancer is the newest service in the Elastic Load Balancer family, and it works at layer three. Next is the protocol listener, which means the different protocols these load balancers can listen on. The Application Load Balancer supports only the HTTP and HTTPS protocols. The Network Load Balancer handles the TCP and UDP protocols. The Gateway Load Balancer listens on IP. As we mentioned earlier, a Classic Load Balancer can work at both the application and network layers; therefore, it supports both application and network layer protocols like TCP, SSL/TLS, HTTP, and HTTPS. Now let's look at different use cases for these load balancers. The most common use case for the Application Load Balancer is web apps. A web app built on the microservices framework can use ALB as the load balancer before incoming traffic reaches your EC2 instances or the containers hosted for a service. The Network Load Balancer covers the remaining scenarios that ALB does not; for example, apps that depend on a protocol apart from HTTP, time sensitive apps, real time data flows, and apps dependent on streaming audio, video, currency codes, et cetera, will benefit from using NLB. The Gateway Load Balancer works at layer three. It makes it easy and cost effective to deploy, scale, and manage the availability of third party virtual appliances. The Classic Load Balancer can be used with almost all use cases of the Application and Network Load Balancers, but as the Application and Network Load Balancers are newer and are designed for a specific purpose, you should use them to get the most out of them. AWS is retiring the Classic Load Balancer on August 15, 2022, and it is not recommended to use. The next is the target type, which means where the load balancer can direct the traffic. It could be EC2 instances, fixed IP addresses, or AWS Lambda functions, amongst others. You can simply relate this with the above use cases. The Application Load Balancer supports IP, EC2 instance, and Lambda targets. The Network Load Balancer supports IP and EC2 instance targets, and it can also send traffic to an ALB. The Gateway Load Balancer supports IP and EC2 instance targets. The Classic Load Balancer also supports IP and EC2 instances. We could go into detail on each load balancer, but that is not needed for this course. You only need to remember these four types of load balancers, their protocols, and their use cases. That's all for this lesson.
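For reference, all of these except the Classic Load Balancer are created through the same Elastic Load Balancing v2 API, and the Type parameter selects which kind you get. Below is a minimal boto3 sketch with placeholder names, subnets, and security group.

import boto3

elbv2 = boto3.client('elbv2')

# Type can be 'application', 'network', or 'gateway'.
elbv2.create_load_balancer(
    Name='demo-alb',
    Type='application',
    Scheme='internet-facing',
    Subnets=['subnet-aaaa1111', 'subnet-bbbb2222'],   # placeholder subnets
    SecurityGroups=['sg-0123456789abcdef0'],          # placeholder security group
)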
61. Using ELB and Auto Scaling Together: Hello, students. Welcome back. In the previous lab, in the EC2 module, we deployed the httpd service to an EC2 instance which we created from the AWS console. In this lab, we will create EC2 instances using an AWS Auto Scaling group. We will see how to configure the AWS Auto Scaling group so that it automatically deploys the httpd service to all instances in the Auto Scaling group, and we will access the resulting httpd application via an Application Load Balancer. Let's go to the AWS console and type EC2 in the search bar. Click on EC2 to go directly to the EC2 dashboard. From the left dashboard, find and click on Launch Templates. Launch templates specify the configuration of EC2 instances. Let's have a look. Click on Create Launch Template. Add the launch template name, packed-up-LT, and let us add "Launch template for auto scaling demo" as the description. We can add tags by clicking this drop down button here. If you want to copy, modify, or create a new template from an existing launch template, you can do so by clicking on the next drop down here. The next option we have is to choose the AMI. Let us type Amazon Linux 2 in the search bar and select the first AMI. As you can see, this AMI is free tier eligible, which means we won't have to pay anything for it. The next option is to choose the instance type. We can see multiple instance types if you scroll down here. Since we don't have any special instance requirements and we also want to stay in the free tier, let's choose the t2.micro instance. Next, we have to choose the key pair. We need it when we SSH into the instance, and normally you must specify a key pair; however, since we will not SSH into the instance in this lab, let us leave this as default. If you want to try SSH and run commands yourself, you can choose an existing key pair like this or create a new key pair by clicking on this link. Next, we have to specify the network settings for our EC2 instances. We do not need to select subnets here. Let's leave this as default for now.
Next, we have to configure the security group. Please note that this security group will ultimately be attached to the EC2 instances that we create from this launch template. We can create a new security group or select an existing security group. I have the security group from the previous lab and I could choose that, but let me demonstrate the process of creating a security group once again. Click on Create Security Group, and add the security group name and description. Choose the correct VPC. Next we will add rules. Click on Add Security Group Rule, and in the rules, select HTTP in the Type column and Anywhere in the Source. I have not added the SSH rule, since we will not be doing that in this lab. If you plan to try SSH, you need to select a key pair and add the SSH rule. We have now configured the security group for our launch template. Next, we can add the EBS volumes configuration here. Let us skip this for this lab. We can add tags to the launch template by clicking here, but we have no such requirements. Let's skip this as well. Click on Advanced Options. We do not need to change any options here. Move on and scroll to the bottom of the page. You will see the user data option here. Here we can enter the commands that we want to run when our instance starts. In the previous lab, we installed httpd manually. In this lab, we will add our commands in this text box here, and AWS will automatically run them when our instance starts up. Add the following commands. yum update -y: this is to update the default packages on our instance. yum install httpd -y: this command will install httpd on our instance. systemctl start httpd: this command will start the httpd service on the EC2 instances. The last command on the screen will show us a custom HTML page. Click on Create launch template. We have successfully created a launch template. Click to view Launch Templates. You can see your launch template in the list now.
Let us now move on to the Auto Scaling group dashboard. Click Auto Scaling Groups from the left dashboard and click on Create Auto Scaling group. Enter the name packed-up-ASG and choose the launch template we just created. Note that you can also go to the Create launch template page from this link here. Click Next. Choose the packed-up VPC and select public subnet one and public subnet two, or select any two default subnets from the list if you don't have your own VPC in your account. Then click Next. We can add a load balancer to our Auto Scaling group directly from this page here, but we will do that later from the load balancer dashboard. Let's skip this for now. Click Next. We see three important and useful options. Desired capacity: this value specifies the desired number of instances that you want to have. Let us change this to three. Next we have the minimum capacity. This value specifies the minimum number of instances you want the Auto Scaling group to have. If you have a minimum capacity of two, AWS will ensure you always have a minimum of two instances. Let us change this value to two. Next we have the max capacity. This value specifies the maximum number of instances that you want the Auto Scaling group to have when it scales up. Let us change this value to four, so we have a max capacity of four instances. We can add a target tracking scaling policy by selecting the option here, but for our example, we will add simple scaling policies to our Auto Scaling group later. Click Next. We can add notifications on this page. For example, we can create an SNS topic that sends us an email every time the Auto Scaling group scales up or down. We have no such requirements. Click Next. We can add tags here; let us skip this. Click Next, review your Auto Scaling group, and then click Create Auto Scaling group. We can see our Auto Scaling group listed with the desired capacity of three, minimum capacity of two, and a max capacity of four. The status shows updating capacity: since we have a desired capacity of three and currently this Auto Scaling group has no instances, it will try to create three EC2 instances for us. You can view this by selecting your Auto Scaling group and clicking the Activity tab. You will see three successful instance launches. In the Cause column, you can see that the Auto Scaling group service is trying to increase capacity from 0 to 3.
Before we go to the EC2 dashboard and see our instances, let us add scaling policies to the Auto Scaling group. Click on the Automatic scaling tab, next to the Activity tab. You will see we have no policies added to our Auto Scaling group. Let us create a dynamic scaling policy. Choose simple scaling as the policy type and add the name, add one instance when CPU utilization is greater than 80. The Auto Scaling group needs a CloudWatch alarm to track the CPU utilization. Let us create the alarm by clicking here. On the CloudWatch alarm page, click on Select Metric and click EC2. For EC2 metrics, we need the overall Auto Scaling group metric and not the per instance metric for this alarm. Select By Auto Scaling Group and find CPU utilization under the metric name. Select the CPU utilization row and click Select Metric. You can see the CPU utilization graph on your left. We want our Auto Scaling group to scale up when the CPU utilization is over 80%. Scroll down and, in the condition box, select Greater and enter the value 80. This means that our alarm will trigger when the CPU utilization value is more than 80. We don't need to change the other parameters for the alarm. Click Next. We can add notifications and actions on this page, but let us skip this for now; we will add an alarm action on the Auto Scaling group page. Click Next, enter the name and description, and create the alarm. Back on the Auto Scaling group page, refresh the alarm option and find your scale up alarm in the list. Select the alarm and, in the action, choose to add one capacity unit. This will add one instance to our ASG every time this alarm triggers. We have created our simple scaling policy to scale up the Auto Scaling group. Let us create another policy to scale down. Click Create Dynamic Scaling Policy. Select simple scaling and add the name, remove one instance when CPU utilization is lower than 30. We will also have to create a new alarm. Click on the Create Alarm link, click Select Metric, choose EC2 metrics by Auto Scaling group, and select the CPU utilization metric. This time we want our alarm to trigger when CPU utilization is below 30%. In the alarm condition box, select Lower and add the value 30. Click Next, skip the notifications, add a name and description, and create the alarm. On the Auto Scaling group page, refresh, select the alarm, choose to remove one capacity unit, and create the policy.
Let us see these policies in action. We just added a policy to scale down when CPU utilization is below 30%. Since we do not have any workloads on our EC2 instances, the average CPU utilization will be below 30%, triggering this alarm. If you look at the Activity tab of your Auto Scaling group, you will soon notice that AWS decreased the desired capacity from 3 to 2 and is now terminating an instance. In the Cause section, you will see that this is because of the policy we just created. Even after our ASG has scaled down, if you look at the CloudWatch tab, the scale down alarm is still in the triggered state. This is because we still have no processes running on our EC2 instances and the CPU utilization is low. Ideally, AWS would remove another instance from our ASG, but since we specified the minimum capacity as two, AWS will not scale down another instance. AWS changes your desired capacity as per the scaling policies, but it never changes the minimum and maximum capacity values. If you edit the Auto Scaling group and change the minimum capacity value to one, very soon you will see another instance terminated. Let us not do that and keep our two instances running. Go to the Instances page from the left dashboard. We can see we have two instances in the running state and one instance in the terminated state. Select any of the running instances, copy its public IP address, and paste the IP address into the browser. You will find that even though we did not manually run any commands on the instance, it has httpd installed and running.
Let us now create an Application Load Balancer and see this web page from the load balancer URL. Click on Load Balancers from the left dashboard; you can see that we have no load balancers in our account as of now. Click on Create Load Balancer, select Application Load Balancer, and enter the name, the Cloud Advisory ALB. The next option is to choose if we want an internet facing or internal load balancer. Internal load balancers are only accessible locally within the VPC. Since we want to see our website via the ALB from anywhere in the world, we need to select the internet facing option. In the network box, choose the VPC and the public subnets. In the security group section, let us create a new security group. Add the name and description, select the same VPC as the EC2 instances, and add an HTTP rule to allow HTTP connections from anywhere. Create the security group. Back on the load balancer page, refresh the security group option and choose the one you just created. ALBs require something called a target group. A target group is a group of EC2 instances or other resources to which the ALB forwards its requests. Let us create a new target group by clicking this link here. Enter the name of the target group and leave the other values as default. Click Next, choose your instances, and add them as pending below. Click Create Target Group. Back on the ALB creation page, refresh the target group option and choose the one you created just now. Click on Create the Load Balancer. We have successfully created our ALB. Click on View Load Balancers. We can now see our load balancer in the console. Click on the load balancer, copy the DNS value, and paste it into your browser. You can see the web page and the IP address of the instance which is serving our request. Since the load balancer forwards our requests to the EC2 instances in turn, you will see that the IP address changes every time we reload the page. This is how you deploy web applications in the real world. We have successfully deployed a highly available and scalable architecture using an Auto Scaling group and an ALB. Let us now go ahead and delete our infrastructure. For that, go to the Auto Scaling group console and delete the Auto Scaling group by clicking on Actions. Delete the load balancer on the load balancer page, delete the target group on the target group page, and on the security group page, delete all the security groups except the default security group for your VPCs. That's all for this session. See you in the next lesson.
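For reference, the load balancer wiring from this lab could also be scripted. Below is a rough boto3 sketch; the names, VPC, subnets, and security group IDs are placeholders, and attaching the target group to the Auto Scaling group takes the place of registering instances by hand.

import boto3

elbv2 = boto3.client('elbv2')
autoscaling = boto3.client('autoscaling')

# Target group the ALB will forward requests to.
tg = elbv2.create_target_group(
    Name='demo-targets',
    Protocol='HTTP',
    Port=80,
    VpcId='vpc-0123456789abcdef0',     # placeholder VPC
    TargetType='instance',
)
tg_arn = tg['TargetGroups'][0]['TargetGroupArn']

# Internet-facing Application Load Balancer in two public subnets.
lb = elbv2.create_load_balancer(
    Name='demo-alb',
    Type='application',
    Scheme='internet-facing',
    Subnets=['subnet-aaaa1111', 'subnet-bbbb2222'],   # placeholder subnets
    SecurityGroups=['sg-0123456789abcdef0'],          # placeholder security group
)
lb_arn = lb['LoadBalancers'][0]['LoadBalancerArn']

# Listener that forwards HTTP traffic on port 80 to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol='HTTP',
    Port=80,
    DefaultActions=[{'Type': 'forward', 'TargetGroupArn': tg_arn}],
)

# Attach the target group to the Auto Scaling group so its instances are
# registered and deregistered automatically as the group scales.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName='demo-asg',
    TargetGroupARNs=[tg_arn],
)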