Transcripts
1. Introduction: Hi, everyone. My name
is Marika Bukowski. I've been working in IT for many years and recently I have focused on roles as
a DevOps engineer and Cloud administrator. I've been using AWS Cloud
for a very long time now, and I decided to share my
knowledge with you today. Amazon Web Services is the leading cloud
platform globally, and it's been like that for
quite a while, to be honest, so I'm not surprised you
want to learn more about it. This training is
designed to take you from zero or very
little AWS knowledge to a level where you can
start feeling confident when working with AWS and creating AWS resources
in that Cloud. We will waste no time here; we will learn all of that as fast as possible. You will see many hands-on examples that you can follow yourself, learning how things are
built in the cloud. You will learn
about AWS regions, about subnets, virtual
private clouds. You will create and
configure virtual servers. You will also see how to
configure resources in AWS in alternative ways, like using the AWS CLI or Terraform, for example, which is an infrastructure-as-code tool. And if you are
completely new and want to create your own AWS account, I would suggest you watch the first four or five videos and complete them in the exact sequence presented in this training, until you have an AWS budget and an IAM user created. I probably don't have to tell you that, but knowing AWS Cloud is a very valuable skill you can add to your CV when applying for IT positions, for example. The best part is that you need no previous experience; no IT background or any other knowledge is needed. You just need a PC or laptop
and a bit of spare time. If you are interested in DevOps and Cloud technologies, then please remember that you can join our community on the automationavenue.com platform, where you can learn all about Terraform, AWS Cloud, Python, and many more Cloud and DevOps related topics. That's enough of me talking. You probably can't
wait to get started. So I will see you in
the first lesson. Thank you.
2. 1 AWS create AWS account and log on as root: This is a fresh operating system with absolutely nothing configured, and we'll be creating a new AWS account here. So let me open the browser, and we'll navigate to something like "AWS Free Tier". We'll talk later about what the Free Tier actually is, but for now, let me just explain it briefly. It's simply an AWS offering that lets you use loads and loads of resources for free. When you create a new account, you can use those services for free for one year. But we'll talk about it more when we actually create resources. Now we're creating the account itself. So as you can see here, there is "Create an AWS account". So let's click that, because that's what
we're interested in. Clicking this takes me to the webpage with the "Create free account" button. So let's click on that, and maybe let's accept those cookies. Now what I need is
some email address. It can be any email
address you use or own; you just have to be sure you can actually access that email during the signup process. I will just use my gmail.com address. This is the important bit. The one below is just an alias for your account; it can be anything you want. I called it Automation Avenue, but it's not that important, really. The email is what's important here. I click Verify email address, and now I will have to just
wait for the email to arrive. Oh, as you could probably hear, the email has just arrived, and I've received a
verification code, so I have to type it
here and click Verify. Okay, now we have to create the password for the root user. We'll talk a little bit later about who the root user actually is, but for now let's just create a password. And as you can see, it has to include an uppercase letter, a lowercase letter, a number, and a non-alphanumeric character. Just make sure you've got all of that. Even as I start typing, let's say a capital letter, you can see the first box is ticked; then a lowercase letter, now a digit, and now a non-alphanumeric character. As you can see, all those boxes are ticked; my password is good enough for them. Now I have to repeat it. And
that's it. We can continue. Just wanted to note
here on the left, you have a link if you want to explore those Free Tier products. So this is the link you can use. But never mind; at this stage, we'll just concentrate on the signup process. So I'll just click Continue. I will save the password. And now it asks me for
contact information. It asks me if it's a business account or a personal account. I will treat it as a personal account. If it's a business one, it will ask you for an organization name as well. Let's click Personal, and fill in my name. You have to give them your name, address, and phone number. Then you agree that you've read the customer agreement, so we can go further to step two. Now it asks you for
billing information. You have to provide your credit or debit card number because, as I said, there are some free services you can use, but by signing up you get access to all services: the ones that are free and the ones that you have to pay for. That's why you have to put in the card details. In later videos, I will show you how to create a budget. That budget will inform you if you actually start using some services that you have to pay for. For the time being, we have to put that information in to be able to progress. That's the card, and we can go further. As I used my bank card, I have to confirm the payment in my banking application; I click Confirm, and that authorizes this step. Now it asks me for a telephone number to confirm my identity. I will use the same one I used on the previous page. Now, the captcha; let's see if I have it right. If it's completely messy, you
can also always refresh it. I will generate a new one because some of
them are generated in such a weird way that you
can barely see what's there. Okay, now let's wait for the verification code
sent to my phone number. And as you could hear,
it's just arrived. And we can continue
to step number four. Here, it asks me again what type of support plan I really need. So we'll go, obviously, with the free one; we don't want to pay $29 or $100 a month. The free one is more than we need, really, and that's it. We click Complete sign up. As you can see, it was a quick and easy process. AWS says "We're activating your account", but I believe it's just been activated. So yes, I received the email saying that I can start using my account. So I can click either this button or that button; it doesn't really matter. Let's click this one: Go to AWS Management Console. And now it asks me
if I want to sign in as the root user or an IAM user. We really only have the root user at this stage, but I will show you later, in the next videos, how to create an IAM user as well. So for the time being, as there is no choice, we only have the root user. We'll put in the email address we used for the signup process; for me, it was the gmail.com one. We'll click Next,
and the password, the one that you just created
as well for this user. And sign in. Not bad. We are already in our AWS Console. That's what it's called, the AWS Console, where we can do everything regarding this account and our resources. One thing I want to show you is here, where it says Automation Avenue. Remember, that's the alias we entered during the signup process. So instead of some long string of digits, Automation Avenue will be shown here. For you, it can be anything you want to name this account. The next thing is that we can see Stockholm. Well, that was chosen for us; I don't know why, but that's what we call an AWS region. This is where you want to create your resources, and as you can see, you can create them all over the world. Let me just change it to London, because that's the closest one for me. But we'll talk about regions later on as well; don't worry about it now. What I want you to do now is activate multi-factor authentication for the root user. That's very important. This account has to be very secure.
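As an aside, the four character-class rules the signup form enforced on the password can be mimicked locally. This is just an illustrative sketch, not AWS's actual validator, and the sample password is a throwaway example:

```shell
# Check a candidate password for the four classes the signup form requires.
# Never reuse a password that has been published anywhere.
pw='Example#2024'
ok=true
printf '%s' "$pw" | grep -q '[A-Z]'        || ok=false   # uppercase letter
printf '%s' "$pw" | grep -q '[a-z]'        || ok=false   # lowercase letter
printf '%s' "$pw" | grep -q '[0-9]'        || ok=false   # digit
printf '%s' "$pw" | grep -q '[^A-Za-z0-9]' || ok=false   # non-alphanumeric
echo "all four character classes present: $ok"
```

If any class is missing, the flag flips to false, just like the unticked boxes on the signup page.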
3. 2 AWS add MFA for root user: What you can do here in this search field is type IAM. We'll click that, and you can see that even Amazon itself shows a security recommendation: add MFA for the root user. To me, almost every single user you create should have MFA enabled, but the root one is definitely the one you need to set up MFA authentication for. We can add MFA here by clicking this button. And now we have a choice of what we want to use as a second, separate authentication method. The easiest way, I think, is an authenticator app, which is chosen by default anyway. You can name the device; for me, it will be my Samsung phone, so I can use that name, but it doesn't really matter. It's just information for you about where the authentication codes will arrive. We choose authenticator app and click Next. Now, as you can see, you'll have to install one of the applications on your phone: either Google Authenticator, Duo Mobile, or some other app; the list of all applications you can see here. But basically, you need just one of them. Google Authenticator is a really good one; I can recommend that one. Then, once you've installed that application, you have to click Show QR code and just scan it with your phone. Once you scan it, you will receive an MFA code, which you will have to enter here twice; I mean, a first code and a second code. This way, you will add that device as an authorized one to receive the MFA codes. I will not do that here because this is just a temporary account that I will remove later on, but you definitely should do it. In the next video, I will show you how to create that budget before you actually start creating any resources in AWS Cloud. But you've got your own AWS account now, congratulations.
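For reference, the same MFA flow can be done for an IAM user with the AWS CLI (root MFA is normally set up in the console, as in the video). In this sketch the commands are only echoed, not executed, because they need real credentials; the device name, account ID, and codes are placeholders:

```shell
# Echo (don't run) the CLI calls that mirror the console MFA flow for an IAM user:
# first register a virtual MFA device, then bind it to the user with two
# consecutive codes from the authenticator app.
mfa_cmds='aws iam create-virtual-mfa-device --virtual-mfa-device-name my-mfa \
  --outfile qr.png --bootstrap-method QRCodePNG
aws iam enable-mfa-device --user-name administrator \
  --serial-number arn:aws:iam::111122223333:mfa/my-mfa \
  --authentication-code1 123456 --authentication-code2 654321'
printf '%s\n' "$mfa_cmds"
```

The two authentication codes play the same role as the "first code and second code" entered in the console.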
4. 3 AWS Budget: In the previous video, we created our new AWS account, we logged on as the root user, and we added MFA authentication for that root user. The next most important thing is that we should know how much money AWS services will cost us, if anything. We can either try not to spend any money there by using only Free Tier eligible services, or maybe we are okay to spend some money but don't want to exceed a certain threshold, like maybe $10 a month, let's say. But one way or another, we definitely want to be in control of our spending, and that's exactly what the AWS Budgets service is for. Being logged on to our console, still as the root user, we can type here in the services search.
in the services. We can search for budget. And we can see here budgets as of bidding and
cost management feature, or we can simply click that bidding and cost
management service, which budget is part of. So maybe let's click the top
one. Doesn't really matter. Here, you usually
see some summary how much last month it cost you, what's prediction
for this month. But because this is new account, there is no data available yet, but doesn't really matter
because what we need is budgets here in the
left down corner. Let's click that budget. And as you can
see, we can create a new budget here by
clicking this button. Now we have a choice of
using a template or we can customize it using advanced settings. Let's stick to the simplified version. Now, what type of template do you want to use? By default, you can see Zero spend budget: create a budget that notifies you once your spending exceeds $0.01, which is above the AWS Free Tier limits. That sounds good, doesn't it? Because that means if I spend any money on anything, I will get a notification. Also, it's important to remember that an AWS budget will not disable any resources for us; it's not meant to. It will only notify us every time we exceed a certain threshold we configure here. The first threshold will be that $0.01: if we use any service that we have to pay for, it will send an email to the email address we specify below. So maybe before we go there, there is a budget name as well. It's called My Zero Spend Budget, which is okay, but let me just personalize it; maybe Mark's Zero Spend Budget. It doesn't really matter, it's just a name.
It's just a name. And here is where we
enter the email address. And as you can see, it doesn't have to be just one; you can put multiple
email addresses here. You just have to separate
them using commas. I will just use one; let's say, Mark at Avenue. That's it. If I wanted to add another one, I would just use a comma, and so on. But we'll just use this one. That's really all, I believe, because, yes, we can leave everything else as it is, and you can see a confirmation: you will be notified via email when any spend above $0.01 is incurred. That's fine; that's what I want. So we create the budget. And that's it, we've got our budget. And don't ask me why it always shows $1 rather than $0.01; I don't know why it is like that, but it should be $0.01. That's not a big deal, though; it works as expected. But maybe we want
to create another budget. What if we want to sometimes
use some services, but we don't want to
exceed $10 a month? So we can create
just another budget and set its threshold to that. Let's click Create budget. We'll leave it as a simplified template, which is much easier, and now we will change from Zero spend budget to maybe this one: Monthly cost budget. What's interesting about it, as you can see, is that it notifies you if you exceed, or are forecasted to exceed, the budget amount. If you start a new month and on the first day you have already spent $1, AWS Budgets can forecast that you will actually exceed $10 by the end of the month. So it will send you a notification before you actually reach that $10 threshold, because you are forecasted to exceed it by the end of the month. I hope that makes sense, because our ultimate goal is not to exceed $10 a month, so that's exactly what we need. Now we can scroll down. Let's call it 10 Dollar Budget, so it's clear for us what it is about. Now we just adjust the value to $10 and, again, provide the list of emails we want the notifications to be sent to. And that's it. Here below, you've got a summary of how it works. You will be notified when:
Your spend reaches 85%: so at $8.50, let's say, you will get an email; then again at 100%, at $10; and the third option, when your forecasted spend is expected to reach 100%. That's very handy and useful for us; it's exactly what I need. So let's just create the budget. This way, you can create as many budgets as you want and get notified every time you reach any of those thresholds.
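For reference, the same kind of $10 monthly budget can be created with the AWS CLI. The sketch below only writes and validates the JSON payload locally; the final aws command is echoed, not run, since it needs real credentials, and the 12-digit account ID shown is a placeholder:

```shell
# Minimal cost-budget payload; email notifications can be added with
# --notifications-with-subscribers in the same create-budget call.
cat > budget.json <<'EOF'
{
  "BudgetName": "10-dollar-budget",
  "BudgetLimit": { "Amount": "10", "Unit": "USD" },
  "TimeUnit": "MONTHLY",
  "BudgetType": "COST"
}
EOF
python3 -m json.tool budget.json > /dev/null && echo "budget.json is valid JSON"
echo "aws budgets create-budget --account-id 111122223333 --budget file://budget.json"
```

The template in the console fills in the same fields behind the scenes: a name, a monthly limit, and the notification thresholds.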
5. 4 AWS IAM Users and IAM User Groups: In the previous video, we created an AWS budget, so we can monitor our spending using those budgets, et cetera. But what I want you to notice is that we are still logged on as the root user, and the root user is not really the one we want to use to create anything resource-wise, like servers, load balancers, et cetera. We don't really want to use the root user for that. What we should use instead is some type of admin account: an admin user, or any other user with a more limited scope. So in this video, we will create an admin IAM user. The service I need is called IAM; we already saw it when we created MFA authentication, when we were creating the account itself. As you can see, I haven't completed that step, but I hope you did. What we want now, though, is a new user. If we work for an organization, we don't usually create just one; we create many, and we put them in some kind of user groups. So maybe let's start from groups instead. That's usually the way, but it doesn't really matter; we can start from any point, and we'll end up in the same place. But let's start from groups. So we create a group, a group of users, and we will call it admins. It will be a group of admins, and every single admin should have a similar set of permissions, and those permissions
are configured here. And you can see at the very
top: AdministratorAccess. That's exactly what we need. We tick that, and we can see it provides full access to AWS services, and that's fine. We need them to access services, but they will still have limited access to view any financial information. So we call them admins, and we create the group of admins. That's now done, but we can see we have no users within that group. So now we can go to Users, and we will create a user and place it within that group of admins. So let's just click Create user. Let's call it maybe
administrator, so a slightly different name. Now, what we need is to tick "Provide user access to the AWS Management Console". You can see it's optional, and somebody might think: isn't that the whole idea of why we create this user? We want to access the AWS Console; that's where we are right now, yes? Well, yes, that's true, but every user can have console access or what we call programmatic access. I will show you in the next video what the difference is and how to configure that. But for the time being, we need access to the console; our admin user should be able to access the console just as we do right now. So we tick "Provide user access", and then we click "I want to create an IAM user".
generated password or we can have custom password. Custom, I will be
able to type it in. Now, users must create a new
password at next sign in. Recommended. This is handy if you create that account
for somebody else. You create user, let's
say, I don't know, Jack, you gave him a password,
and then you click that. So when he logs on, he will have to change
it on the first login. But because we
create this account for ourselves, I would say, let's untick it and
we will be able to use just the password
we typed in here. That's it, we can
click the next button. Safe maybe know. Now at this stage,
we can add the user to the user group we
created 2 minutes ago, as you can see.
So let's add him. Next. As you can see, if we didn't have a group created yet, we could create one here as well. But because we already have the group of admins, we can just go straight to Next. That's it; here is a little summary. The user name is administrator; he or she will have permissions from the admins group, with a custom password, and it doesn't require a reset on the first login. That's cool. Let's create that user. And now we have these sign-in instructions. We can copy that information, or we can download everything as a CSV file, and I will use this option. As you can see, the administrator credentials CSV file showed up in my downloads; I can use that. But what I really need is that URL; it's pretty handy. Let's copy it, maybe. I still have it in that file as well, but let's copy it from here. And now I will log
out from here. I'm still logged on as the root user, remember? So we sign out, and we can either log back in here, but this time as an IAM user, for which we will need the account ID, 12 digits. But remember, I copied that URL. Let me open another tab; maybe I will show you both ways. I can paste it here, that long URL I just copied. It actually already has the account ID in it. When I press Enter, as you can see, the only difference is that I'm already on the page where I can log in as an IAM user. I don't have to choose that, and I also have the account ID already filled in for me. So now I just enter administrator, which is the name of the user we just created, and the password we created for that user. And now I can sign in. But maybe before I do that,
maybe before I do that, let me just go back to this tab. And as you can see, it's a
bit different login page, but it works exactly the same. So the account ID, I can
actually copy from here. Might be easiest
way. Let's go back. I can paste it here, here, if I click next, as you can see, we are exactly in the same
place. Hope that makes sense. So because I already
have that all filled in, let me just sign in from
here. Close this one. We are now logged
in as an IM user. As you can see,
says administrator at that long account number,
finishing with 8888. You can also see one more
difference because you can see access denied
in cost and usage. As I said, IM user will have limited visibility to financial information,
and that's fine. We will create this user
to create services. They don't have to do
anything with the finances. So I hope that helps.
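The whole group-and-user flow above also has CLI equivalents. In this sketch the commands are echoed rather than executed, so it's safe to run without credentials; the group and user names follow the video, and the password is a placeholder:

```shell
# CLI mirror of the console steps: group, policy, user, membership, console password.
# AdministratorAccess is an AWS managed policy, referenced by its ARN.
policy_arn='arn:aws:iam::aws:policy/AdministratorAccess'
iam_cmds="aws iam create-group --group-name admins
aws iam attach-group-policy --group-name admins --policy-arn $policy_arn
aws iam create-user --user-name administrator
aws iam add-user-to-group --group-name admins --user-name administrator
aws iam create-login-profile --user-name administrator --password 'REPLACE_ME'"
printf '%s\n' "$iam_cmds"
```

The create-login-profile step is the CLI counterpart of ticking "Provide user access to the AWS Management Console".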
6. 5 AWS CLI installation and configuration: In the previous video, we created the IAM user, and we are now logged in as that IAM user. It's called administrator, and this is our account number. But remember when it asked if we need console access? This is the AWS Console, and it asked us if we want to access it at all. At first sight, this might look ridiculous, because somebody would say, of course we want access to that, yes? But not every user will need this access. AWS also has programmatic access, and everything we can do here, we can also do programmatically from some remote location. Instead of trying to explain that, let me just show you; simply, that's the easiest way. So let's go back to our IAM console, where we created our user. And as I said, it's also recommended to set up MFA for the IAM users, not only for the root user, but let me concentrate on something else, because what I wanted to show you is in those users. Maybe I will click here; it doesn't really matter whether here or there. So this is our user. It's called administrator.
Let's click on that. What we need now is Security credentials. Let's click Security credentials and then scroll down. Here, what we're interested in is access keys. These are those programmatic access keys I was talking about. So let's click Create access key. We've got quite a choice, but we really want the first one, Command Line Interface, because that's how I will want to connect to my AWS account, from a remote server or laptop or whatever. So that's it; I just need to confirm that I understand the recommendations, and I click Next. Here you can describe it, but I will just click Create access key. As you can see, we've got an access key and a secret access key. The first is the equivalent of a username, and the second is the equivalent of a password. So from a remote location, I will be able to log in to this account as administrator using this kind of username, which is called the access key, and this password, which is called the secret access key. This is an important moment, because the secret access key will be shown only once. Let me show you what it looks like; I will remove it later on. It's very important that you do not show this to anybody. As I said, it is equivalent to your username and password, so if anybody can see it, they will be able to log on to your AWS account using those credentials. So be careful with that. I will remove them before this video is published. What we can also do here is download the CSV file; I will click that. This will be, as you can see, administrator credentials. No, sorry, that was from the previous video; administrator access keys is the one we are downloading now. So we will have that information in that file. That's it. Let me minimize that and
let me open the terminal. So this is my laptop; I'm at home. It can be a PC or laptop or some other server somewhere. How can I now access that AWS account from here? What I need is called the AWS CLI. Okay, we need the AWS CLI, but if you type, for example, aws --version, there's a very slim chance you already have the AWS CLI installed. We usually have to install it first and configure it, so let's do it now. Let me just go back to our browser, and let's Google how to install the AWS CLI. You might Google it for whatever operating system you're on. So let's check the first link and scroll down. Here we've got the instructions. If you're on Windows, you would use this; the same for macOS; but I'm on Linux, as I said. Well, it's not even a typical Linux; it's the ARM version, so I have to switch here to the ARM version of Linux as well. What I really need is to just copy those commands. So I can click those two squares to copy it, go back to my terminal, and just paste it. Enter, and that's it. Now, maybe let's clear the screen. If I now type aws --version again, as you can see, I've got it available. Okay, so how do we configure it now? Fortunately, they didn't make it complicated. It's the command aws configure. Press Enter, and now it asks
you for that access key. This is the one; remember, this is your access key. You can copy it here. Let's go back and paste it. Enter; now the secret access key. We have that as well; we copy it. This is the equivalent of your password. Let's go back and paste it here. What it asks you for now is the default region name. I know we didn't talk about regions yet, but a region basically is where you usually want to create your resources. Depending on where you are, you will choose your region. For me, it's London, so it's eu-west-2. It's not really something you have to type in; you can leave it blank, but then you will have to specify, every single time you create something, which region that resource has to be created in. For me, most of my resources will be in eu-west-2 (sorry, did I say one or two? It's eu-west-2, because eu-west-1 is Ireland, and I want everything in London, yes, so it's eu-west-2). But this is something you will be able to override later on anyway. So it's just for your convenience; you can leave it blank, as I said. If you have a default one, you will not have to type it, but you will still be able to override it. Okay, so Enter, and the output format; I'm not bothered about that right now. Now, I should be able to access
my AWS account from here. So maybe let's start with aws help. It will show you all of the services we can access using the command line, and as you can see, basically everything you can access in the Console, you can also access via the CLI. Because we don't have any resources yet, it might be tricky to demonstrate, but we can use IAM, because we've got a user created. So, aws iam, and then maybe help again, to see what commands we have available for aws iam. This one looks okay: list-users. Okay, aws iam list-users; let's see what we've got here. All right, a little hiccup, because this is a new system; as I said, there is nothing installed here. It says "No such file or directory: less". Well, my guess is that we have to install less. So, sudo apt install less. Okay, a little hiccup; let's try it again. And as you can see, now it works. It knows we've got the user administrator, and it gives us the user ID and some more information, like when it was created or when the password was last used. This way, we can access our AWS account from our laptop using the command line interface.
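Behind the scenes, aws configure just writes two small files under ~/.aws. This sketch recreates their format in a scratch directory, with obviously fake placeholder values:

```shell
# Recreate the two files `aws configure` writes, in a throwaway directory
# so nothing in the real home directory is touched.
cfg_dir=$(mktemp -d)
mkdir -p "$cfg_dir/.aws"
cat > "$cfg_dir/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEEXAMPLE
aws_secret_access_key = wJalrXUtnEXAMPLEKEY
EOF
cat > "$cfg_dir/.aws/config" <<'EOF'
[default]
region = eu-west-2
output = json
EOF
grep region "$cfg_dir/.aws/config"
```

The real files live at ~/.aws/credentials and ~/.aws/config; the four aws configure prompts map directly onto these four lines for the default profile.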
7. 6 AWS EC2 in AWS and Free Tier and SSH keys: Okay, here I am in my AWS Console. I'm still logged on as the IAM user administrator, and the service I need is called EC2; you can see it in the services: EC2, virtual servers in the cloud. That's exactly what we need. I clicked on it, and as you can see, instances running: zero, because I don't have any virtual servers yet. Let's create one. To create one, we can click on that orange Launch instance button. I'll click that, and now we can specify all the details for our server. First is the name, and the name is really not important, but let's name it Mark's server. It doesn't really matter what you name it; it's just so you recognize it's yours. We scroll down, and we can see the operating systems that are available for our server. Amazon Linux is picked by default, and Amazon Linux is like a Fedora-based Linux system with a lot of tweaks from Amazon, and that's cool. But we can also have macOS, Ubuntu, Windows. They're all called Amazon Machine Images. If you click this button, you will see there are thousands and thousands more of them. But let's just stick to the basics, maybe.
to the basics, maybe. I will just pick Ubuntu. What's important about
Ubuntu or Amazon Linux, you can see it's free tier eligible and I will talk
about it in a minute. For now, maybe I will
switch to Ubuntu, which is also free
tier eligible. But that's fine
for now. We'll get back to it. I scroll down. Here I can choose
architecture of my processor. It's 886, it's usually
Intel or AMD or RM. Is AWS has their own processors. They're called gravitons. You can use that
if you want. But I will just stick to the X 86. Here, you can choose the
type of your server. We've got Tito Micro. As you can see Tito Micro. Well, let me just
maybe open this. You can see there's many
and many of them available, scroll down as you can see
lots and lots of them. But what's important about
this one is that it has also that free tier
eligible label. Let's it. See this server has
one virtual CPU and one gig of memory. Below are the prices, how much it costs to run it. But I want to discuss
one more thing because we go back to it to
that free tier I mean. Let's scroll down for now, and we've got keeper. Keeper is SSH key
that we can use to connect later to our
server from remote location. Like this laptop from
my laptop, let's say, if I wanted to
connect to my server, I will need a keeper. As you can see, I don't
have a keeper and it says, I can proceed without a keeper, but it's not recommended. That's not really what I want. I want to create a new keeper. I'll click on that buttom
and I can call this keeper, whatever, Marx key
maybe. You know what? Maybe here we will change
to that ED 255, blah, blah, blah, because this is the newer and better version of that key. But to both of them
will work fine. Now we'll just click
Create Keeper. As you can see, it also
downloaded automatically. I will see it in my downloads, the Marx underscore
key dot p. That's the key I will need to later
connect to this server. As you can see, it's
now picked keeper Marx key. So heappy with that. Now we've got network settings, and I don't really
want to discuss network settings now, because this is a very broad topic. But if we leave everything as it is, this server will work exactly as I want it to anyway. So we just want to be sure that auto-assign public IP is enabled, and that the created security group has "allow SSH traffic from anywhere". This way, we will be able to connect to our server. If those boxes are exactly like here, that's all I really need, so we can scroll further. Next we've got Configure storage. This is the hard drive for our server, and as you can see, by default it has eight gig of what they call a gp2 root volume. gp2 is the older generation of SSD volumes; we can switch that to gp3, which is the newer one. And now I really want to go back to that Free Tier. As you can see, it's also mentioned here, because the amount of storage we use will also affect the Free Tier. It says I can have 30 gig of EBS storage if I want to stay in the Free Tier. I will change that eight to 30. You can leave it at eight as well, but I can have up to 30, so I will change it to 30; why not? Let me summarize it. If you've got up to 30 gig of gp2 or gp3
root volume chosen, and then if you have an instance type that is Free Tier eligible, and an operating system that is also Free Tier eligible, then this server can run for 750 hours every month free of charge. The next month, and the following month, the number of hours will reset, and you will have a new 750 hours for that new month. And if this is a new AWS account and you've got one year of Free Tier eligibility, then this server can run for a whole year completely free of charge. It will not cost you anything as long as you don't exceed any of the limits mentioned here. All right, so let's go back. And, well, that's it. There's nothing
else I need here. I can just launch instance. So I launch instance and my
server is being created. And it's now up and running. It's a successfully initiated
launch of the instance. I can click on that
identifier for my instance. Instance means Virtual
server. If I click on that. I can see Mark server, and the status is initializing. That means it's not entirely
full up to speed yet, but it's being created. Now if I click that button, you can see more information
about that server. And one of the most
important ones for me right now is my public
IPV four address. Public address means
I will be able to connect to that server
from anywhere in the world. Let me maybe refresh that first. Let's see... it's still initializing. That's fine. But even though it's initializing,
to it already. I can click this
button here, connect. And I've got quite
a choice here. The first way to connect to my server is called EC2 Instance Connect. And if I click this button, the AWS Console will take me to that server and then
log on to the server. I can run commands now. For example, df -h is a command that will show me my root volume, that hard drive that we attached, and it says 29 gig, and 1.6 gig is used, so I still have 28 gig available. That makes sense because we created a 30 gig volume. So if I go here again, I can go to storage, and we can see it actually is
30 gig in size. All right, so that's how it's done locally from the AWS Console. But what if I want to connect to it from my local terminal on my laptop? Let me just resize that so I can show you. This is my laptop. This is a terminal on my local laptop here at home. I can still go here, click Connect, but now
I've got some hints. If we go to SSH client, it tells me what I
can do to connect remotely from my
home to this server. It says, open an SSH client. Well, my terminal already has, or my system, I should say, has an SSH client. The next thing I have to do is locate my private key. Remember that key that we downloaded? It was downloaded automatically as marks-key.pem. You can show it in Downloads. Yes, it's in Downloads. Here I have to navigate to my Downloads folder. So, cd Downloads. And I can see this is the file that was downloaded, marks-key.pem. All right. So what
I should do next is run this command: chmod 400 marks-key.pem. I can click on those rectangles to copy that command. Go back to my terminal, one second. Let me make it bigger, and maybe clear that. All right. Now I can paste it and press Enter. What I should do next: connect to your instance using its public DNS, and an example is here. I will copy that command, ssh -i marks-key.pem, paste it here, and this should take me to my server. This is just a standard warning. The server is not known, so it asks you if you are sure you know what you're connecting to. But I am sure, because that's my server, so I type yes, press Enter, and now I'm also on my server. If I type df -h, you can see that this is indeed my server because it
has 29 gig hard drive. I've got 28 gig available. That's exactly what we
saw just a minute ago when I connected locally
using AWS Console. But this time, you know, I can minimize that just
to make sure it's clear. I'm connecting from my
laptop that I have at home, and I'm connected to my server that is in AWS Cloud somewhere. And I can list some of these devices; you can see this is my drive. You can run, for example, htop. You can see the CPU utilization, memory utilization, and all that cool stuff. And if I run cat /etc/os-release, I can see it is indeed the Ubuntu
operating system. Okay, that's how to create the server and how to connect
to it from a remote location. When you stop playing with your server, I would suggest you also destroy it, because once you destroy it, you save those hours on that free tier. Remember, you have 750 hours. So I now destroy this instance: Instance state, Terminate. Yes, terminate. I'm removing
this server, as you can see, it's shutting down
because maybe tomorrow, I want to create two servers. And if total hours for all of my servers do not
exceed 750 hours, I still will not be
charged anything, because that 750 hours is the overall amount for all the servers that I run here. So I can run one server continuously for a
month or I can run two servers continuously
for let's say 15 days or maybe three
servers for ten days. None of that will
exceed 750 hours. So I still will not
be charged anything. Hope that makes
sense. All right. And I will see you in
the next episode. Bye.
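The free-tier scenarios just described come down to simple arithmetic: the total instance-hours across all your servers must stay at or below 750 per month. A quick sketch of the three scenarios from this episode:

```shell
# Free-tier check: total instance-hours across all servers must not exceed 750/month.
# Each scenario: servers x days x 24 hours.
for scenario in "1 30" "2 15" "3 10"; do
  set -- $scenario
  echo "$1 server(s) for $2 days = $(($1 * $2 * 24)) hours (limit: 750)"
done
```

All three come out at 720 hours, which is why none of them incurs a charge.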
8. 7 AWS EC2 in AWS using AWS CLI only: Today, we will accomplish
exactly the same task, but we will use
AWS CLI for that. So let's open the terminal first. With the terminal here, we can check if we have AWS CLI installed at all: aws --version. We've got version 2.15.24, as you can see. A very helpful command
when we try to start building our commands
in CLI is help. Simply after typing
something like AWS, we can always type help, press Enter, and that gives us all available arguments
to that command. Here we've got, as you can see, everything you can
configure in AWS console, you can also
configure in AWS CLI. And here we need EC two. That's exactly what
we need because we want to create EC two instance. I can type Q to exit. I can type again AWS EC two, and I can type help again. And this will give
me help or list of available arguments
for AWS EC two. As you can see, loads
and loads of them, and the one I really want. Well, you know what? I
will not bother with that. I just wanted to point out
that this help is really useful when you want to start
building your commands. So the command I need to create an EC2 instance is aws ec2 run-instances. And then I need to pass some arguments. That's what I mean.
It's very useful if you created already the
EC two instance in AWS console because now we can scroll through those options and we can pass them as we go. You will see what I mean.
First, we choose Ubuntu, yes. Ubuntu is an Amazon Machine Image, and each Amazon Machine Image has an AMI ID. You can see it here. I can copy that. I can pass it to my AWS CLI as --image-id. Then I can just paste it. What it really says is that
our EC two instance will have that Ubuntu as operating
system. But you know what? There is one thing I want to change, because typing everything in one long line does not look really good. What we can do, we can
use the backslash. If we put backslash
here and click Enter, you can see a little
triangle here. That means this command will
be continued in new line. You will see it will look much better when we use
those backslashes. So, the image ID of Ubuntu. Now I can type space, backslash, and press Enter again, and we can continue on a new line. I think this way it looks much clearer than one very long line, and it would be very long, because that's not the only argument that we need. If we go back to our console, that was only the first argument. First, we decided what
our server will run on. But next we had the instance type, and we chose t2.micro. We chose it because it was free tier eligible. We want the same type of instance when creating via our AWS CLI. We can type here --instance-type t2.micro, and backslash again. What else did we have here? Scroll further:
we had the key pair. Remember, we created one because we didn't have any. It was called what, marks-key? We can add that as well to our command, as --key-name. I believe we need those quotation marks around it, and I can then use the backslash again. What else did we have? I mean, we didn't change
anything in network settings, and the storage, even if
it's eight gig, that's fine. That's basically
roughly what we need. What we can specify, though, is the region where we want
that server to be created. Because remember, we've got one here, and I've got eu-west-2, and this is also what we configured by default for our AWS CLI to use. But just for clarity, we can add that as well: --region eu-west-2. And that's all the
basic information I really need to
create our server. If I go to all of the services, if I go to EC two, you can see, I have no instances
running right now. If we go there, you can
see no matching instances. I've got no running
instances right now. But what I will do, let's
take that terminal. Maybe now I will
make it smaller. I will click Enter
and we will see in the background that
it is being created. Enter. And after a while, you can see it says instance ID. If I go up and refresh, well, it's still no matching instances because it's not
fully running yet. But if I keep refreshing, you can see now the
instance is running, but the status check
is still initializing. That means the server
is nearly ready for us. If I refresh again, well, still initializing. But regardless, we can see the instance ID finishes with 9360. That is indeed our instance ID. So what we have in the console and in the AWS CLI matches, yes: we can see the instance type is t2.micro and the key used is marks-key. That means I should be able to connect. One second, bear with me here. Maybe I will clear that. We can try to connect to that
instance using this key. I have an example here. My key was in Downloads, actually. So this is my key, and we will bring it up. Yeah. So this was my key, marks-key.pem. And if I run that command, if I paste that command, I should be able to log on to my server, but we'll wait for a while. Oh, come on. What's going on? Let's try again. Sorry. Let me Ctrl+C that. We actually omitted one of the stages. So let me show you. If we go to security, we didn't specify the SSH
port to be allowed in. This is the security group, and that's what we did
when we used the AWS console, but I forgot to do it here. Never mind, we can change it: edit inbound rules. We can add a rule, SSH from anywhere, and save rules, and we can see SSH is now open. Now if I go back and run that command a third time, well, yes, it will let us in. df -h: that's what we also ran in the previous video. We can see our drive is here. It's eight gig in
size this time. But we can specify all those values and we can specify also that security group using those arguments that are
made available for us. If I do aws ec2 run-instances help, we'll be able to find them all. As you can see: security groups, some fancy things like kernel ID, and loads and loads of stuff. And here below, you've got an exact explanation of what each of them will do and how it will work. But anyways, I
will just quit, and another command we can use is, for example, aws ec2 describe-instances, which will tell us a lot of information about our instance. I can use the backslash again, and I need the instance ID, or --instance-ids, I think it is plural, and our ID. I mean, the ID of our instance. If I didn't specify that, it would show us all instances available in the given region. Oh yeah, the region is actually the other thing that I need: --region eu-west-2. And I also need... sorry, I kind of messed it up. Maybe let me clear that. I need aws ec2 describe-instances, then --instance-ids, which is that instance ID again, and --region eu-west-2. Now when I press
Enter, it will show me all information
about that instance, and as you can see, it's quite a lot because
if I click space, you will see more and more information
about that instance. As you can see, this
is private IP address. This is public IP
address. This is DNS. If I click space again, we have root device, we have security group ID, VPC used, everything, all the information that you can
find here in all those tabs. You also have them
available here in CLI. So it's very long list with all possible information regarding this
particular instance. Now I press Q again. You know what we can do, what we should always really do at the end: when we finish playing with our instance, we should terminate that instance. As we can see, the instance is now up and running. We'll go back to our terminal, and we can type aws ec2 terminate-instances, it's also plural, and then --instance-ids. The instance ID is here. I can paste it, backslash again, --region eu-west-2, and now when I press Enter, this instance will be terminated.
It says shutting down. If we go here and refresh, let me remove that status. As you can see, the instance
indeed is shutting down. If I refresh it, now the public IP disappeared, and a little bit later it will be gone. Oh, it already is terminated.
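Collected in one place, the CLI workflow from this episode looks roughly like this. The AMI ID and instance ID below are placeholders, not real values from the video; substitute your own:

```shell
# Create the instance (same choices as in the console walkthrough)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name marks-key \
    --region eu-west-2

# Inspect it (without --instance-ids, this lists every instance in the region)
aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --region eu-west-2

# Clean up, so the free-tier hours are not consumed
aws ec2 terminate-instances \
    --instance-ids i-0123456789abcdef0 \
    --region eu-west-2
```

This is a sketch of the session above, not a script you can run as-is: it needs valid IDs and configured credentials.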
9. 7a AWS regions 3 completed: We know how to create an EC2 instance manually, and we know how to create an EC2 instance using AWS CLI.
of information here, and it's around those
regions here and the networking configuration
for EC two Instance, not only for EC two, but for
any service, to be honest. Let's start with those regions. What are they? And why do we
have to choose one, even? Maybe let's go to Google first, and let's just search
for AWS regions. Got something called global infrastructure regions and ACs. That's what I'm interested in. We can see all the regions available and
availability zones, but we will talk about
availability zones later, and these are regions
for North America. I personally in Europe, we can just click here Europe and we can see regions
for Europe as well. AWS started in North America, and when it started, it was just an internal
for Amazon service, not even available for
external customers. Then they decided they
will start sharing their infrastructure
with other people, with other businesses. That's when they
really started to grow and when they started
creating new regions. We've got here a region
called Canada West. We've got Oregon, we've
got some government Cloud. And if we go to Europe,
here, for example, London, and that's the region I usually use for my infrastructure. But we also have
Ireland and some other, and we can also see coming soon. Which one is that? It's Germany. But we've got here Frankfort,
it's already launched. Well, it was launched ages ago, but we can see when new
region is coming as well. If I go back to my AWS account,
you can see all of them. As I said, I usually
choose London, but we can see that island. Please note the
difference though. London is just one
city like Paris, but Ireland is entire country. What that really means,
if we go back to that map is simply
AWS managed to create three or
more data centers in that very specific
location, which is London. Or around London, but very close in close
proximity to London. So they probably have three or more big data centers that can be completely
independent. I mean, they have
separate power supplies, different separate Internet
connections, et cetera, and that bunch of
those data centers are interconnected with very fast, secure internal connections. So they can act as a one thing. But if one of data
centers go down, the other data center
can take over or simply can continue working
as if nothing had happened. In UK, they managed to do
it just around London. It doesn't have to
be inside London. In fact, I know one of the data centers where it is
and it's outside of London, but it's very close proximity. In Ireland, though, even though it points
roughly at Dublin, they probably have
one data center that is much further than
Dublin. I'm just guessing. That's why they
called that region Island rather than Dublin, rather than being more specific. But basically, region is an area where they
have they, I mean, AWS have pretty robust
infrastructure, consisting of three
or more data centers that can work independently. That's all it is. And how
you choose your region, you usually choose the
closest location to where your customers live or where your
customers are located. But that's not always the case
because maybe you have to consider some GDPR rules
or some other aspects. Like even UK based businesses, they often choose Ireland
if they want to focus on European Union GDPR rules rather than on specific
United Kingdom rules, which does not belong to
European Union anymore. But choosing the region is
always about your customers. You have to think what's
the lowest latency, what's the closest proximity, and you have to consider
all these other rules, and then you decide on which region to choose
to serve your customers. That's basically it. That's all you have to know about regions.
10. 7b AWS VPC 3: Now let's talk about VPC. VPC means Virtual Private Cloud, and when we were creating our instance... let me maybe create one. Launch instance: we chose our name, we chose our AMI, and then, if we scroll down to network settings, we omitted that entire section. But look at that: the first item is VPC. As I said, VPC means Virtual Private Cloud, and it's basically an area, your private cloud, where you are going to build all your resources. But what is that? We haven't created any VPCs. Well, the fact is, in each region, like here, Europe (London), AWS created a default VPC for us, and this is the default VPC. It finishes with f746, and that VPC is built for us per region. What that means: let me duplicate this tab, maybe. Well, I have to go back to instances first, because I want to switch the region to, let's say, I don't know, Northern California, whatever, something different than London. Now we are in the United States, Northern California. If I launch an instance here, we again have the name, blah, blah, AMI. You will also see that the AMIs will have different IDs, but that's not what we are talking about; they are also region dependent. We scroll down to network settings and have a look at that: this VPC has an ID finishing with 2cff. If I go back, my London VPC is f746, completely different. And if I try to edit it, I will also see another interesting fact. Look at the subnet: 172.31.0.0. If we go here and edit the one in Northern California: 172.31.0.0. So basically, we've got the same subnet allocated for both, but they are completely different Virtual Private Clouds. A Virtual Private Cloud is something you configure per region, but you are not limited to one VPC per region. You can create multiple VPCs per region, but basically, all the resources you create will have to be within a VPC. I mean, with some exemptions: you also have something called global services, like Route 53 or CloudFront. But except for the global services, everything else will have to be within a VPC. And how do we create a VPC? If we go back here, maybe I don't want to use that default VPC; as it even says, default. I don't want that default VPC. Let me duplicate the tab again. I can search for a service called VPC. They don't call it Virtual Private Cloud anymore; they call it Isolated Cloud Resources, but that's exactly what it means. It's your private cloud. If I go to VPCs, we can see we've got one, because we've got only that default one that Amazon created for us, and this is the subnet. But I can create another VPC, and I can call it Mark-VPC. The CIDR, the IP subnet or prefix, can be completely different. Maybe 10.0.0.0/16. Now I create the VPC. The VPC has been created. Now if I go back to the instance, I refresh using this arrow. Now I have a choice between the default and Mark-VPC. Maybe I want to create it in that new Mark-VPC, which is also in region London.
11. 7c AWS Subnet and AZs 4 completed: Now we want to talk about subnets, because that's our next position in the networking settings. We talked about the VPC; now we've got the subnet. But I also want to talk about something called an availability zone. And we already saw availability zones when we saw that map with regions. Actually, it's called Regions and Availability Zones. The fact is, AWS had an older version of this map, which I liked more because it showed you more clearly what it is and what it looks like. Now, when you hover over it, it will just show you "availability zones: three". But the previous map was more accurate. Never mind. What I mean is, region London has three availability zones. But if I go, for example, to North America, and we pick, say, Oregon, we can see availability zones: four. But if we go to Northern Virginia, we can see six availability zones. What does that mean? You remember when I talked about those regions? I said a region is created when AWS builds three or more data centers that can work completely independently. Each will have a different power source, a different Internet connection, et cetera. An availability zone doesn't have to be one data center; it might be a bunch of two, for example. Usually, one or two data centers will create one single availability zone, and then another data center that is completely independent will create that second availability zone. Then, when they create a third data center, or a couple of data centers, that can act completely independently, they have three availability zones and can call it a region. But what does it have to do with subnets? It looks like an availability zone is something regional; it's not a networking thing. Well, it is, because if we go back to our instance and I click now on Subnet, well, it shows me no subnets found, because we haven't created one yet. But I can go to that separate tab. We also have subnets there, and we can see three subnets that are already available for us, but they are default AWS subnets. That's something AWS created for us. But let's maybe forget about those, and I will create a new subnet, because I want to create that new subnet in my VPC, in my Mark-VPC that I created in the previous video. Now I've got subnet settings. I can choose my name: Mark-subnet. And what is next? Availability zone. No preference means one will be allocated for me automatically, but I can be more specific, and I can choose exactly where I want this subnet to be located. So what that means is that this networking setting really binds me to some physical location where AWS has their data centers. As I said, London has three AZs, so I've got three possibilities here: eu-west-2a, 2b, and 2c. These are the availability zones that I'm able to choose from. Why is it important at all? Some services, like EC2, like your servers: if you create them all in the same subnet, meaning in the same availability zone, you will not pay for cross-AZ transfer, as it's called. If, for example, you created one server in AZ A and a second server in AZ B, and you want those two servers to communicate with each other, that would mean you will have to pay additionally for cross-AZ traffic, because that network traffic has to leave one data center and travel through the AWS internal network to the other availability zone. And even though they are in close proximity, that can incur additional charges. So you are creating a subnet here, but, well, I can't leave it as it is, because I have to choose the CIDR as well, which is the IP prefix for my new subnet: 10.0.1.0, maybe /20. Now I can create my subnet. And that subnet is now created. It's part of VPC Mark-VPC. But if I check it, I can see I also bound this subnet to the availability zone eu-west-2a.
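A subnet pinned to one availability zone can be sketched in CLI form as well. The VPC ID below is a placeholder, and the /24 prefix is just an example size within the VPC's range:

```shell
aws ec2 create-subnet \
    --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.1.0/24 \
    --availability-zone eu-west-2a \
    --region eu-west-2
```

Specifying --availability-zone explicitly is the CLI equivalent of not leaving "No preference" selected in the console.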
12. 7d AWS security groups 4 completed: Okay, I created the VPC, and I created my subnet now, so I have to refresh first. If I refresh, I can choose... well, it's chosen automatically because we've got only one. We created Mark-subnet. But the next network setting is the firewall, and in the brackets it says security groups. This is kind of self explanatory. I mean, the name they chose is pretty good, I would say. So if we scroll further, we can see we can create security group rules. What that means: I can choose the ports that will be allowed to connect to my EC2 server. And by default, SSH is already chosen. It says the protocol is TCP, the port is 22, and the source type is anywhere. That means I will be able to connect to my EC2 instance from any IP address. But I can be more specific. I can, for example, choose Custom, and maybe I want to only be able to reach it from, I don't know, maybe my address is 33.44.55.66. Then I will only be able to connect to my EC2 server from this particular IP address. So basically, what a security group is, is a firewall around your EC2 instance. And you've got inbound and outbound rules. What very often happens, what it looks like: you only allow inbound a very specific port from a very specific IP address, but outbound allows all traffic. I mean, that's not always the case, but it's something you will very often see, even in production environments. Because how that firewall works is: if your server initiates the traffic, let me repeat, if the server initiates the traffic on any other port, so that traffic goes out, the reply will be allowed in, even if the inbound rules would not allow that traffic. The traffic will be allowed because it was initiated from the server itself. That's how firewalls work anyway. The inbound security rule is only for traffic that is initiated outside. So if I want to connect to my server on port 22, SSH, but from a different IP address than the one I chose, I will not be able to. It will be prohibited, or simply dropped. But if my server initiates the traffic on any other port, maybe on port 443, which is HTTPS, then the return traffic automatically will be allowed in for that session. Yeah, that's all about security groups, about the firewall rules. They are basically firewall rules for your server.
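The inbound SSH rule described above can also be added from the CLI. A sketch, with a placeholder security group ID and an example source address:

```shell
# Allow inbound TCP 22 (SSH) from a single address; use 0.0.0.0/0 for "anywhere"
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 33.44.55.66/32 \
    --region eu-west-2
```

The /32 suffix is how you express "exactly this one IP address" in CIDR notation.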
13. 7e AWS S3 bucket 4 completed: Now we want to talk about a very popular service called S3. S3 means Simple Storage Service, and it's one of the oldest services available on Amazon Cloud, on AWS Cloud. So let's search for S3. They rebranded it to "scalable storage in the cloud", but never mind. Let's click on that S3. We've got some buckets already. I can't remember what they were for, but we can create a new bucket. What's a bucket? It's simply storage, like a placeholder where you can store your files. I create a bucket, and it's an interesting service because it's regional. You don't choose an availability zone for your S3 bucket, but you do choose the region. As you can see here, region eu-west-2, because I'm currently in Europe (London); that region is automatically picked up by this user interface. Then, the bucket name is also interesting because it has to be unique. If somebody else already owns that bucket name, you will not be able to create one. If I, for example, want to create a bucket called Automation Avenue, and I ignore all other settings and create the bucket, it takes me back. It says a bucket with the same name already exists, and indeed it does, because I already own it, but in a different account. I have to choose something that is unique, globally unique. Maybe mark-test-bucket. That should be fine. But what I also wanted to mention is the bucket versioning. Simply put, the bucket is the place where you can upload your files. But if you enable versioning and you upload a file with exactly the same name, it will keep all the versions of that file, which means you will be able to choose which version you want to download later on. Whether you want the old version or the new version, you will have them all kept in this S3 bucket. You will not overwrite it. You will simply put the new version on top of the old version, but they will all be available for you. I hope that makes sense. And then you can encrypt it. By default, it is encrypted. Now if I create the bucket, it should be successful. And it is. Look at that: mark-test-bucket. So now if I go there, if I want to upload something, I can add files or folders. Maybe add files, maybe something from Downloads. I don't know what this is, but let's just say open. If I click Open, that file will be uploaded to my S3 bucket, and I can see it here, code.txt, whatever it is. And if I want to remove this bucket, I have to first remove everything that is within that bucket, which means I have to remove this file. Then I can go back to my bucket. I can click on that and say delete. But I also have to type all that stuff, just a warning asking if you really want to delete it or not. We'll paste it like that. Now I can delete my bucket. Very useful, and you will use it a lot, but also, as you can see, it's a very simple service.
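The whole bucket lifecycle from this section can be repeated with the CLI's high-level s3 commands. A sketch; the bucket name must be globally unique, so substitute your own:

```shell
aws s3 mb s3://mark-test-bucket --region eu-west-2   # create the bucket
aws s3 cp code.txt s3://mark-test-bucket/            # upload a file
aws s3 rm s3://mark-test-bucket/code.txt             # a bucket must be empty...
aws s3 rb s3://mark-test-bucket                      # ...before it can be removed
```

Just like in the console, rb fails on a non-empty bucket, which is why the file is removed first.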
14. 8 AWS EICE: In today's video, I want to talk about an AWS service called EC2
Instance Connect Endpoint. It's a pretty new solution, but still not many people
are aware that it exists. I wanted to show how we can use that Instance
Connect Endpoint, that service, so we can
securely access our servers. First of all, what is it for? What problem does it solve? When we were creating our servers, we could only access them from a remote location, like our laptop at home, because of the fact that we chose, in the networking settings, the option to have a public IP assigned. So we could then run a command on our laptop like ssh mark@whatever, and it worked, because if a server has a public IP address, it is reachable: public IP addresses are available from
anywhere in the world. The problem is that we do not always want to assign public IPs to our servers. Sometimes we don't want to expose our servers to anybody else except us. So there were
solutions for that. One of the solutions was to use a Bastion, and it is still a valid, working solution today. For those that don't really know how that Bastion, or jump box, solution works: we simply have to build another server. We usually call it, as I said, a Bastion or jump box, and that server will be configured with
through that box to access our server that is behind it and it has only
a private IP address. If our private server has a private IP of 10.10.10.10, let's say, and our Bastion has a public IP of, let's say, 9.9.9.9, only nines; if we have a user mark, for example, configured on both of them, and if we have an SSH key working, then we can use a command like ssh -J, which means jump through something. Then I can specify the username and IP address of the Bastion, and then again the username and IP address of the private server that is behind it. This way, we can jump
through that first box with a public IP to reach the second box that does
not have public IP, but they will communicate
using private IP address. This kind of works,
but that means that we have to pay for two
servers now instead of one, and it also means that this Bastion, or jump box, server is exposed to all kinds of attacks we can think of. That's why AWS came up with a direct replacement for that Bastion type of access. They said: you know what?
You can create that EC two instance connect
Endpoint service, and then you can use your access key and
secret access key to be able to jump through that endpoint service as
if it was your Bastion. That's perfect for us because
we already created our access key and secret access
key in previous material. So maybe now let's jump
to our AWS console, and we will set up
the endpoint and we'll see exactly step
by step how it works. So you can follow and
set it up yourself. So first of all, I'm in the EC2 console. As you can see, I have no instances, so let's quickly create one, but this time with no public IP. It doesn't matter what the name is; I will just call it Marks. We can use Ubuntu, instance type t2.micro, as it's free tier eligible. We configured that a few
times in previous videos, so you can go back and
watch those if you have never configured EC two
instance in AWS console. The key pair, we still have
the marks-key. We can use that. The only difference here is that auto-assign public IP is enabled. So we will edit that, and now we will disable the auto-assign. So we will not have a public IP address. Okay, and the rest is fine. We just launch
the instance and wait a few seconds; already done. If we go to that instance now, I know it's pending, but we can see there is no public IPv4 address. Even if I refresh or click this box to see all the details, you can see the private IP address, but there is nothing in
public IPv4 address. Now if I wanted to connect, this command will not work from my laptop, because I can't reach that private IP address. I can prove it. So here is the private key, and if I paste that command and press Enter, nothing will happen, because my laptop will not be able to reach an IP address that is private and not public. So I Ctrl+C. We will need that intermediate step, that replacement for the Bastion. We could create another server as a Bastion, but today we'll concentrate
solution from AWS. Where we have to go,
we have to go to VPC. VPC shows as Isolated Cloud Resources; as you can see, Virtual Private Cloud. This is the service
that we need right now. What we really need
here is the endpoints. We click on Endpoints on the left side, and as you can see, no endpoints found, so we have to create one. Create endpoint, and the type is EC2 Instance Connect Endpoint. I can name it if I want; it's optional, but it will be Marek endpoint. I have to pick the VPC,
but I have only one, so the default that Amazon
created, we can see default. I believe the security groups we can omit, and we can pick the subnets where we want that endpoint to be created. Any subnet we pick here will work for us anyway. It only matters sometimes, if you already have some instances. Well, okay, let's do it properly. Let's go back to those instances. I click that instance again, and I can see availability zone 2c. And I will check the subnet; it's finishing with 591. So I will pick the same one. This one finishes with 591. And now I can create that endpoint. It took only a few seconds, and it's up and ready for us. What we can do now, I
will go to my terminal, maybe clear that,
make it bigger. First of all, we need a pretty recent version of the AWS CLI. So if I run aws --version, as you can see, I'm on version 2.15.24. But what I'm really interested in is whether I have the open-tunnel option available under aws ec2-instance-connect, and I can check it by running help. So it's aws ec2-instance-connect help. And I can see the available commands, including open-tunnel, because we will tunnel through that EC2 Instance Connect Endpoint. If you have an older AWS command line interface installed, you will not have this option available. So you simply have to install
a newer one. I will quit. And now I can try to connect to my instance going through that endpoint service. So let me go back here. Let's go to the instance. Let's copy that command again. I know we've just run it and it didn't work, but we will now add something else. I will type -o ProxyCommand=, and now it will look awful, but we will streamline it later on; ignore it for now. The proxy command will be aws ec2-instance-connect, and now open-tunnel. That's why we need that newer version of the AWS CLI, because we need that open-tunnel option. Now what we need is --instance-id and the ID of the instance we want to connect to. We've got it here; that's the instance ID. All right, we'll paste it here. The whole proxy command has to be in quotation marks, by the way. And now we press Enter. And as you can see, it takes
us straight to our instance. If I run df -h, that's the disk space I've got available, and as you can see, .11.186, that is dot 11, 186, that's the private IP address of our server. We can now tunnel through that EC2 Instance Connect Endpoint to reach our server that does not have a public IP. But let me exit from here, and you know what, that command looked awful, didn't it? Let me Ctrl+C, and maybe I will clear that. I will show you how to make it look much better. If we use tilde, that's our home directory. I mean, my home directory, /home/parallels; we've got here the .ssh folder. If I go there, cd .ssh, I should have some files here. All right, I've got something, but I will add a new one. It will be called config. I will create it with vim config. And here I can configure how I want to SSH to basically
anything, really. I will put Host; you will see in a minute what I mean. Host, I can call it whatever; let's say xyz. User, I will need all those details that AWS already provided. So the user will be ubuntu; I will copy that. HostName: that will be our IP address. What else do we need? We need our SSH key. I can put here IdentityFile, as SSH calls it, and I have to specify the path to my SSH key. So it was in my home directory, in Downloads. And what was it called? So Downloads, you can see in Downloads, marek_key.pem. Yeah, something like that. The last thing is not here, but it was actually the longest part: it was that proxy command. So I can use ProxyCommand in this configuration here as well. It was aws ec2-instance-connect. It was open-tunnel, because we had to tunnel through that EC2 Instance Connect Endpoint, and the instance ID was the thing we also needed; it is this. Oh, by the way, sorry, before I go any further, as you can see: no public IPv4 assigned. With no IPv4 address, you can't use EC2 Instance Connect. Alternatively, you can try connecting using EC2 Instance Connect Endpoint. Look here, this is the service that would let us connect to our server as well, from the AWS console. And this is the identifier of our EC2 Instance Connect Endpoint. So there are two different things: EC2 Instance Connect and EC2 Instance Connect Endpoint. But we don't want to connect using the AWS console; we're connecting from my laptop. So I will just copy that instance ID, but it's worth mentioning as well. All right, I will just paste it here. That's my instance ID, and now I can press Escape, then :wq, write and quit. So now if I do cat config,
this is what we configured, and now I should be able to SSH using just that name that I provided. It can be whatever you need, but I called it xyz. So let's try now. It takes us to our instance as well, but this time we don't have to type that very long, awful command. That's really all I wanted to say about that service. I just wanted to mention: remember to always remove the instance. When you stop playing with it, just go back to the instances and terminate it, so it doesn't cost us any money. Regarding that endpoint, we can leave it here. It doesn't cost us anything anyway, so it can stay there.
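To recap, the long one-liner and the equivalent ~/.ssh/config entry look roughly like this. This is a sketch: the instance ID, IP, key path, and host alias are placeholders standing in for the ones from this demo, so substitute your own, and note that open-tunnel needs a recent AWS CLI v2.

```shell
# One-off: SSH through the EC2 Instance Connect Endpoint tunnel
ssh -i ~/Downloads/marek_key.pem \
    -o ProxyCommand="aws ec2-instance-connect open-tunnel --instance-id i-0123456789abcdef0" \
    ubuntu@10.0.2.186

# Or put it in ~/.ssh/config once, and from then on just run: ssh xyz
cat >> ~/.ssh/config <<'EOF'
Host xyz
    User ubuntu
    HostName 10.0.2.186
    IdentityFile ~/Downloads/marek_key.pem
    ProxyCommand aws ec2-instance-connect open-tunnel --instance-id i-0123456789abcdef0
EOF
```

The config version is exactly what we typed in vim, just written out in one place.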
15. 9 AWS Website with AWS Lightsail: Again, in today's video, we will create our own website in probably the easiest and quickest way possible. My guess is that this video will only be a few minutes long, but at the end of it, you will have your own website up and running, and you can then personalize it further any way you wish. Our website will also have a public IP address, so you will be able to access it from anywhere in the world. Let's jump right in. The service we need is called Lightsail. So what we need first is to search for that Lightsail service. And we can see 'launch and manage virtual private servers'. Sounds similar to EC2. Well, it is EC2, actually, in the background, but you can see the difference nearly immediately. First, we can pick our location, where we want our
server to be created. London was picked for me,
but I can change it here. If I want one in
Mumbai, why not? I can have it in Tokyo, but maybe let's stick to London as it was chosen by default. My server will be in London and the website, and
then we scroll down. We can choose the platform. We can choose Microsoft Windows, but notice you will have only the operating system; you will have no programs installed. But if you pick Linux or Unix, you can see there are plenty of applications available that can be installed together with the operating system. You can see WordPress; it's a very popular platform to create web pages. But there is also Joomla, and the likewise very popular nginx web server. There is also Magento, to create ecommerce websites. There is also PrestaShop, because maybe that's what you need. Maybe you want to set up an online shop and start selling stuff online. But I will choose maybe that default one, WordPress. So we will have Linux installed and WordPress
at the same time. Then we can scroll further. And honestly, I don't think
we need anything else here. Plus, as you can see, you have the first three months available for free. So for any server you pick out of those three, the one for $3.50, $5, or $10, the first three months are free. But honestly, even after those three months, $3.50 per month, it's not bad, is it? So I will stick with the default; it was this one, I think. It doesn't really matter for me, as it's only going to be temporary. But anyway, we can scroll further. We can change the name for that server, although this doesn't really matter. I will call it Marek... what was it? WordPress. Okay, create instance. That's it. The server is being created. You can see it took only, what, 5 seconds? Not even 10 seconds. And it's greyed out,
but I can refresh. And you can see this now is
blue and I can click on it. This will give us more
information about our server. The most important bit is the public IPv4 address. But what you can also see is 'Access WordPress admin'. This is the admin console for WordPress, and we will get back to that soon. First of all, let me copy that public IPv4 address; I click on those rectangles. I click plus to create a new tab, and I will paste it here, paste and go maybe. Um, nothing is going on
yet, but you know what, sometimes it takes a
while because this server has just been created and I know it says running,
but you know what? Let's give it a minute or
two or maybe even five. For the time being,
I will just go back here and keep refreshing.
Still nothing. So let me pause the video, and I will get back when it's up and running. Okay, let's try again now. Okay, it's doing something, you can see, right? That's our website. I mean, this is like a template
that WordPress gives you, and you can simply
play with that. So we've got some pictures, you've got, you know, well, honestly, you can
remove that all and create something
completely different. But, you know, if this template
is close enough for you, you've got sample pages. So it's a sample that
WordPress gives you, and we can amend it easily here. I mean, not here, but
in the admin console. So I will show you
also maybe how to access that admin
console. Might be useful. If you click that button, you can see the
dashboard can be opened by clicking this
link to WP Admin. Click that, it will open
yet another window, but we need credentials. Where I can find
those credentials, if I go back here
and close that, you can see the user name is easy, because it's 'user'. I can copy that, super lazy, copying four letters, but never mind. It's 'user'. The password is a bit more tricky, but it's not rocket science either. If we go back, you can see 'default WordPress admin password', 'retrieve default password'. But before I click that, let's scroll down a little bit. Look here: connect using SSH. This button will let us connect to the server itself, not to the website, but to our server which is running that website. So if I click that, it opens another window, and this is my server. This is the private IP address of that server. And for example, if I run df -h, you will see it has 40 gig, because that was provisioned at the very beginning; it has a 40 gig SSD drive. So if you want, you can
install even more stuff, not just the website and web server. But we will need this for something else: we will need it to retrieve our password. If we go back here, or I can even scroll down, this is our public IP, because here you will see a similar instruction:
'Access default password'. If we click on that, there is a little instruction, but it's very long, and maybe unnecessarily so, because what we really need is this. Look below: somebody just pasted cat bitnami_application_password, and this was their password. So what we can do, we can just copy that last line. It's exactly this command; this is what we need. We will copy that. We will paste it here in our console, press Enter, and this is our password. So now we can paste it here, and we will be able to log in. Maybe remember... well, I'll remove this
server shortly anyways, but this is the dashboard where we can manage our website, that sample page
that they gave us. We can replace it. We
can change something. Well, if we go back, we can
see we've got one page, and it's a sample page. Here I click Edit, you know, and I put whatever, 'Mark has new website', and I will update that. So now you can see 'page updated' in the bottom left corner. If I go to my website and refresh, that sample page is now the 'Mark has new website' link, and that's it, really. This is your WordPress website, but you can choose PrestaShop or whatever you want. The thing is, the server
is already configured. The public IP already
is available, so you can access
it from anywhere, and it's probably
the easiest and fastest way there is available. Plus you have three months
for free. Can't complain. Just remember at the
end, if you do not use it or if you think you might not play with
it for a while, just remember to remove it. As always with EC2 instances, as with any other resources. You can do it even here: those three little dots here will let you delete the entire server. You can see that big red button: Delete instance. Yes, I want to delete it. Yes, delete. Okay. And now the server is gone.
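For reference, the same flow can be scripted with the Lightsail commands in the AWS CLI. This is only a sketch: the instance name, availability zone, and bundle ID are illustrative, so list the valid values with `aws lightsail get-blueprints` and `aws lightsail get-bundles` first.

```shell
# Create a WordPress instance (Linux + WordPress blueprint),
# roughly what the console wizard did for us
aws lightsail create-instance \
    --instance-names marek-wordpress \
    --availability-zone eu-west-2a \
    --blueprint-id wordpress \
    --bundle-id nano_3_0

# Get its public IP once it is running
aws lightsail get-instance --instance-name marek-wordpress \
    --query 'instance.publicIpAddress'

# On the instance itself, the WordPress admin password lives in the
# home directory (the admin user name is "user"):
#   cat ~/bitnami_application_password

# Clean up when you stop playing with it
aws lightsail delete-instance --instance-name marek-wordpress
```

The delete step matters for the same reason as in the video: the server keeps billing (or eating your free months) until it is gone.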
16. 10 AWS Route 53: In today's video, we will learn how to use a domain name service. In AWS, it's called Route 53, and we will see exactly how we can buy a domain, first of all, and how we can then set up a domain name for our website. Because, in fact, in the previous video, we created a web server with our first website, and we used the Lightsail service for that in AWS. We were connecting to our website using the IP address, like HTTP and then our IP address. That's not how we usually connect to websites. We usually type something like youtube.com or google.com rather than typing an IP address. So today, we will see exactly how we can use a name rather than the IP for our website, and we will see the entire process of how it's done in AWS. The first thing we have to do is open the service
called Route 53. We can search for that
service here; it's called, as I said, Route 53. Here: scalable DNS and domain name registration. Let's click that, and here you can purchase your domain. And what is a domain? A domain is like YouTube or Google or any other website that you usually access. The thing is, I can't buy YouTube, because it's already taken. But if I wanted to buy it, let's check what AWS will suggest, because you can see youtube.com is not available. But you can see some suggestions. You can buy, for example, one of the youtube.com variants for $13 a year. Well, the next one is more interesting; you can have that one for $13 as well. Maybe I want to buy that one. What I can do is select it here, and it takes you straight to the checkout. If I click that button, this domain is mine, and you could create a website on it. I will not buy it, because I already have some domains.
I purchased some. If you bought one, you would see it in Registered domains. So let me leave, and as you can see, I already have five of them purchased, and you can also see the expiration date, because the price is per year. But you can configure auto-renewal, so you will pay that $13 every year, and AWS will make sure that this domain still belongs to you and nobody else. You can see all those domains that you purchased here on the left side, in Registered domains. I can do the next step now
and go to Hosted zones. If you click here in the hosted zones on your domain name, you will be able to create a record. What that means is you will be able to point your IP address to the name of the domain that you just purchased. But before I do that, I actually have to create the website, because we did it in Lightsail in the previous video, but I removed it, so let's create it quickly. We remember that we just create an instance, and it can be WordPress, blah, blah, blah. I will just create the instance. What I really need is just
the public IP address. We can play with that. You can see it's already created. This is my WordPress website in AWS, and this is its public IP address. I can refresh that. If I go there, this is my public IP address, so I can just copy it, open a new tab, paste and go. So this is my website. I mean, that's the sample page that WordPress gives you to play with, and we saw the process in the previous video, but that's not important. I wanted to say we accessed it using the IP address. We can do better than that. I have that address copied already, the IP address. What I can do now, I can go to my Route 53, and now I can create a record. If I create that record, as you can see, you can keep the name blank to create a record for the root domain, which means, for automation avenue, my website would be available at automationavenue.com. Or if I want, I can create a subdomain. Let's say marek. Maybe I will create that subdomain. It adds a little dot here; the full name will be marek.automationavenue.com. Here below, I just paste that IP address; I copy it. This is the public IP of my website, and I can create records. But before I click that, I will show you other records that you can also create
here if you want. For example, quad-A (AAAA) is for an IPv6 address. If we go back here to our Lightsail, we can see we've got a public IPv4 address, but we also have a public IPv6 address. If I wanted, I could create another record here in Route 53 for the IPv6 address as well. And then I could create an MX record, for example, if I wanted to set up a mail server, and many others. I just wanted to show you that it's not the only option available, but the A record is what we need: an A record routes traffic to an IPv4 address, and it's chosen by default. So let me create that record. It takes 2 seconds, and you
can see it at the very bottom: marek.automationavenue.com. Which means what? It means that I can now go to http://marek.automationavenue.com, press Enter, and you can see it takes me to exactly the same web page. But here we accessed it using the IP address, and that one is accessed using the domain name that we purchased. I hope that makes sense, and just do not forget, when you stop playing with that, remove that Lightsail instance. And if you remove the instance, you can also remove that entry. You can go here to Route 53, click on that button, and just delete that record, because it's no longer needed. Because the next Lightsail server you create will get a different IP address anyway, so there is no need to keep it.
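The record we clicked together in the console can also be created from the CLI. This is a sketch: the hosted zone ID, subdomain, and IP below are placeholders standing in for the ones from this demo.

```shell
# UPSERT creates the A record, or updates it if it already exists
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "marek.automationavenue.com",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{"Value": "203.0.113.25"}]
        }
      }]
    }'

# Verify that the name now resolves to the Lightsail IP
dig +short marek.automationavenue.com
```

Deleting the record when the server is gone is the same call with "Action": "DELETE" and the identical record set.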
17. 11 AWS CentOS 9 and AMI subscription: Today, I wanted to quickly show how to create a CentOS instance in AWS Cloud. As you might already know, most previous CentOS versions, like version 7 or version 8, are already obsolete or will soon become end of life. So many people will want to move to CentOS 9, which is much newer and will be supported till around mid 2027, I think. But the installation process is exactly the same for CentOS 8 and CentOS 9, so that's why I mentioned them both in the title. So, okay, let's create that CentOS server then. So okay, we are in the AWS console. We go to EC2. As you can see, I don't have any instances running, so we can click on Launch instance. And we can call it centos maybe. We then scroll to
the AMI portion, and this Quick Start selection is usually where you find something you are interested in. But not in this case; as you can see, there is no CentOS operating system. What we have to do here is click that Browse more AMIs. If we click on that, it might take a while. Oh, as you can see, now it displays 9,373 images, which is a lot, but I notice sometimes it takes a while until it's displayed. And as you can see, it says AWS and trusted third parties' AMIs. So you have to switch from Quick Start AMIs to that AWS Marketplace. It's a huge list of many different AMIs, but we are interested in CentOS, so we search for CentOS. And when I press Enter, my guess, similar to Amazon's, is that you will be interested in one of the top ones. And I'm not going to dwell on what Stream is exactly, but I guess that's probably what you want to install. What is very important, though, is that it's CentOS Stream 9 from Amazon Web Services, because as you can see, below, you have CentOS Stream 9 as well, but it's from a third party called Hand Way Software Technology. Or if you scroll even further, you've got one 'with support by Supported Images', and the company is actually called Supported Images. So what's the difference then? The difference is
very important. Once you are in marketplace, you have to subscribe
to those AMIs. So if I select CentOS Stream 9 from Amazon Web Services, if I click Select, and if you go to pricing, you have an overview here. It says typical total price $0.21, but that includes the price of the server itself, a t3.small. So it's better shown in the Pricing tab. If you click the Pricing tab, you see that CentOS itself doesn't cost you anything; that 20-something cents comes from the t3.small server. But we already know we can use the Free Tier if we still have the Free Tier valid. We can use a server that is within that free tier, and we don't have to pay anything for that. If I pick this CentOS operating system from AWS, and if I go for one of those micro instances that are included within the free tier, then it will not cost me anything. But if we go back,
if I cancel that, if we go back to any of the
other ones like this one, and then we go to pricing again. Now you can see you not only have to pay
for the instance, but you also pay for the
operating system itself. Because you get a support from that supported
images company, you have to pay
for that support. That's a very important difference. As you can see, I have to
subscribe to that AMI. What that means is that I agree to everything
that is said here. You have to read that, the product details, et cetera, and especially the pricing, the most important information you are probably interested in, because I, for example, want to use one that I don't have to pay anything for. Usually those from Amazon Web Services will be free, but you always have to read that pricing. The last important bit is the architecture. This is x86, which is for Intel or AMD processors. You also have, if you scroll down, you can see here CentOS Stream 8, but for Arm architecture. So this would be for those Graviton instances from AWS. Okay, so let's go back to the very first one, CentOS 9 for x86 architecture. I select this. I get familiar with all this
stuff that is here. Pricing is zero; that's the most important thing for me. I can subscribe to that AMI now. I click Subscribe now. It should take me back. As you can see, it took me back to the previous page, and I can continue creating my server. Now I simply scroll further. I will pick the instance that is free tier eligible, because this is a new account, so I still have the free tier. We created this account in one of the previous videos. That means the operating system is free, and the t2.micro is free for a year for me, 750 hours every month. Key pair login: I can pick the one we created, and everything else I can just leave as it is, and launch the instance. But when you click Launch instance, let me click now, you might see, well, there is even information: it usually takes 30 seconds, but it can take up to an hour. That's because of that subscription. When you subscribe to that AMI, an agreement has to be created in the background, and an email will be sent to your email address stating that you created a subscription to that AMI. And as you can see, well, okay, so this one was pretty quick, but as I said, sometimes
it might take much longer. It depends on which AMI you choose. Usually, when you choose a third-party AMI, it takes much longer, whereas if you choose one supported by Amazon, it's usually quick. Now we can click
on our instance. We've got some details, still initializing. Let's give it a few seconds. So yes, that should be our CentOS 9 up and running. And if we picked that second image from the top, it would be CentOS 8 up and running. So as you can see, the procedure is exactly the same for both. I can just now click Connect, and maybe not here; maybe we use an SSH client. So I can just copy this, go to my terminal, and paste it. And it should take me... well, I have to confirm. Yes, yes, I know this is my server, and I'm logged onto my server. If I run cat, cat not car, let's see, cat /etc/os-release, I can see it's indeed CentOS Stream in version 9, and that's it. That's really all I wanted to show you today. Just do not forget, once we are done, let's go back to instances and remove it, so it doesn't take our free tier hours unnecessarily.
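If you prefer the CLI, the same search-then-launch flow looks roughly like this. It's a sketch: the AMI ID, instance ID, and key name are placeholders, and the one-time Marketplace subscription itself still has to be accepted (in the console or via the Marketplace) before run-instances will succeed.

```shell
# Find CentOS Stream 9 images published through the AWS Marketplace
aws ec2 describe-images \
    --owners aws-marketplace \
    --filters "Name=name,Values=CentOS Stream 9*" "Name=architecture,Values=x86_64" \
    --query 'Images[].{Id:ImageId,Name:Name}'

# Launch a free-tier-sized instance from the chosen AMI
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name marek_key

# Once logged in, confirm the OS:
#   cat /etc/os-release

# And terminate when done, so it doesn't eat free tier hours
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```

The architecture filter matters for the same reason as in the console: x86_64 for Intel/AMD instances, arm64 for the Graviton ones.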
18. 12 AWS WorkMail: Today, I want to quickly present how to set up email hosting and how to create your own mailbox using the AWS WorkMail service. For example, I bought the domain automationavenue.com, and I created my own inbox and user. My email is now marek@automationavenue.com. Maybe you have, I don't know, a smith.com domain, and you want to set up an email like john@smith.com, and you are not sure how to do it. I will show you probably the easiest possible way in this video. Let's go in AWS to Route 53, to Registered domains, and you can see that I've got automation avenue already bought. I've also got another one, dentalnurse.co.uk. This is one of my many abandoned projects that we can use for this purpose. So as I said, we will use dentalnurse.co.uk, and maybe I want the email marek@dentalnurse.co.uk. But before we do anything, let me just maybe duplicate this tab, and we will
go to hosted Zones. I wanted to show
you something else. We go to Hosted zones, to dentalnurse.co.uk; these are the two entries that Amazon sets up by default. I mean, I simply didn't even touch this domain. This is what it will look like when you buy a domain in AWS. You need that start of authority record, or whatever it's called. This will already be there, configured for you. But because this will change later on, I wanted to show you what it looks like now. Let's again duplicate maybe this tab, because the service we
need is called WorkMail. So I will search for WorkMail, and we click Amazon WorkMail. This is the one that we're interested in. Amazon WorkMail is currently only supported in three regions, at least for now, so you can choose the closest one to you. For me, it will be Ireland. And here is the automation avenue organization where my email is hosted. But we will use that dental nurse domain; we will create a new organization. We've got the button here on the top right. I will click that Create organization, and we will use an existing Route 53 domain, and I will pick my domain, which is dentalnurse.co.uk. For the alias, you probably want to put the same, I mean, the same name as your root domain. So dentalnurse. Whatever you put here, it will be part of the URL we will use later on. It makes sense keeping it all the same, but that's your choice. We will create the organization. And the state is 'requested', but it only takes a minute... it's already creating, and active. It took not even a minute. Let's now click that dentalnurse link. And one thing to note here is this Amazon WorkMail
web application. This is the link you can use to access your email
using the browser. But we will not click it yet
because we are not done yet. We don't have any users, and we don't have our domain linked to it yet either. So I will first jump to domains, here on the left; click Domains. I can click here on that dentalnurse.co.uk. This is the domain I want to use for the email, because, sorry, just to recap, if I go back: AWS will create another domain for you, dentalnurse.awsapps.com. That's something you will be using to access your email, but that's not the email address you want to use. I want to create marek@dentalnurse.co.uk. That's why I click this domain. And now we have to complete this step, because AWS will check if this domain matches all the configuration that is needed, and you usually have some stuff missing. I'm a bit surprised, because this is already green, like the WorkMail configuration. Sometimes you will have missing bits here and there, you know, in various places. For me, it looks like only these two entries are missing. But anyway, it doesn't matter what is displayed here. You simply click Update all in Route 53. Once you click that, all the stuff that needs to be done will be done automatically for you. AWS will take care of that, and it even displayed 'automatically configured the domain'. So if I go back now to this tab and refresh it, you will see many more entries now in Route 53 for this domain, dentalnurse.co.uk. This is all the stuff that AWS added so you are able to use this domain for your email hosting. But the fact is, you don't really have to be concerned with what it all is; it doesn't really matter. We'll just go back,
to do now is go to users and just create
a user for our inbox. You've got Add user,
and I will use Marek. But bear in mind, this is the user name you will use when accessing your email using the browser. It doesn't have to match the email address. For me, it matches, but it can be different. This is simply the user name, and the password, let me put the password; you can create a password for that user. You know what? I changed my mind. I will use 'Marek User' just to distinguish these two, so you will clearly see the difference between them. And here in the email address, I will change it back to marek. So we've got user name 'marekuser', but for the email address we need marek. And the display name will be Marek again, but with a capital letter. So they are all different. And now the last thing you want to change: you want to really use your own domain, not the one that AWS created for you. So I want to use dentalnurse.co.uk, and this will be my email address. Now, just add this user. And we've got our primary email address, marek@dentalnurse.co.uk. That's our email, already created and ready. And we can access it by going back to organizations and, well, we have to get there. This is the URL we want to use, so I will just click on it. And this is the user name. So it was 'marekuser', remember, not marek, and the password I
created for that user. Once I click Sign in, this is our inbox. And here in the top right corner, we can see the display user. That was the one with the capital letter, remember? So the user name, display name, and email address are kind of like three different things. But never mind, let me send an email to that email address now. I will just use my phone, one sec: compose, to marek@dentalnurse.co.uk, subject 'first Email', and for the body, well, 'first email test' maybe. Now just send, and let's give it a few seconds. Maybe I'll refresh. Oh, it just showed up. You can not only receive emails, this is the first email, but you can also respond to them from marek@dentalnurse.co.uk.
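For completeness, WorkMail is scriptable too. This is only a rough sketch based on the `aws workmail` commands as I recall them, so treat the flags as assumptions and check `aws workmail help` before relying on them; the organization ID, entity ID, and names below are placeholders.

```shell
# Create the organization (the console wizard wrapped this step)
aws workmail create-organization --alias dentalnurse

# Create a mailbox user in it (IDs are placeholders)...
aws workmail create-user \
    --organization-id m-0123456789abcdef0 \
    --name marekuser \
    --display-name Marek \
    --password 'S0me-Str0ng-Pass!'

# ...and enable it for mail with the address we actually want
aws workmail register-to-work-mail \
    --organization-id m-0123456789abcdef0 \
    --entity-id 11111111-2222-3333-4444-555555555555 \
    --email marek@dentalnurse.co.uk
```

Note the same three-way split as in the console: the user name, the display name, and the email address are set independently.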
19. 13 AWS mount S3 bucket to server: You might already be familiar with the Amazon S3 service. It's a service in AWS Cloud that allows you to keep an unlimited amount of data. You can create a so-called bucket there to store the data, and then you can also mount that S3 bucket to a server, basically creating unlimited disk space for it. So how do you do that? It's easier than you think. First, your server needs to be able to communicate with your AWS Cloud. Using aws configure is one of the options. Let me open my terminal. This is my server; this is my local server at home, and we can type aws configure. Now you can use the credentials for the IAM user, and how to do that exactly, we discussed in one of the previous materials. So that is done, and then I can use the aws s3 ls command to make sure that my server can see the S3 bucket I've got in AWS. This is what my server displayed, and that's indeed my S3 bucket within AWS. Once you have all those
prerequisites working, you can then go to Google and search for Mountpoint for Amazon S3. We need that link to GitHub; for me it's the first link available. So I will click that, and we can scroll down to search for the instructions. My server is an Ubuntu server, so I can use those commands. However, I have got the arm64 version. So as per the instructions, I have to replace this bit with arm64. Let me copy that, but we'll paste it first into a text editor, and I will replace this bit with arm64. Now I can copy both of them. And in theory, I could paste them right now, but it's always a good idea to run the update and upgrade commands first. So apt-get update and apt-get upgrade. Once that's done, let me clear that maybe; I can paste those commands and press Enter. Once that's done, I should have the mount-s3 command available. I can check it by simply typing it. It gives me an error, but that's fine; the command is working. It just says it requires some arguments, like the bucket name and a directory. Let me clear again and we move further. We check the instructions, and it says we need the mount-s3 command, then the name of our S3 bucket within AWS, and then a path to any directory that we want. Let's do that then. We
go back to the terminal, and let's create that directory first, the last bit we need. So we are in the root directory. We can use the media folder; if we cd to /media, we've got something there already, I don't even know what it is. I will create a new folder then. I will create a folder called marek. And now I can use that
folder as my mount point. Let's maybe copy this and we will replace
the items we need. The path will be /media/marek, and the bucket is this. That's my command.
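The mount step itself can be sketched like this (my-example-bucket stands in for your actual bucket name; the directory is the one created above):

```bash
aws s3 ls                        # confirm the server can see the bucket
sudo mkdir -p /media/marek       # create the mount point

# mount-s3 <bucket-name> <directory>
mount-s3 my-example-bucket /media/marek

ls /media/marek                  # the bucket's objects now appear as files

# When you no longer need it, detach it like any other filesystem
sudo umount /media/marek
```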
My S3 bucket is mounted at /media/marek, which means we can go there now, to /media/marek. It should be empty, and it is, and so should be my S3 bucket within AWS. Let's get there. You can
see there are no objects. We'll go back to my server
and we will create a file. Let's touch testfile.txt. Now I should be able to see it here. But I should also be able to see it in my S3 bucket. I just need to refresh it. And there it is. I will create yet another one, called testfile2.txt. Now we have two files, and in S3 we should also have two files. But if I remove them now in S3, they have now been deleted, so they should also be gone from my server. And if I type this command again, these files have indeed disappeared. Thanks for watching.
20. 14 AWS Terraform: We will have a look
at what Terraform is and what we can use it for. Terraform is possibly still the most popular infrastructure-as-code tool. Infrastructure as code means that this tool can be used for infrastructure provisioning, mainly in the cloud, but it's not limited to the cloud. Then you can keep
your infrastructure in the form of a code, which is very handy
because you can copy it, you can version it, you can
store it remotely, et cetera. It's also declarative,
which means you declare what the end
result should look like, and terraform will figure out itself what has to be
done to achieve that. But I don't want to
bore you with theory. It's easier to show exactly
how terraform works by simply installing it and creating something in the cloud. Let's start with
terraform installation. I will Google something like
how to install terraform. Terraform is a tool from HashiCorp; let's click on that first link. Let's scroll down, and these are the instructions on how to do that depending on what operating system you have. So I've got Linux, so I will click on that, and I'm on Ubuntu, so I can use those commands. Let's just copy this one first, open the terminal, and we'll just paste it here. And the next command installs the GPG key. Let's paste it as well. That was quick. Verifying is optional, I believe. I will click this one now. I believe it's the last one; it installs Terraform. Just to double check that there is nothing else: with terraform help, we can verify if the
terraform is installed. You can see the
available arguments. That means the terraform
was installed correctly. Let me clear maybe. That's fine. But this is terraform installed
locally on our laptop. However, it has to be able
to connect to our AWS Cloud, or at least in my example, it will be AWS Cloud. So I have to connect it somehow. And in terraform language, this AWS is called provider.
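Written down, this provider idea is just a small block of HCL. A minimal sketch, assuming the standard HashiCorp registry source and the London region used later in this video:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# The provider block tells Terraform which infrastructure platform to talk to
provider "aws" {
  region = "eu-west-2" # London
}
```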
So this is my cloud. This is my AWS Cloud. This is the infrastructure
provider for terraform. This is where we want to
simply build something. It can be your on-prem data center, or maybe you have an
other cloud like Azure, GCP, digital Ocean, or maybe something
else. I don't know. But because my provider is AWS, let's Google something like Terraform AWS provider
configuration. Let's check that first link, and that's exactly what I need. AWS provider. Let's scroll down. This documentation
will always give you some examples, which
is really handy. This is the example of
the code we can use. But let me scroll a
little bit further because I need also information, how do I connect to
AWS first of all. Here we've got provider
configuration. Credentials can be
provided by adding the access key and secret
key, and you know what? Maybe we can use that.
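For illustration, hard-coding the keys the way the documentation example shows would look roughly like this; the values below are obvious placeholders. Note that the provider can also pick up the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables instead, which keeps secrets out of the file:

```hcl
provider "aws" {
  region     = "eu-west-2"
  access_key = "AKIAXXXXXXXXXXXXXXXX"                     # placeholder, never commit real keys
  secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # placeholder
}
```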
What I will do now, I will go back to my terminal. I will create new folder so we can keep our
code in this folder. I will CD to this newly
created folder now and we can start creating our infrastructure
using simple text file. I will use VIM, and
then I can call it whatever I want,
maybe my server. But I have to give
it extension dot TF. This way, Terraform will know it has to read this
file and it can build the infrastructure based on information inside that
file. So let's click Enter. And we need that first
portion that was here. We will copy that. I
don't need the last bit; I just need the provider bit. Paste it here, but I will also change this region, because in AWS I usually build everything in London. As you can see, there are no instances, there are no servers, nothing here, but my region is usually London, which is eu-west-2. So let me change that to eu-west-2. That's it. Then
what we will need is that portion
with credentials, so we are able to connect to our AWS cloud in
the first place. You can see it's repeated information: the AWS region now appears in the provider block twice, so let me just remove that duplicate entry. And we need our access
key and secret key. I can't remember if I have
one, but it's not a problem. We'll just go back; this may vary from provider to provider, but here is how it's done in AWS. I can simply go to IAM, that's the service in AWS. I can go to Users, then administrator; that's who I am right now. So I will click that, then Security credentials, and then Access keys. I will create a new access key, and I will need the first option, Command Line Interface, and I will create the new keys: Create access key.
what I need for our terraform to be able to
connect to our AWS Cloud. Access key is this. I will
copy it, paste it there. And the secret key is that. I will copy it. By the way, it's very important you do
not show them to anybody. I mean, I am showing them, but they will be removed
once the video is released. Actually, I will
remove them right after I finish the recording, but it's important that
you keep them safe. Whoever has those keys will be able to access your AWS cloud. So you have to keep it secure. But for these purposes,
I will just copy them. And that's it. That's our
terraform configured. But that's only
the configuration so terraform can
connect to our cloud. But what if I want to build
something in that cloud? We were in that EC two tab. This is the area where
I can build my servers. As you can see, I don't have
any instances right now, but what if I want to build
a server using terraform? To do that, we can simply
go back to Google and search for something like
Terraform, create AWS instance. Because server is
called instance in AWS. Let's click on that
first link, and again, you have some examples how
to build AWS instance. Let's see what we have here. You will have multiple
examples, again, depending on exactly how
you want to build it. But let's go back to the top and maybe use the
very first example. And what I can see what it does, it has two portions. This first portion searches for the newest Ubuntu available because AMI is Amazon
Machine image. It's simply an operating
system for our server. Or sometimes it can be operating system and some
programs pre installed. But in this example, it
will just search for the latest Ubuntu
image for our server. And then it will use it here in this line. You know
what this is fine. This example will work fine. So we'll just copy
it all, and we'll paste it into our terraform code. That's basically it. Let
me just save it now. So if I will now
preview that file, we now have all
information that we need. This portion will tell Terraform what we
are working with. We are telling Terraform,
we are working with AWS as our provider. This is the information
how Terraform can access that AWS Cloud, and this is what we
are actually building. This is the AWS instance. We are building our
virtual server. Let's go back to AWS
to double check, there are no matching
instances means I have no instances
running right now. We'll go back here to
Terraform, and first I have to run the command terraform init. And this command will pull all the provider plugins that it needs to be able to work
with AWS as a provider. Because even though you install terraform, it doesn't have all the provider plugins installed, because that wouldn't make much sense. Terraform can work with many providers, and there is no point in installing all the plugins at the same time during the installation. It will simply pull them at this stage, just for the relevant provider. In our case, AWS.
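The basic workflow used throughout the rest of this video boils down to three commands, run in the folder containing the .tf file:

```bash
terraform init    # downloads the provider plugins (here: the AWS provider)
terraform plan    # shows what would be created, changed, or destroyed
terraform apply   # executes the plan after you confirm with 'yes'
```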
But that's fine. This is done. So the next command we need is terraform plan. And what terraform plan does: it really tells you
what is going to be built based on the
information in our file. So it found the AMI,
that machine image, the operating system
for our server, and most other
information is known after reply because we didn't
give it much information. Instance type is the size of the server, but
that's basically it. So what we can do now, we
can run Terraform apply, and terraform apply
will actually start building this infrastructure
for us within AWS. So let's run it Terraform apply. And Terraform apply runs the plan as well because we can see the
same information again. So we can have a look. The
plan is to add one resource, which is our server,
and the resource can be anything we build
within that AWS. But this time it asks
us if everything we see above is what we really want
to build within that cloud. So the answer is yes, that's what we want to build
and click Enter again. And now the resource
is being created. So if I go to AWS
now and refresh it, you can see the server is
now, well, still creating, it says, but that resource
is being built now. I struggled to catch it; well, the thing is, it's finished already. It says creation completed
after 13 seconds. That means our server
now is up and running. If we refresh it. Well,
it's still initializing, but basically it
created the server. That's the public IPv4 address of our server. Oh, I didn't even notice
it has a name Hello World. That must be somewhere
in our code then. Let's go back to our code: vim my_server.tf.
Let's have a look. Oh, yes, there it is.
Name hello world. But we have built
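So the whole my_server.tf at this point is essentially the two registry examples pasted together. A sketch of roughly what it contains (the Ubuntu AMI filter and the Canonical owner ID come from the provider documentation example; the credentials discussed earlier would also go in the provider block):

```hcl
provider "aws" {
  region = "eu-west-2"
}

# Find the most recent Ubuntu image published by Canonical
data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

# The virtual server itself
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    Name = "HelloWorld"
  }
}
```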
our server, yes. I want to show what that
declarative really means; that terraform takes a declarative approach. Let me now run this
command again. So I will just up
arrow and I will run terraform apply. Let's
see what happens. Click enter. Have a look
at that. No changes. Your infrastructure
matches the configuration. So Terraform didn't run the plan. It didn't ask us if we
want to create anything. It already knows what
we have in our file is exactly what we have
in our cloud as well. But how does it know it? If we go back to the terminal and we check
the available files, Okay, I didn't show what
it looked like before. Basically, we only
had this file; we only created my_server.tf. Those files, terraform.tfstate, the lock file, and this hidden folder, were created by Terraform when we ran those terraform init, plan, and apply commands, and terraform.tfstate is the file that will have all the information about our resources in the cloud. I can run cat terraform.tfstate, but it's a very long output. What I can do instead, I can
run Terraform show command, which displays a little bit more human friendly
version of that. It will give me all
information about my instance. For example, my
instance ID is this, and terraform made
a note of that, and you can compare
it to the Cloud. You can see it matches; it ends with 53e, and that's what terraform has in its tfstate. So it simply compares what is in the tfstate and what
currently is in the cloud. And if this
infrastructure matches, there is nothing for
terraform to do. But I want to go a little
bit further with that. Let's create another server. How we would create another
server in terraform? We can simply go
back to our code, vim my server, and I will
just copy this last portion. And let's paste it below. But to distinguish
it, I'll change the resource name,
which is this bit. We'll call it web2, and we don't have to, but we can also change the tag: hello world, maybe, underscore two. Now let's save it and run terraform apply again.
Let's see what happens. Terraform tells us, we can already see here,
hello world two, and it plans to
add one resource, which means it plans to add only one server because one
is already up and running. It only has to add second
one to match what is in the code and what is in the AWS cloud. So
we approve that. Web two is now being created, and that's it.
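For reference, the duplicated block looks roughly like this; only the resource name and the tag value differ, and it reuses the same data source and instance type as the first server (the exact tag text is whatever you typed):

```hcl
resource "aws_instance" "web2" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    Name = "HelloWorld_2"
  }
}
```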
Resources, one added. So if I refresh it, I should see hello world two this
time, second instance. And now I can run Terraform apply again, as you can guess. It will say infrastructure
matches the configuration. But let's mess up our
configuration in the cloud. Let's remove this first server. I already ticked here, I'll just say instance state, terminate instance, and
I will terminate it, but I will terminate
it in AWS console. We can see it's shutting down. So we've got the new
instance running, but the previous one is
now being terminated. Let's refresh. Now there is
only one instance running. Let's see what terraform will do now if I run terraform apply. Okay, maybe I was too
quick to actually do that because that instance
is not fully terminated. If I remove this bottom, it's still saying shutting down. But let's give it a
few seconds more. Oh, already terminated. So now, sorry if I run it
again, Terraform apply. Now it should tell
me one to add. Why? Because it noticed this
first instance, remember, that was the ID ending
with 53 E. Terraform realized that this instance
it no longer exists, so it wants to create
it back because our code says we need to
have two instances running. So I just type yes, and it
will recreate my instance. Note that the ID will be different now, because every single instance will have a different ID, but this doesn't really matter. What is important for us is that we have two running instances in the cloud. So we've got, again, one that is called hello world, and its ID is now ending with 1e3, which is what Terraform made a note of. So that's really what
declarative means. You declare something
in the file, and if anybody messes
up your infrastructure, it's not the end of
the world because you simply go back
to your terraform, you just apply that
infrastructure, and terraform will figure out what has to be
added, removed, et cetera, to bring
back the infrastructure up to this stage that you
declared in the file. Hope that makes sense, sorry. Oh, yes, and by the
way, how do you remove those instances
using terraform? Let's say you no
longer need them. So what do you do? Terraform has a command called destroy. So it's simply
terraform destroy. And if we run it, it
will create plan again, but this time it will be plan
to destroy our instances. And that's not only the instances, but also volumes and everything that comes with them. That's important to note; it's the main difference between clicking something in the GUI and doing it in terraform. Terraform does proper cleanup. It will remove not only the instance
but volumes, et cetera, everything that was built during the terraform
apply stage. We just click, yes,
we want to destroy them and we just wait, destroying those two instances. If I now refresh, I
should start seeing, first they are shutting
down, and after a while, they will be terminated. And now they are destroyed. It took a little bit longer than expected, so I fast-forwarded the video, but if we refresh it now, the state should be terminated for all of them. That's all I wanted
to say in this video. Thank you for watching.
21. 15 AWS deploy container to ECS: Today, we will deploy
containerized application to AWS Elastic Container
Service or ECS in short. We will go through
it step by step, which will include
building the Docker image. We will push that
image to AWS ECR, which is Elastic
Container registry. You can keep your images there, and then we will
create ECS cluster, ECS service, and task that
will run that container. I decided that I'm going to use only the AWS command line interface for that, and in the AWS Console, I will only show
you what is being built after we run
each AWS CLI command. I just want to show
you that you can work with AWS Cloud
in many ways. You can obviously click buttons directly in AWS
Console to build that. We also saw in previous
videos how to use Terraform, which is infrastructure
as code tool. Today, I thought maybe
we will just use AWS CLI to perform all those
tasks because why not? Can deploy any container
you wish this way, but the container I decided
to use is the RMBG container. It's an image with
Python application that lets you remove the
background of the images. Maybe instead of explaining, let me show you what this
end product will look like when we have finished
the entire process. I will simply go from any
computer in the world. Once I have it deployed, I can simply go to the browser. Let's open the browser, and I can go to the
URL of my choice. So I chose automation Avenue. This is the domain I own. So I just added that
rembg in front. The application works on port 5100. This is the application. It's running on port 5100, and what it can do: it can remove backgrounds
from any image I want. I can either drag and drop
or browse for that image. I don't have any
images here though, so let me download one quickly. Let's say elephant.
What about this one? Let's save it in downloads. So now I can go to
my application. I will browse for that file. I will pick that elephant, and I just have to click Select. After a few seconds,
as you can see, the new file was generated, but this new file has
the background removed. That's all it is. And we will build that application and deploy to ECS today
step by step. Let me destroy everything,
and by the way, don't try to access this
application on this URL because I will destroy it once I have finished working
on this video. But as you can see, it's a simple tool you can
run from anywhere, no photoshops, no time wasted. You just click on the file and the new one with no
background is generated. I've destroyed everything now, and let's start creating
it from scratch. First, I want to create a VPC. If we go to the VPC service, you will see one
VPC already there, but it's the one that AWS
creates for you by default. And as I said, I could
click here, create VPC. It would create new VPC for me, but I want to use AWS CLI only. And the only
requirement is you have to have AWS CLI configured. We talked about how
to configure AWS CLI, so I will not repeat it here. I will keep pasting
the commands, and you will see what's
happening in the background. This is the first
command I need. I want to create a VPC with a CIDR block of 10.0.0.0/16. Press Enter, and that's it. The VPC has been created. So if I refresh now
in the background, you can see a new VPC with
the prefix I requested. But it doesn't have
any name here, so we can change that. But before we do that, let
me run one more thing. I want to save this VPC
identifier as a variable, and I will call that
variable VPC_ID, and then I will copy it together with the quotation marks. Which means now, if I echo that, this terminal will simply remember
what my VPC ID is. Let me clear that maybe. So now I can run
command like that. I will create a tag for my VPC with the value Marek VPC, and I will use that variable instead of the actual VPC ID. I hope that makes sense. Let's press Enter. And now if I refresh again, this VPC now has the name Marek VPC. Now I have to
create two subnets. I have to create two because we will have to
build load balancer and the load balancer requires
at least two subnets. So that's what we will do next. This is my first command. I use that VPC_ID variable again. I will build the first subnet in zone eu-west-2a with the CIDR block of 10.0.1.0/24. So let's press Enter,
and that's it. That's my first subnet. And what I'm going to do, I
will create another variable, and I will save this subnet
ID as that variable. I will call that
variable SUBNET1_ID. Let's first go to subnets and see if it was created or not. If I refresh, you can see the new subnet, and you can see three other subnets. These are also the
default subnets that AWS creates for you, together with the default VPC. We are not interested
in that one. We are interested
only in the subnet that belongs to Marek VPC,
which is this one. And we have to
create another one. So that's clear. Now, this is the
command to create another subnet in different
availability zone. It's eu-west-2b, as you can see, and with a different prefix; it's 10.0.2.0. So let's press Enter again. Let's clear. The subnet should be created, and we can see where it is. It's a new subnet,
and I will save the subnet ID as a
variable as well. I can copy it from here. I can use this like two
rectangles icon to copy the subnet ID. And go there. I mean, you don't have to
create those variables. It's not necessary, but
the more commands we run, the more you will
see why I do that. It's much easier to
reference the subnet two ID variable rather than this long string of characters. Okay, we've got a VPC, we've got two subnets. Now we have to create an
Internet gateway. So whatever we build
in those subnets, it will be able to
access the Internet. And this is the
command: aws ec2 create-internet-gateway. Let's press Enter, and a
new gateway was created. We can see it here,
Internet gateway. And we can see it was created, but it's in detached state. But that's fine.
For the time being, I will just save
it as a variable. I will call it AGW underscore
ID, Internet gateway ID. And now we have to attach this Internet
gateway to our VPC. And this is the command, and you can start seeing
why I use those variables. We've got one here,
we've got one there, and I don't have to remember all those long digits,
those identifiers. I can just reference my
variable. So let's press Enter. And if we refresh it now, now it is attached to our VPC. And by the way, when
you create a new VPC, a new route table is also
created. Let me show you that. So that was created
when we created VPC. But what I want to do, I want to save this identifier
as another variable. It's called RTB underscore ID. And if we go back to the route
table and we check routes, we can see there
is only one route. There is local route to
the prefix of the VPC, ten dot zero dot zero slash 16. But we will need
another route to push all the traffic
that does not go there, then we will want to push that traffic to our
Internet gateway. And then the Internet
gateway will have to sort out where to push it even
further towards the Internet. So let me clear that. This
is the command I need. Create Route, Route Table ID is the variable I
just saved, RTB_ID. The destination CIDR block is 0.0.0.0/0. That means everything
else, all other traffic, and we say that it should
go to our Internet gateway, which we have saved
as IGW_ID. Let's press Enter. A return of true means it was added. So if we go back, if we refresh and click our routing
table routes, we can now see two routes. One is the local to all
resources within the VPC, and the other route is like all other traffic should
go to Internet gateway. And this is our Internet
gateway, AA six. And that's all
really we need here. The last thing I
have to create using AWS EC two command is
the security group. If we go to security groups, currently you can see there are some default ones that
I'm not interested in. I have to create my
own security group to allow only the traffic
to my application. And remember that application
was listening on port 5100. So again, let's open
terminal. That's the command. We'll create security group
called ECS_SG, with the description "security group for ECS tasks", and the VPC ID is actually a reference to my variable again. So let's press Enter,
and that's it. Security group was created. Let me just quickly save
it as another variable. Let's call it SG_ID. So if I go back and refresh, I've got a new security group, but it doesn't have
any inbound rules. You can see no
security groups found. So let's add a rule to
that security group. And this command will add a
rule to my security group, and it will say
that I'm allowing port 5100 from anywhere in the world. It's that last 0.0.0.0/0. That means from anywhere. So let's press Enter. And that rule was added, which means if we go back and refresh again: that is our inbound rule. It allows protocol TCP on port 5100 from anywhere,
and it's inbound. What we will do next, we will
build our load balancer. So if you go to service EC two, load balancers, you can see
there are no load balancers. So let's build one. This is
the command, a pretty long one. I will call it rembg-alb; REMBG means remove background, and ALB means application
load balancer, and it will be deployed
to two subnets, subnet one ID and subnet two ID, and I use my variables. But as I said, you
don't have to use them. You can use actual
subnet identifiers. Then we will add
security group that I saved as variable
SG_ID. The scheme is internet-facing,
and the type is application; it's an application load balancer. Let's just click Enter. Okay. I must have messed up something with the subnets. Let me check the history.
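For reference, the intended command looks roughly like this (the load balancer name and the shell variables are the ones used in this walkthrough; the mistake being untangled here was passing the same subnet ID twice):

```bash
aws elbv2 create-load-balancer \
  --name rembg-alb \
  --subnets "$SUBNET1_ID" "$SUBNET2_ID" \
  --security-groups "$SG_ID" \
  --scheme internet-facing \
  --type application
```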
Let me check the history. Oh, look at that. I saved subnet one ID
and subnet two ID. I actually used the same subnet. This is wrong. Subnet
two is wrong. A for A. Let me duplicate this. Let's go to VPC Service. Subnat. You can see A
A is first subnet one, second subnet ends
with E seven A. Let me copy that. Let me
save it correctly this time. That's fine. That
should do the trick. Now I will use up arrow. I will run this command again
to create load balancer. And now it was created. If we go to this tab
to load balancers, if I refresh the page, we can see it's
still in the provisioning state, but it's already there. And the load balancer will have that very long identifier. This is the load balancer ARN. Let me save that as well as a variable. We will need that. So let me just clear, and I will put ALB_ID equals that long ARN. Let's press Enter, and now
we've got load balancer. But what we also need is
something called target group, and our ECS service will be a target group for
this load balancer. So we have to create
target group first. That's the command,
create-target-group. We will call it rembg, remove background, target group. The protocol will be HTTP on port 5100, and then we will use the VPC identifier; I mean, the variable for our VPC. The target type will be IP. Let's press Enter,
and that's created. And as you can see, target group also has that very
long identifier. So let's save it as
well as a variable. I will call it TG ID. If we now refresh this page, we can see our target
group was created, and there is one
more thing we have to configure here for
our load balancer, and it's called listener. Listener is like imagine it as a little thing
service running there that listens on specific
port on that load balancer. If we go back here,
if I paste it there, you can see previously, you
can see it previously said "not associated"; that target group is not associated to any load balancer. But now, if I run this command, let's press Enter: the listener was created, and now you can see this target group shows rembg-alb as the associated load balancer. So you can imagine this as being the thing that
glues the two together. Awesome. Let me clear now. So we've got the load
balancer itself. We've got target group, and
we've got listener created. That's all we need for
our load balancer. Now we have to build a Docker
image for our application. We go to that website
and it's not my project; it's by codediodeio. This is actually the guy who has the Fireship YouTube channel, so it's Jeff Delaney, and it's his project. He was kind enough to publish it, and anyone can
download this code. Let's go to code, and I will just download
it as a Zip file. As you can see, it's
now downloaded, so let me just go to Downloads. I can still see the
elephant and the new file. But basically, this is what I'm interested in: the rembg web app. I will just unzip it. Let's press Enter. So this is the unzipped folder. I will cd into that folder, and we can see more files. One of those files
is a Dockerfile. This is the Python application, a basic Python file, and the Dockerfile is the file that we need to build
our Docker image. But let's have a look first
on that. Let's clear. You can see that
little instruction, download this to avoid
unnecessary download. So we will do that as a last step before
we build anything. I can use the wget command, then paste that link. You can see it's downloading the file for me. Let's clear again. Now you can see that new file, called u2net.onnx, that was added
to all other files. So we're ready to build
our Docker image. Let me clear again.
The command I need is sudo Docker build. T stands for tag, simply what I want it to
be called that image. I will call it RAMBg and the
last but not least is a dot. Dot means I want to use
all the resources and the Docker file that is
exactly here in my location, and I'm still in that folder, remember, the rembg web tutorial folder. It's very important
that you are here in this location to build
this Docker image. Now I just press Enter. I have to type in
the sudo password. And the Docker image
is being built. As you can see, successfully
tagged; I mean, built first, and then it was tagged as rembg-app. Let's clear that, and now we can run sudo docker images. You can see the rembg-app image
has been created. Okay, so we've got Docker Image, but we have to
push it somewhere. We have to push
it to Amazon ECR. It's like a place where you can keep all your Docker images. If we go to ECR now, you can see there
is nothing here because it says,
create a repository. Well, that's what we have to do. We create the repository first; I'll call it rembg-app. That repository was created. So if I go here now and refresh, you can see I now have a new repository called rembg-app. Now I will create
another variable, which will be my account number. This is the account number, the one finishing with four eights. I will copy it. I will add it as another variable. I will call it ACC_ID. This is my Amazon
account identifier. Now, I want to tag my Docker
image in a specific way. I will tag the image that
we've just built, rembg-app. I will follow this specific URL-like format: you've got your account ID, then dkr.ecr, and so on. I will use that to tag our image. I will click Enter, and it looks like nothing happened. But if I run this command, sudo docker images, you can see a new one. This is not a new image; this is like a tag for that image. You can see the image ID is the same as that one, but we can then push this tagged image to our
ECR. So let me clear again. I will run a very long command: aws ecr get-login-password, piped into sudo docker login. And basically, you can
find that command here; it's under "View push commands". That's basically it, right? I just copied that, but I replaced my account ID; anyway, you
can just copy it because it will be filled in
with your account number, but I used the variable
instead of this number. So let me run it. And
the important message is here, login succeeded. That means we are now ready
to push our image to ECR. That's the command,
sudo docker push. This might take a while. This image is around,
Well, I can't remember, but depending on your Internet speed,
it might take a while. Okay, so it took 7 minutes. But it's done now,
everything is pushed, and that's what you should see: a long hash, and it says latest digest. So if we go there now, to our registry, to the repository, I can see my image. And you can see it's called rembg-app, and the tag is latest. And the size is 924 MB. Perfect. So let's go
back to our terminal. I will clear. Again, I know it looks like a lot, but once you have
all those commands, you can just copy paste them and have it up and running
in no time, believe me. Now, it takes long because I talk through every single item. But anyways, there are a few more bits that
we have to configure. The first one is the IAM role. This is basically a set of permissions for our ECS service to be able to pull the image and run in the cloud. It needs to have
specific permissions. So that's what we will do next. I will cd one level up, so I'm back in my Downloads. And I will create a
new file here and it will be our trust
policy in JSON format. I will call it ecs-trust-policy.json. This policy will look like this. Don't worry, it will also be included. I can save it: colon wq, Enter. This is my policy, and I will create an IAM role
based on that policy. I will paste this command. The role will be called ecsTaskExecutionRole, and I will use this file, this ecs-trust-policy.json
file as the template. I will press Enter.
Oh, already exists. So it looks like I didn't
remove everything properly. I forgot to remove this IAM role. You will also have to run this attach-role-policy command to attach a managed service policy called AmazonECSTaskExecutionRolePolicy to that role name. I know it's a bit complicated. I don't want to dig
into that too much. I will just press Enter.
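Gathered in one place, the role setup from this step might look like the sketch below. The file and role names are the ones used in this lesson (yours may differ); the policy ARN is the standard AWS-managed one.

```shell
# Trust policy: allows ECS tasks to assume the role
cat > ecs-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role from the trust policy
aws iam create-role \
  --role-name ecsTaskExecutionRole \
  --assume-role-policy-document file://ecs-trust-policy.json

# Attach the AWS-managed execution policy (ECR image pulls, CloudWatch logs)
aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
```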
All those commands will be included, remember? But that's all regarding the identity management. Now we can go to the ECS service. So we've got the container image here. If we go to the ECS service, Elastic Container Service, we can see there are currently no clusters. It says: let's create an ECS cluster. I can run this command, ecs create-cluster, and then the cluster name; I will call it rembg-cluster. Let's press Enter. That's it. The cluster was created.
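That call, with the cluster name assumed from this lesson, is just:

```shell
aws ecs create-cluster --cluster-name rembg-cluster

# Optional sanity check: list the cluster ARNs in this region
aws ecs list-clusters
```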
So if I refresh, it says, I've got cluster, but I have no tasks running. Basically how it works, you create a cluster
which is like a main area where you will
run your services and tasks. Then you can build
services there. And then in each service, you can have multiple tasks. For us, what it will look like, I will have just one cluster. Within that one cluster, I will have just one service, and within that one service, I will have one task running. So for the time being,
we've got the cluster. But I want to create
one more thing, which is optional, but
it's regarding logging. You can enable logging for your containers, and I will do that because
if anything goes wrong, it's so much easier to have that logging than
to not have it. So let me clear that. This is the command I need. I will have to create something called a log group, and I will name it ecs-rembg-app. That will be my log group: simply a place where all those logs can go. I will press Enter. It looks like nothing was done, but if I duplicate this tab, the service where you will find it is called CloudWatch. If we go to CloudWatch, you can see Logs, and here, Log groups. And this is the log group that we've just created, ecs-rembg-app. Okay, we're
getting there. What we have to create now is something called
task definition. You can see in our Elastic Container Service it says: No tasks running. A task is where you define which container you want to run, how much CPU and memory you want to use for that container, and some other bits like logging, for example. We will add logging to our task as well, and you specify what resources will be allocated
to those tasks. So what I will do now, still in my Downloads, I will create something called a task definition. It will also be in JSON format. We will call it rembg-task-definition.json, a new file. I will paste it in. Maybe let me make it bigger for a while. There are a few things.
This is the name for this task, rembg-app-container. The network mode is awsvpc, and that's actually because we will use Fargate. Fargate means servers that we don't have to care about. Fargate is like a bunch of servers that AWS runs for us, and we can simply deploy containers to them, not worrying much about the servers themselves, because a container has to run on something. You can either run it on EC2, and then you will have to take care of those servers, of those EC2 virtual machines, or you can use Fargate and you don't have to worry about anything. Now, I will specify
what image I want to use for our container.
That's basically it, yes. A Docker image is like a static file, and a container is a running instance of that Docker image. I will allocate two gigs of memory and some CPU; the CPU value might not make much sense when you look at it. What, a thousand CPUs? No, this is just one CPU: 1024 means one virtual CPU. It's done this way
because you can have a portion of one CPU. You don't even need a whole one for some tasks; maybe you will put just 512. That will mean half of a CPU, or in other words, half of the time of that one virtual CPU will be allocated to this task. And then container port: this is the port the
application listens on. It's 5100, and the host port, it doesn't have to be the same, but I will just
keep it the same. And this is the
log configuration. The important thing here is that you use the log group that we've just created in the previous step. If you scroll down, you will see I actually added those variables. This file will not work as it is now. We will have to substitute them with my actual AWS account identifier. We will do it next.
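For reference, a minimal version of such a Fargate task definition might look like the sketch below. The names, region, and sizes mirror this walkthrough and are assumptions, and ${ACC_ID} is deliberately left as a placeholder to be substituted later.

```shell
# Quoted 'EOF' keeps ${ACC_ID} literal, so it stays a placeholder in the file
cat > rembg-task-definition.json <<'EOF'
{
  "family": "rembg-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "1024",
  "memory": "2048",
  "executionRoleArn": "arn:aws:iam::${ACC_ID}:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "rembg-app-container",
      "image": "${ACC_ID}.dkr.ecr.us-east-1.amazonaws.com/rembg-app:latest",
      "portMappings": [
        { "containerPort": 5100, "hostPort": 5100, "protocol": "tcp" }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "ecs-rembg-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
EOF
```

Note that the task-level cpu and memory must cover what the container definitions below them ask for.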
But I wanted to point out the CPU and memory. Again, it's important that these task-level values are the same or higher than the values we have here for the container definition. If you were running more containers, these values would have to be at least the sum of all the memory and CPU values you include above. If I had two of them, I would have to put 2048 and 4096. You know what? Never mind. I unnecessarily
overcomplicate things. Let's just save it: colon wq. This is simply my file, the task definition. But if we run cat, we still have those variables. So what I will do now: I will run this export ACC_ID command. This is my AWS account identifier. I will press Enter. So that variable was exported. And now I can run the environment
substitution command. I will take this file that I've just created, rembg-task-definition.json. This command will replace all those variables called ACC_ID with that value. And once that's done, it will save the result as a new JSON file called rembg-task-definition-new.json. Let me just press Enter and I will show you what I mean. This file has just been created now, the new task definition JSON. And if I cat this new JSON file, you can see I have the correct URL here,
there and there. Some of you might ask: Marika, why are you
even doing that? It's simply because you might want to use those scripts for, say, multiple accounts, or you might want to automate those tasks. Okay, it's more difficult at the beginning, but it makes much more sense later on, when you work with multiple accounts and can just change one variable. So while it might not make much sense at the beginning, it does make sense when you have to create multiple tasks in multiple AWS accounts. All right. But anyways,
this is the new file. This is the version of
the file that I can use to register my
task definition. This is the command, ecs register-task-definition, and I will use this new file as the template for that task definition. Let's press Enter. That's it. This task definition
has been created now. We are nearly there.
Let me clear that. The last command we have to
run is a pretty long one, so I divided it
using backslashes. We have to create a service. For the cluster, we will use our rembg-cluster; for the task definition, we will use that rembg-app task. That was actually under family; it wasn't name, it was called family, so we will use that family. We will use our load balancer. We will run one task, desired count one, and the launch type will be FARGATE. And in the network configuration, we say that it's possible to run it in both subnets. And we will also attach the security group that we created for port 5100 previously. So it's a pretty long one, but let's just press Enter. And that's done.
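The shape of that final command is roughly the following; the subnet, security-group, and target-group identifiers are placeholders, not the real ones from this account:

```shell
aws ecs create-service \
  --cluster rembg-cluster \
  --service-name rembg-service \
  --task-definition rembg-app \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaa111,subnet-bbb222],securityGroups=[sg-ccc333],assignPublicIp=ENABLED}' \
  --load-balancers 'targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/rembg-tg/abc123,containerName=rembg-app-container,containerPort=5100'
```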
What it means now: we can go to our ECS, we will go to our cluster, and we can see the service running. It's called rembg-service. But for the time being, it's zero out of one tasks running, and we have to wait for a while because the deployment is in progress, as we can see. So if you go to Tasks, you can see it's pending. Our task is just being created. ECS pulls the image first, it will find a server, and it will run that task. We can see, oh, I forgot to remove those two virtual CPUs, but we could actually use just one. But anyways, we just have to wait until the status changes from pending to running. We can see the desired status is running. Let me refresh that. It's already running.
this application. If I go to our load balancers, I will click our load balancer. The load balancer has a DNS name. If I copy that DNS name and paste it here, let's start with http. We don't have certificates, so we use HTTP only. I will paste here the URL of the load balancer and I will
add the port, which is 5100. We can see our application
is up and running, and I can access it from
anywhere in the world using this URL because
this is public DNS. I can resolve it from
anywhere in the world, and I can access
this load balancer from anywhere in the world. But what I want to do
as the last thing, I want to change this name, because this rembg-alb-blah-blah-blah, who would remember that? Nobody. So I've got some domains I bought, but I've got them in a different AWS account. And this is that different account. And I bought a domain called
create a new record here I can create a
record called a C Name, which says roots traffic to another domain name and
some other AWS resources. I can call it, for
example, REMBG, so that will be the address RMbg dot automation avenue.com, and I can point it to
our load balancer. So if I copy this DNS again, and I will paste it to that different account because it doesn't have to
be the same account. It doesn't really matter. I will point it to Load Balancer. So I've got the Sname pointing
to our load balancer. I create record now, even from that different
browser different account, I should be able to access it by pointing to
that domain name. That's the address.
If I press Enter, I can also access
this application. But now I've got
something easier to remember: rembg.automationavenue.com. Okay, so that's it. I will
destroy it again, though, so don't try to access it, but I hope you will be able
to build one yourself.
22. 35 AWS sshuttle: Today, I want to talk about a tool called sshuttle. sshuttle is a Python program that lets you create a transparent proxy, and you can use that transparent proxy as your free SSH VPN tunnel. sshuttle can be installed using a package manager like apt on Ubuntu or Homebrew on macOS, but because it's a Python program, it can also be installed using the pip install command. It doesn't really matter, because today I will show you step by step how to configure it from
scratch and how it works. But let's start
from the beginning. Why would you even want a VPN? There are multiple reasons
why you might want VPN. Maybe you have a laptop
or PC at home and you don't want your service provider to look into your traffic. You don't want them to know what and when you are accessing. So you can encrypt
all the traffic that is coming out
of your device, send it to some remote server, and that server will forward
it further to the Internet. And then on the way back, it will tunnel it
back to your home. The service provider
will only see the SSH tunnel between
you and remote server, but they will have
no idea what is inside it and what you
are trying to access. But the same tunnel can be used for very
a very different scenario. Maybe you are traveling to other countries, but you still want to access all your home network devices, and you can use sshuttle to do that as well. What you have to do before your journey, though, is set up port forwarding on your router for port 22, and that's it. You can then install and configure sshuttle on your laptop, and you will be able to access your entire home network as
if you were physically there. You just need one port forwarding rule on
your home router. In hacking terms, this is
called network pivoting. So you say that
you pivot through a network to access other
devices in that network. But we are not going to hack anything; in our scenario, we will use sshuttle as our free VPN connection back to our home. But just think what
you can do next. Once you have the tunnel
back to your home or any other remote server
back in your country, now you can access all
your streaming services, which usually have some
regional restrictions. You wouldn't be able to access them directly when
you are abroad. But sshuttle can do even much more than that. Let's say you got a laptop from your work. That laptop already has a VPN configured on it. That's very often the case, because your company wants your connection to be secure when you are accessing their internal network; that makes sense. But sshuttle works in a very fancy way: it can actually build an SSH tunnel on top of that already VPN-encrypted traffic, which means you can have the work VPN tunnel inside the SSH tunnel that sshuttle creates. sshuttle can redirect that entire, already encrypted connection back to your home, or whatever server you want, first. It will be securely delivered to that remote server, then the outer SSH tunnel will be stripped off, and the inside traffic will be forwarded further with only that one remaining work VPN tunnel. That means you can travel to any country you want, but your connection will be seen as coming constantly from the same location. But that's enough of the theory. Let's see how we
configure that thing. I will create a free
server in AWS Cloud. I'm not connecting back to home, I'm connecting to
some remote server. And in AWS Cloud, you can configure a server that will be free of charge for an entire year. There are also some other cloud providers; I believe Oracle has a server that you can create that's free for life. I'm not sure if that offering is still valid, you would have to have a look, but there are many ways to run a server for free. Let's use AWS. I'm in the North California region. That's where I'm going to build my server, but I am personally in the UK. I live in the UK, so California is very distant from me. But for our video, let's say I live in California and I'm traveling to the UK. Doesn't really matter.
Let's just launch an instance; it means create a virtual server. And I will call it, I don't know, sshuttle. I will scroll down. I will choose Ubuntu. Ubuntu is free tier eligible. And then I will scroll further. t2.micro also says free tier eligible. And what that means, it says on the right, free tier: you can run that server for 750 hours every month, it will auto-renew for the following month, and you can run it for a year, not paying anything, as long as you do not exceed those 750 hours. That's awesome. I will scroll further. Next is the key pair; I already uploaded my public SSH key, so I can pick it here. This is the public key from my
laptop that I have in the UK. Now let's go further to network settings, and I just have to make sure I will have a public IP address; that option is enabled. And the rule to allow SSH traffic is already automatically created for me anyway, so I don't have to change anything here. And then the storage: it's just eight gigs, but you know what, it's enough. We're not going to store anything there. We are just going to go through that server to access everything else. That's it, ready. Let's
just launch the instance. It took just a few seconds. I've got my instance up and running. This is my public IP address, and I should be able to access it from this laptop. So let's open a terminal. I should be able to SSH as the user ubuntu, that's the default one, at that IP address. I just have to confirm, yes. And that's it. I'm on that remote server. If I run the curl ifconfig.me command, we can see the public IP of that server, and if I run curl ipinfo.io, I can see this server is in San Jose in California. So now let me exit it.
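Those checks, with a placeholder address instead of the real one, look like this:

```shell
# 203.0.113.10 stands in for the instance's public IP from the EC2 console
ssh ubuntu@203.0.113.10   # "ubuntu" is the default user on Ubuntu AMIs

# Then, on the server:
curl ifconfig.me          # prints the server's public IP address
curl ipinfo.io            # shows city/region/country for that IP
```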
That's really all I need. I don't have to
configure anything else. I just need SSH connection from my laptop to
that remote server. Now I'm back on my laptop. So just to confirm, if I run the same commands again, like curl ipinfo.io, now you can see I'm indeed in England, in Great Britain, and I've got a different IP address. Hope that all makes
sense so far. Let me clear that. Maybe
I will move it here. What I have to do on my laptop first is install sshuttle. It's not a tool that is here by default. By the way, my laptop also runs Ubuntu, as you can see. I mean, it's a macOS machine, but Ubuntu runs in parallel. Never mind. I run the command sudo apt install sshuttle. Now, that's the thing: you have to be either root or have sudo privileges on the client machine, this local laptop. But you don't need root permissions on the remote server; that ubuntu user is fine. But here on the client machine, I need sudo privileges. Let me type in the password. I will confirm yes. And that's it. Let's
clear it again. Now, I forgot to mention one thing. On both the client and the remote server, you have to have Python installed, because sshuttle is a Python application. But because Ubuntu already has Python installed, I don't have to worry about that. But if you use another distribution or another system, you might need to install Python first. Bear that in mind. But anyways, what I can do now is run
the sshuttle command, then -r; that means I want to connect to a remote server. And what is that remote server? Well, it was the user ubuntu at that public IP address. So the user and IP are exactly what we used for SSH connectivity. Now, what I want to do next,
let me type it in first. These are all zeros, 0.0.0.0/0; that means I want to tunnel all the traffic. That 0.0.0.0/0 says you simply want to tunnel everything, all traffic coming out of this device. By the way, that notation can be shortened to just 0/0. This is basically the same; 0/0 is the same as the previous one. But what I wanted to say here, maybe you just want to tunnel
a portion of the traffic. Maybe you are connecting to your home, and your home subnet is, say, 10.1.2.0/24. That would mean I only want to go through the tunnel when I'm trying to reach this network. We can go even further. I can use it to only reach one IP address on
one specific port. Maybe I've got a server
there that runs on port 80 and I want
to access only that. That means only traffic to that destination will be
going over the tunnel and everything else will just go directly to the Internet
with no tunnel. But for the time
being, I just want to tunnel everything that is
sent out of this machine. Let's just stick to
that 0/0. So that covers what I want to include. But last but not least, you can use -x to exclude some IP addresses or subnets. And you want to exclude the public IP you are connecting to; I mean this one, you have to exclude the remote server's public IP address. Let me do that. I have to do that; otherwise, sshuttle will try to tunnel the tunnel that
it's trying to create. And it's not a problem
specific to SSH tunneling. If you ever configured IPSec
or GRE or any other tunnels, you probably came
across that scenario, and you know that
you need to exclude that remote IP address to prevent it from
tunneling itself. I believe I didn't
have to do it on the Mac; it somehow worked without that -x. But now we run it on Linux instances, and all of them need that parameter added to work correctly. So to be on the safe side, always do that exclusion, regardless of what system you are setting it up on. All right, let's just
press Enter now. And it tells me I'm
connected to the server. So what I can do now: let me open another session. I am on my laptop, and I will run curl ipinfo.io. I'm still on my laptop, but it shows me that I am in San Jose in California, which is not true, because I'm in England. But from wherever ipinfo.io runs its server, it looks like I'm sitting
in San Jose in California. My traffic is tunneled from
England to the US, to California. There, the outer SSH tunnel
is stripped off, and then it's forwarded
further to the Internet, and the Internet believes
that I'm in San Jose. Okay, I hope that makes sense. Now, I just wanted to show you how to stop sshuttle, because you can try Ctrl+C, but as you can see, it doesn't always work. So what I have to do is find the PID, the process ID, of that program. So in this new session, I will use the pgrep sshuttle command. And this is the process ID, so now I can use kill 15180, the process ID. Now it's killed. You can see terminated, and then I'm back at my command line. If I now run the command again, curl ipinfo.io, I am
back in England, GB. And by the way, it's possible to create a service file for sshuttle, and then you can run commands like systemctl start sshuttle, stop sshuttle, or restart sshuttle, which is a bit more graceful than just killing the PID. But that is outside of the
scope of today's video, as I wanted to keep
it short and simple. I just wanted to show
you one more thing. I just typed man sshuttle. This is simply the manual for the sshuttle utility. And you can see here, sshuttle allows you to create a VPN connection. We already know that,
and that's pretty cool. But you can also run it on a router. There are Linux-based routers, and you can forward the entire traffic of an entire subnet to the VPN. That means all the devices you have at home will use that SSH tunnel. And then subnets, we already talked about that: you can tunnel the traffic if the destination is a specific IP address and a specific port. And if you go further, you
can see loads of options, but the one you might be interested in is --dns, which forwards the DNS requests to the remote DNS server. I used only IP addresses, but maybe you have host names, so that's most probably the option you want to add to your command. Just have a look, just read
through it and you will see how much more you
can do with that tool. And the last thing: I already opened a new tab, I wanted to do it earlier, but let's just Google sshuttle. And that will give you the GitHub page. And if we scroll down
to the very bottom, you will find further
documentation and instructions like how to run
it as a service, et cetera.
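As a recap, the sshuttle usage shown in this video boils down to a few commands; the IP address here is a placeholder for your server's public IP:

```shell
sudo apt install sshuttle                      # or: pip install sshuttle

# Tunnel ALL traffic; -x excludes the server's own IP so the
# tunnel doesn't try to tunnel itself
sshuttle -r ubuntu@203.0.113.10 0/0 -x 203.0.113.10

# Tunnel only one subnet, e.g. a home network
sshuttle -r ubuntu@203.0.113.10 10.1.2.0/24

# Also push DNS queries through the tunnel
sshuttle --dns -r ubuntu@203.0.113.10 0/0 -x 203.0.113.10

# Stop it when Ctrl+C doesn't work
kill "$(pgrep -f sshuttle)"
```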