Transcripts
1. Introduction: Hi, everyone. My name
is Mariko Bukowski and I've been working
in IT for many years. Recently, I have focused on roles as a DevOps
and Cloud engineer, and I primarily work with AWS, databases and Linux servers. I have also been working with
Terraform on a daily basis, so I decided to share my
knowledge with you today. Terraform belongs
to a category of tools known as
infrastructure as code. We will learn all about it in this class. If you have no idea what Terraform is, don't worry, you are
in the right place. We will first look at how
to install Terraform, how to configure it,
and then how to use it. By the end of this
class, you will have all the knowledge required
to start building infrastructure using
Terraform, and you will also learn more about AWS Cloud and the HashiCorp Configuration Language (HCL). This is the language used to write Terraform code. Believe me, this is a
very valuable skill. You can add that
skill to your CV when you apply for IT
positions, for example. Also, I wanted to add that all of this might sound confusing, but believe me, no previous experience is required to start this class. Just one thing to point out: during the class, we will work with AWS Cloud. I will show you how to create your own AWS account and how to use Terraform with AWS, but that step is optional. It's your choice whether you want to create that AWS account or simply just watch that part. It is optional because I will also show you how to work with other infrastructure providers, so creating your own AWS account is not necessary. If you don't want to do that, it's absolutely fine. In fact, as part of the project, you will write your
own Terraform code to automatically deploy infrastructure to
an infrastructure provider of your choice. Also note that if you
are keen to learn more about DevOps and cloud
specific topics, you can visit our
Automation Avenue platform where you can explore
more about AWS Cloud, Python programming,
Terraform, Linux, Docker, and many other
IT related subjects. But I think that's enough
for the introduction, as you probably can't
wait to get started. I will see you in
our first lesson.
2. What is Terraform and IaC (Infrastructure as Code): Let's start with what Terraform actually is, how it can help us, and what problems it solves. As I said, Terraform is an infrastructure as code tool. That means we can create and maintain our infrastructure in the form of code. This means I can write a piece of code and that code will create a bit of infrastructure for me. That bit of infrastructure can be, for example, a single server in AWS Cloud. It can be maybe one virtual server on my Proxmox host. It can be a cloud load balancer, or it can also be a complex piece of code that will create an entire production
infrastructure that might include
hundreds of servers. So regardless of what
I want to build, I can still use Terraform for all of those purposes. Terraform has a very wide range of applications. It's most commonly used with cloud solutions such as AWS, GCP, Azure, DigitalOcean, and others, but it can work with nearly
anything you can imagine. If you check, let me
open the browser maybe, and let's search for
Terraform providers. I will pick that first link at the very top, browse providers, and you will see here an endless number of so-called providers. You can see AWS, Kubernetes, Azure, Google Cloud Platform. They are at the very
top because these are the most popular places where you would work
with Terraform. But if you scroll
down, have a look. This is just the very first page. Check those partner providers. This is just the letter A. Have a look, we've got 95 pages of partner providers. So now let's say I want to work with Proxmox maybe, yes. We've got Alibaba Cloud Stack; I didn't even know something like that exists. Never mind. Proxmox, yes, maybe I want to work with Proxmox. I can search here for Proxmox and you will see there is not one, but several providers. Loads and loads of providers just for Proxmox. Any of these providers will let me work with my Proxmox server. But let me go back. What
are those providers at all? So this provider is
like a bridge that lets me use Terraform with
that specific product. For example, if I want
Terraform to work with AWS, I will need to include this
AWS provider in my code, and then Terraform will download all necessary tools, it will do some magic in the background, it will install everything it needs to work with AWS Cloud. And in this course,
we will use Terraform to deploy infrastructure
in AWS Cloud. It doesn't mean this course
is somehow AWS specific. This is a Terraform course, not an AWS course, and at
the end of this training, you will be able to work with all those providers because
we are going to learn Terraform concepts that can be applied to any provider you want to work with. Don't think that you will need another Terraform training if you want to work with, for example, Azure or GCP or whatever. The fact that we are going to work with AWS today doesn't really matter, because the same rules apply to any provider you want. You simply will be able
to work with all of them. I only chose AWS because AWS is the most popular cloud
provider and Terraform is the most popular
infrastructure as code tool. I think it's a perfect combo. But now, what does it actually mean to program the infrastructure or to have infrastructure as code? It simply means that instead of manually clicking on some icons in the AWS console to build something in the cloud, we can do it all in Terraform and keep it in the form of code. Because Terraform is like a programming language. You can write that program, that code, in Terraform using a language called HCL, the HashiCorp Configuration Language. Once you have that
program completed, you can run it and it will deploy all the
resources for you. Someone might ask, Mark, but I can build that
infrastructure manually. I can build it
directly, for example, in AWS by clicking on some icons and filling
in the fields required. Why should I even learn about something like Terraform? Creating infrastructure manually in the cloud is indeed possible. However, it has many drawbacks. The biggest problem with such a manual approach is that someone creates this infrastructure manually. For example, I create a server, and the next day, someone completely different
will start tinkering with this server and will
start making changes to it. Now the problem is that we
don't know who changed it and what has been changed
because we don't have the ability to
track all those changes. For example, let's say I created that server in the cloud
with a 500 GB disk. A bit later, there was an issue with disk space and someone increased it to 700 GB. The problem now is that if something happens to that server and I have to rebuild it based on maybe, I don't know, some personal notes, then I will rebuild it again with a 500 GB disk, because I might not even know about the fact that somebody increased that disk space to 700 GB in the meantime. So we have no trace of who, where, and what was modified, and it would be quite difficult to keep all such information in one place by just using some notes. However, if we have that infrastructure in the form of a program, then we can program, or hard code, that information, so everybody will know that the disk on this particular server should have 700 GB. If they have to increase
it even further, they will change that
value in that code. And they will share that piece of code with the entire team, so everybody will be aware of that. And if I ever need to rebuild that server, I will not click any buttons in AWS. I will deploy that server using Terraform, using the same file in which those 700 GB, or whatever the current size is supposed to be, are specified. This size is programmed, is hard coded, in
my terraform code. So I don't even have to
think about what disk, what instance type, what
IP address to assign, and so on, because I
already have everything clearly presented in
the form of code. I can upload that code to a remote repository like GitLab or GitHub. My team can pull that code
from that remote repository anytime they want to work with that server or if they want
to make any changes to it. But this is just one example. In reality, keeping
all our infrastructure in the form of a program
has many more advantages. For example, it will be
very easy for us to create, let's say, two copies of exactly
the same infrastructure. If I need one for testing purposes and one for a production environment, I can just run that entire code in, let's say, a completely different AWS account, and the infrastructure created will look exactly the same in both accounts.
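To make that disk-size example concrete, here is a minimal sketch of what such hard-coded infrastructure code can look like in HCL. This is not code from the course itself; the AMI ID and names are hypothetical placeholders:

```hcl
# Hypothetical sketch: the disk size lives in the code, not in someone's private notes.
resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  root_block_device {
    volume_size = 700   # GB - anyone rebuilding this server sees the current size
    volume_type = "gp3"
  }

  tags = {
    Name = "app-server"
  }
}
```

If the disk ever needs to grow again, the team changes volume_size in this file, shares it, and everyone rebuilds the server from the same source of truth.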
3. Terraform installation process on Linux and Mac: To work with Terraform, we need to install
it first, yes. I will show you how to
install it on macOS, Linux, and Windows machines. Let's start with Linux maybe, because on both Linux and macOS the installation is trivial and the process for both is very similar. So in my Linux, let me minimize. This is Ubuntu. Let me open the terminal, and I will first type terraform --version. As you can see, the output says command 'terraform' not found, but can be installed with sudo snap install terraform. That's true. You can just run that command and the job is done. But I'm not the biggest
fan of Snap packages, though, so let me show you
another way you can do that. Let's open the browser again, and I will search for terraform.io. So that's terraform.io, and I will click that
download button. And you will see not only
macOS, Windows, and Linux, but also FreeBSD, OpenBSD, Solaris, et cetera. But I'm interested
in Linux now and because every Linux can have
different package managers, you have to choose
your version of Linux, whatever your
distribution is, yes. For example, for me,
it's Ubuntu/Debian. I can just click that little icon here. It will copy everything for me, all those commands. So let me go back to the terminal, and I just paste it. That's it. Now click Enter. Now, here
you might wait for a while. Some stuff has to be
downloaded in the background, so that will depend on
your Internet speed. Now it asks for the sudo password, so let me provide it, and now it will finish the process. That's done, which means if I use the up arrow and run this command again, terraform --version, now I should have, let's see. That's what I was looking for. Now it shows me the version of Terraform, which means Terraform is
successfully installed. But in fact, if we go back there to terraform.io, we can also use the macOS version. If we go to operating systems and macOS, we can see we can install Terraform using the Homebrew package manager. In fact, if I go to Linux even, you can see Homebrew is here as well. If I click that, I basically get the same information, whether I go to Linux Homebrew or choose the macOS operating system. Again, I can copy that. Now let me open my Mac terminal. This is a different machine. This is my Mac, but I should be able to just paste it here. Yes, I just paste it and press Enter. Right. It might take
a while as well. It's doing something
in the background. It's auto-updating Homebrew. Now it downloads whatever it needs to download, and now it should be completed. If I run here, maybe let me clear that. If I run terraform --version, I will also get the Terraform version. That means Terraform is installed correctly. For Windows though, it's a little bit of a different story and I would have to jump on another virtual machine where I have Windows installed, so let me do that.
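For reference, at the time of writing the commands copied from the Terraform download page look roughly like this for Ubuntu/Debian and for Homebrew; always copy the current versions from the site itself, as the repository details can change:

```bash
# Ubuntu/Debian: add the HashiCorp apt repository, then install Terraform
wget -O- https://apt.releases.hashicorp.com/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
  https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

# macOS (or Linux) with Homebrew
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

# Verify on either system
terraform --version
```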
4. Terraform installation process on Windows: Okay, now I'm on my
Windows machine, so I will open Firefox and type terraform.io in Google Search. I will go to Downloads and then to the Windows tab. Within that Windows tab, I want the 64-bit version of this program downloaded.
So I just click this. It starts downloading, and now if I show it in the folder, we can see it's a zipped file, a compressed file. So I can right-click on it and say Extract All to uncompress it. Click Extract. Now if we go back to Downloads, you can see the compressed version and the uncompressed version. If I open this, I can see one file. It's called terraform and it's not an installer. It's a .exe file, which means there is nothing to install. Clicking this file will execute it, meaning I will simply run Terraform by clicking this file. If I click View and filename extensions, you can see it's terraform.exe, an executable file. Yes. But to work easily with that file from the
command line interface, I suggest we do one more thing. Let's first copy this file somewhere on this C drive. Let me copy it. I can now close, close. Let's close everything. I will go to C and you might want to paste it somewhere in Program Files, but to clearly show you what we are doing here, maybe I will just create a new folder right here. I will create a new folder and I will call it terraform. And in that terraform folder, I will have that file, terraform.exe. My path to this file is C:\terraform\terraform.exe. Now, once we have that file in this location, we call it the path. It's C:\terraform\terraform.exe. That C:\terraform is our path to this file. Maybe let me copy this path, C:\terraform. I will just copy that. I will minimize this window, and I will search for
environment variables. We can see the best match, Edit the system environment variables in Control Panel. That's what I need. That's it. And now I click that Environment Variables button. Here I need the Path variable, and I click Edit. And we can see we already have some entries, but I want to add another one. I say New, and I will just paste what I copied, which was that C:\terraform. And I say OK, so I can click OK here and OK there, because that's all I want to add. That means now if I open a terminal, maybe I will open PowerShell, Windows PowerShell, and if I run terraform --version, I can see I have
Terraform installed, because it shows me the version of Terraform, and that's because our Windows system knows where to find the file, the terraform.exe file, because we've just added it to the Path environment variable. And that's all there is for Windows. You can start working with Terraform on Windows from now on.
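As a quick recap of the Windows setup, this is roughly what the layout and the verification step look like; the C:\terraform folder is just the example location chosen in this lesson, not a requirement:

```powershell
# Expected layout after copying the extracted file:
#   C:\terraform\terraform.exe
# with C:\terraform added to the Path environment variable.

# Verify from a new PowerShell window:
terraform --version
```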
5. Create infrastructure... but where?: We have terraform now
installed and we are ready to start creating the
infrastructure as code. But where can we create that infrastructure? As I previously mentioned,
infrastructure in AWS Cloud, but any resource or any location that has terraform
provider will work for us. I will focus on
AWS Cloud because it's the most popular cloud
provider in the world, so it makes perfect sense
to work with that cloud. If you want to
follow and practice all steps shown in
this training though, you need to set up your own AWS account and start deploying your
infrastructure there. In the next few sections, we will do exactly that. We will go through AWS account process creation
and configuration, and we will see
how we can connect our local Terraform to
that remote AWS Cloud, to that AWS account that
we are going to create. If you already have
an AWS account, or maybe you want to use Terraform with a completely different provider, then you might want to skip some of the initial videos here where we create that AWS account, and start from the one where we set up AWS users and groups, because that should give you a clear idea of how you can use that information with whatever your chosen provider is. I hope that makes sense. Let's start creating and preparing our AWS account then.
6. Create AWS account and log on as a root user: This is a new, fresh operating system with absolutely nothing configured, and we'll be creating a new AWS account here. So let me open the browser, and we'll search for something like AWS free tier. What Free Tier actually is, we'll talk about later. But now, let me just explain it shortly. It's simply an AWS offering that lets you use loads and loads of resources for a year for free. When you create a new account, you can use those services
for free for one year. But we'll talk about it later when we actually
create resources. Now we're creating
the account itself. As you can see here, I have
create an AWS account. Let's click that because that's
what we're interested in. Clicking this, it takes
me to that webpage with that create free account
button. Let's click on that. And maybe let's
accept those cookies. Now what I need is
some email address. It can be any email address you use or own, you just have to be sure you can actually access that email during the signup process. I will just use gmail.com. This is the important bit. The one below is just an alias for your account. It can be anything you want; I will call it Automation Avenue. But it's not that important really. The email is important here. I click Verify email address, and now I will have to just wait for the email to arrive. Oh, as you could probably hear, the email has just arrived, and I've received a
verification code, so I have to type it
here and click Verify. Okay. Now we have to create
the password for the root user. We'll talk a little bit later about who the root user actually is, but for now let's just create a password. As you can see, it has to include an uppercase letter, a lowercase letter, a number, and a non-alphanumeric character. Just make sure you've got all of that. Even when I just start typing, let's say a capital letter, as you can see the first box is ticked, then a lowercase letter, now a digit, and now a non-alphanumeric character. As you can see, all those boxes are ticked, so my password is good enough for them. Now I have to repeat it. And that's it. We can continue. I just wanted to note here on the left, you have a link if you want to explore those free tier products. This is the link you can use. But never mind, at this stage,
on the signup process. I'll just click Continue. I will save the password. Now it asks me for
contact information. It asks me if it's a business account or a personal account. I will treat it as a personal account. Anyway, if it's a business account, it will ask you for the organization name as well. Let's click personal, and then my name. You have to give them a name and address, and the phone number, and you agree that you've read the customer agreement. So we can go further to step two. Now it asks you for billing information. That's where you have to provide your credit or debit card number because, as I said, there are some free services you can use, but by signing up you can actually use all services, the ones that are free and the ones that you have to pay for. That's why you have to put
the credit card details. But in later videos, I will show you how to create a budget. This budget will inform you if you have actually started using some services that you have to pay for. For the time being,
we have to put that information in to
be able to progress. That's the credit card,
and we can go further. As I use my Revolut card, I have to confirm that in
the banking application, I click Confirm and that
authorizes this step. Now it asks me for
a telephone number to use to confirm my identity. I will use the same one that I used previously on the previous page. Now, the captcha. Cool, I got it right. If it's completely messy, you can always refresh it and generate a new one, because some of them are generated in such a weird way that you can barely see what's there. Okay, now let's wait for the verification code
sent to my phone number. And as you could hear,
it's just arrived. And we can continue
to step number four. Here it asks me again what
type of account I really need. So we'll go, obviously, with the free one; we don't want to pay $29 a month or $100. The free one is more than we need really, and that's it. We click Complete sign up. As you can see, it was a quick and easy process, and it says, oh, as you can see here, AWS says we're activating your account, but I believe it's just been activated. So yes, I received the email saying that I can start using my account. So I can click either this button or that button, it doesn't really matter. Let's click this one: Go to AWS Management Console. And now it asks me
if I want to sign in as a root user or an IAM user. We really only have root at this stage, but I will show you later on in the next videos how to create an IAM user as well. So for the time being, as there is no choice, we only have the root user. We'll put in the email address we used for the signup process. For me, it was gmail.com. We'll click next, and then the password, the one that you just created for this user. And sign in. Not bad. We are already in our AWS Console. That's what it's called, the AWS Console, where we can do everything regarding this account
or our resources. One thing I want to show you is here, where it says Automation Avenue. Remember, that's the alias we actually entered during the signup process. So instead of some long weird number, Automation Avenue will be used here. For you, it can be anything you want to name this account. The next thing is we can see Stockholm was chosen. I don't know why, but that's what we call an AWS region. This is where you want to create your resources. So as you can see, you can
create it all over the world. Let me just change it to London because that's
the closest for me. But we'll talk about
it later on as well. So don't worry about it now. What I want you to
do is to activate multifactor authentication
for the root user. That's very important. This account has
to be very secure.
7. Add MFA (Multi-Factor Authentication) for root user: What you can do here
in this search field, you can type IAM. We'll click that, and we can see that even Amazon itself shows a security recommendation: add MFA for the root user. To me, almost every single user you create should have MFA enabled, in my opinion, but the root one is definitely the one you need to set up MFA authentication for. We can add MFA here,
clicking this button. And now we have a choice of what we want to use as a second, separate authentication method. The easiest way, I think, is an authenticator app, which is chosen by default anyway. You can name it; for example, for me, it will be my Samsung phone. I can use that as the name, it doesn't really matter. It's just information for you about where this authentication will arrive. And we choose authenticator app. Click Next. And now you can see you'll have to install one of the applications on your phone, either Google Authenticator, Duo Mobile, or some other app; the list of all applications you can see here. But basically, you need just one of them. Google Authenticator is a really good one, I can recommend that one. And then once you've installed that application, you have to click Show QR code and just scan it with your phone. Once you scan it, you will receive an MFA code, which you will have to enter here twice, I mean, a first code and a second code. This way, you will add that device as an authorized one to receive the MFA codes. I will not do that here because this is just a temporary account. I will remove it later on, but you definitely should do that. In the next video, I will show you how to create
that budget before you start actually creating
any resources in AWS Cloud. But you've got your own AWS
account now, congratulations.
8. Create AWS Budget: In the previous video, we created our new AWS account. We created it and logged on as the root user, and we added MFA authentication for that root user. The next most important thing is that we should know how much money AWS services will cost us, if any. We can either try to not spend any money there by just using free tier eligible services, or maybe we are okay to spend some money but we don't want to exceed a certain threshold, like maybe $10 a month, let's say. But one way or another, we definitely want to be in control of our budget, and that's exactly what the AWS Budgets service is for. Being logged onto our console, still as the root user, we can type here in the services; we can search for budgets. We can see Budgets as a Billing and Cost Management feature, or we can simply click that Billing and Cost Management service, which Budgets is part of. Maybe let's click the top
one. It doesn't really matter. Here, you usually see a summary of how much it cost you last month and what the prediction is for this month. But because this is a new account, there is no data available yet, but that doesn't really matter because what we need is Budgets here in the bottom left corner. Let's click Budgets. And as you can
see, we can create a new budget here by
clicking this button. Now we have a choice of
using a template or we can customize, using
advanced settings. Let's stick to just
simplified version. Now, what type of template
do you want to use? By default, you can
see zero spent budget. Create a budget that
notifies you once your spending exceeds $0.01, which is above the
AWS three tier limit, that sounds good, doesn't it? Because that means if I
spend any money on anything, I will get a notification. Also, that's important
to remember, the AWS budget will not disable any resources for
us. It's not meant to be. It will only notify us every time we exceed
certain threshold. We can figure here. The first threshold
will be that $0.01. If we use any service
that we have to pay for, it will send an email to the email address we
specify here below. So maybe before we go there, there is a budget name as well. It's called My
Zero spend Budget, which is okay, but let
me just personalize it. Maybe Mark. Zero spend budget. Doesn't really matter.
It's just a name. And here is where we
enter the email address. And as you can see, it
doesn't have to be one. You can put multiple
email addresses here. You just have to separate
them using commas. I will just use one, let's say, Mark at issue Avenue. That's it. If I want to another one,
just use comma and blah, blah, blah. But we'll
just use this one. That's really, I
believe, because, yes, we can leave everything
ers as it is, and you can see a confirmation. You will be notified
via email when any spend above
$0.01 is incurred. That's fine. That's
what I want. So we create a budget. And that's it. We've got our budget
and don't ask me why it always shows $1
rather than $0.01. I don't know why
it is like that, but it should be for $0.01. But that's not a big deal.
It works as expected, actually, but maybe we want
to create another budget. What if we want to sometimes
use some services, but we don't want to
exceed $10 a month? So we can create
just another budget and set the threshold to that. Let's click Create budget. We'll leave it as a simplified template, which is much easier, and now we will change from the zero spend budget to maybe this one, the monthly cost budget. What's interesting about it, as you can see, is that it notifies you if you exceed, or are forecasted to exceed, the budget amount. If you start a new month and on the first day you have already spent $1, this AWS budget can see you will probably exceed $10 by the end of the month. So it will send you a notification before you actually reach that $10 threshold, because you are forecasted to exceed it by the
end of the month. I hope that makes sense, because our ultimate goal is not
to exceed $10 a month. That's exactly what we need.
So now we can scroll down. Let's call it $10 Budget, yeah, so it's clear for us what it is about. Now we just adjust this value to $10, and again, there is the list of emails we want that notification to be sent to. And that's it. And here below, you've got a summary of how it works. You will be notified when your spend reaches 85%. So if you are at $8.50, let's say, you will get an email, then another at 100%, at $10. And the third option: if your forecasted spend is expected to reach 100%. That's very handy and useful for us. That's exactly what I need. Just create the budget. This way, you can create as many budgets as you want and get notified every time you reach any of those thresholds.
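If you ever want to script this instead of clicking through the console, the same kind of monthly budget can be sketched with the AWS CLI. This is only a rough sketch, assuming the CLI is already configured; the account ID, budget name, and email address are placeholders:

```bash
# Rough sketch: a $10 monthly cost budget with an 85% actual-spend email alert.
# 111122223333 and the email address are placeholders - use your own values.
aws budgets create-budget \
  --account-id 111122223333 \
  --budget '{
    "BudgetName": "10-dollar-budget",
    "BudgetLimit": {"Amount": "10", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST"
  }' \
  --notifications-with-subscribers '[{
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 85,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "you@example.com"}]
  }]'
```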
9. Create AWS IAM User and IAM User Group: In the previous video, we created an AWS budget. We can monitor our spending using those budgets, et cetera. But what I want to point out is that we are still logged on as the root user, and that root user is not really the one we want to use to create anything resource-wise, like servers, load balancers, et cetera. We don't really want to use the root user for that. What we should use instead is some type of admin account, an admin user or any other user that has some limited scope. So in this video, we will create an admin IAM user. The service I need is called IAM; we already saw it when we set up MFA authentication while creating
the account itself. As you can see, I
haven't completed that step, but I hope you did. But what we want now
though, is a new user. If we work for an organization, we don't usually create just one. We create many, and we put them in some groups of users. So maybe let's start from groups instead. That's usually the way, but it doesn't really matter. We can start from any point and we'll end up in the same place. But let's start from groups. We create a group so we can create a group of users, and we will call them admins. It will be a group of admins. And every single admin should have a similar set of permissions, and those permissions are configured here. And you can see at the very top, AdministratorAccess. That's exactly what we need. We tick that, and we can see it provides full access to AWS services, and that's fine. We need them to access services, but they will have limited access to view any financial information. So we call them Admins, and we create the group of Admins. So that's now done, but we can see we have no users
within that group. So now we can go to
users and we will create a user and place it within
that group of Admins. So let's just click Create user. Let's call it maybe
administrator. Slightly different
name. Now what we need: we have to tick that 'provide user access to the AWS Management Console'. You can see it's optional. And somebody might think, isn't that the whole idea why we create that user? We want to access the AWS Console. That's where we are right now, yes? Well, yes, that's true, but every user can have console access or what we call programmatic access. I will show you in the next video what the difference is and how to configure that. But for the time being, we need to access the console, or our admin user should be able to access the console, just as we do right now. We tick that, provide user access, and then we click 'I want to create an IAM user'. Here we can have an auto-generated password or we can have a custom password. With custom, I will be able to type it in. Now, 'users must create a new password at next sign-in, recommended'. This is handy if you create that account for somebody else. You create a user, let's say, I don't know, Jack, you give him a password, and then you tick that. So when he logs on, he will have to change it on the first login. But because we create this account for ourselves, I would say, let's untick it and we will be able to
we typed in here. That's it, we can
click the next button. Save the password? Maybe not now. At this stage, we can add the user to the user group we created two minutes ago, as you can see. So let's add him. Next. As you can see, if we didn't have a group created yet, we could create one here as well. But because we already have the group of admins, we can just go straight to Next. That's it. Here is a little summary. The user name is administrator. He or she will have permissions from the admins group, and it will be a custom password and it doesn't require a reset on the first login. That's cool. Let's create that user. And now we have those sign-in instructions. We can copy that information or we can download everything as a CSV file, and I will use this option. As you can see, the administrator credentials CSV file showed up in my Downloads. I can use that. But what I really need is that URL; it's pretty handy. Let's copy it maybe.
that file as well, but maybe let's copy it here. And now I will log
out from here. I'm still logged on as
a road user, remember? So we sign out we can
either log back in here. But this time as an IAM user, and we will need
account ID, 12 digits. But remember, I copied that URL. Let me open another tab. Maybe I will show you both ways. I can paste it here. Remember that long
URL, I just copy it. This actually already
has that account ID. One last thing to type. When I click Enter,
as you can see, the only difference
is, I'm already on page where I can
log in as Imuser. I don't have to choose
that and I also have account ID
already placed for me. So now I just want to
add administrator, which is the name
of the user we just created and the password we created for that user.
And now I can sign in. But maybe before I do that, let me just go back to this tab. And as you can see, it's a
bit different login page, but it works exactly the same. So the account ID, I can
actually copy from here might be easiest way. Let's go back. I can paste it here, and
here if I click next, as you can see,
we are exactly in the same place. Hope
that makes sense. So because I already
have that all filled in, let me just sign in from here. Close this one. Okay. We are
now logged in as an IM user. As you can see,
says administrator at that long account number,
finishing with 8888. You can also see one more
difference because you can see access denied
in cost and usage. As I said, IM user will have limited visibility to financial information,
and that's fine. We will create this user
to create services. They don't have to do
anything with the finances. So I hope that helps.
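For reference, the same group and user could be created with the AWS CLI instead of the console. This is only a rough sketch; the user name, group name, and password below are just the examples from this lesson:

```bash
# Rough CLI equivalent of the console steps above (names are just examples)
aws iam create-group --group-name Admins
aws iam attach-group-policy \
  --group-name Admins \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

aws iam create-user --user-name administrator
aws iam add-user-to-group --user-name administrator --group-name Admins

# Console password for the new user, without forcing a reset on first login
aws iam create-login-profile \
  --user-name administrator \
  --password 'Choose-A-Strong-Password-1!' \
  --no-password-reset-required
```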
10. AWS CLI installation and API access keys configuration: In the previous video, we created the IAM user, and we are now logged in as an IAM user. It's called administrator, and this is our account number. But remember when it asked if we need console access? This is the AWS Console, and it asked us if we want to access it at all. At first sight this might look ridiculous, because somebody would say, of course we want access to that, yes, but not every user will need this access. AWS also has programmatic access, and everything we can do here, we can also do programmatically from some remote location. And instead of trying to explain that, let me just show you. Simply, that's the easiest way. So let's go back to our IAM console. Let's go to IAM, where we created our user. And as I said, it's also recommended to add MFA for the IAM users, not only for the root user, but let me concentrate on something else, because what I wanted to show you is, if we go to those users, maybe I will click here, it doesn't really matter whether here or there. So this is our user. It's called administrator. Let's click on that. And what we need now is
security credentials. Let's click Security credentials
and then scroll down. And here what we're
interested in is access keys. These are those programmatic access keys I was talking about. Let's click Create Access Key. We've got quite a choice, but we really want the first one, command line interface, because that's how I will want to connect to my AWS account from a remote server or laptop or whatever. So that's it. I just need to confirm that I understand the recommendations. I click next. Here you can describe it, but I will just click Create Access Key. And as you can see, we've got an access key and a secret access key. This is the equivalent of a username, and this is the equivalent of a password. So from a remote location, I will be able to log in to this account as administrator using this kind of username, which is called the access key, and this password, which is called the secret access key. This is an important moment, because this secret access key will be shown only once. Let me show you what it looks like. I will remove it later on. It's very important that you do not show this to anybody. As I said, this is equivalent to your user name and password. So if anybody can see it, they will be able to log onto your AWS account using those credentials. So be careful with that. I will remove them before this video is published. But what we can also do here is download the CSV file. I will click that. This will be, as you can see, administrator credentials. No, sorry, that was for the previous video. Administrator Access Keys is the one that we are
downloading now. So we will have that information in that file. So that's it. Let me minimize that and
let me open the terminal. So this is my laptop here. I'm at home. It can be a PC or a laptop or some other server somewhere. How can I now access that AWS account from here? What I need is the AWS CLI, as it's called. Okay, we need the AWS CLI, but if you type, for example, aws --version, there's a very slim chance you already have the AWS CLI installed. We usually have to install it first and configure it. So let's do it now. Let me just go back to our browser and let's Google how to install the AWS CLI. I'm on Linux here, but you might Google for whatever operating system you're on. So let's check that. First link, let's scroll down. Here we've got the instructions. If you're on Windows, you would use this; for macOS, that one. But I'm on Linux, as I said, well, it's not even normal Linux, it's an ARM version, so I have to switch here to the ARM version of Linux as well. And what I really need is to just copy those commands. So I can click those two squares, that copies it, and I go back to my terminal and just paste it. Enter, and that's it. Now maybe let's clear it. So now, if I type aws --version again, as you can see, now I've got it available. Okay, so how to
configure it now. Fortunately, they didn't make it complicated. It's the command aws configure. Click Enter, and now it asks you for that access key. This is the one. Remember, this is your access key. You can copy it here. Let's go back. We paste it. Enter. Now the secret access key. We have that as well. We copy this. This is the equivalent of your password. Let's go back, paste it here. What it asks you for now is the default region name. I know we didn't talk about regions yet, but a region is basically where you usually want to create your resources. Depending on where you are, you will choose your region. For me, it's London, so it's eu-west-2. It's not really something you have to type in. You can leave it blank, but then you will have to specify, every single time you create something, in which region that resource has to be created. So for me, because most of my resources will be in eu-west-2, sorry, did I say one or two? It's eu-west-2, because eu-west-1 is Ireland. So I want everything in London, yes. So it's eu-west-2. But this is something you will be able to override later on anyway. So it's just for your convenience, but you can leave it blank, as I said. If you have a default one, you will not have to type it, but you will be able to override it. Okay, so Enter, and the output format. Well, I'm not bothered about that right now. Now, I should be able to access
my AWS account from here. So maybe let's start with aws help. It will show you all of the services we can access using that command line. As you can see, basically everything you can access in the Console, you can also access via the CLI. And because we don't have any resources, it might be tricky, but we can use IAM because we've got a user created. So aws iam and then maybe help again, to see what possible commands we have for aws iam. This one looks okay: list-users. Okay. List users. Let's see what we get here. All right, a little hiccup, because this is a new system. As I said, there is nothing installed here. It says no such file or directory: less. Well, my guess is that we have to install less. Ahoy. So sudo apt install less. Okay. A little hiccup. Let's try it again. And as you can see, now it works. It knows we've got the user administrator. It gives us the user ID and some more information, like when it was created or when the password was last used. This way, we can access our AWS account from our laptop using the command line interface.
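As a quick recap, the commands from this lesson look roughly like this. The install commands are the ones documented for the AWS CLI v2 on Linux (x86_64 shown; there is an aarch64 zip for ARM machines), and eu-west-2 is just the example region used in this course:

```bash
# Install the AWS CLI v2 on Linux (use the aarch64 zip on ARM machines)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version

# Configure credentials; paste the access key and secret access key when prompted
aws configure
#   AWS Access Key ID [None]:     <your access key>
#   AWS Secret Access Key [None]: <your secret access key>
#   Default region name [None]:   eu-west-2
#   Default output format [None]:

# Quick test: list the IAM users in the account
aws iam list-users
```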
11. AWS Free Tier, SSH keys and 'manual' EC2 server configuration: Okay, here I am in my AWS Console. I'm still logged on as the IAM user administrator, and the service I need is called EC2, and you can see services: EC2, virtual servers in the cloud. That's exactly what we need. I clicked on it and, as you can see, instances running: zero, because I don't have any virtual servers yet. Let's create one. To create one, we can click on that orange Launch instance button. I'll click that, and now we can specify all the details for our server. First is the name, and the name is really not important, but let's name it Mark's server. It doesn't really matter
what you name it. It's just, you know, so
you recognize it's yours. And we scroll down
and we can see operating systems that are
available for our server. Amazon Linux is
picked by default, and Amazon Linux is like a Fedora-based Linux system with a lot of tweaks from Amazon, and that's good. But we can also have macOS. We can have Ubuntu, Windows. They're all called Amazon Machine Images. If you click this button, you will see there are thousands and thousands more of them. But let's just stick to the basics, maybe. I will just pick Ubuntu. What's important about Ubuntu or Amazon Linux, you can see it's free tier eligible, and I will talk about that in a minute. For now, maybe I will switch to Ubuntu, which is also free tier eligible. But that's fine for now. We'll get back to it. I scroll down. Here I can choose the architecture of my processor. It's x86, which is usually Intel or AMD, or ARM. AWS has their own processors; they're called Graviton. You can use those if you want. But I will just stick to x86. Here, you can choose the
type of your server. We've got t2.micro. As we can see, t2.micro, well, let me just maybe open this. You can see there are many, many of them available, and if you scroll down, as you can see, lots and lots of them. But what's important about this one is that it also has that free tier eligible label. Let's see. This server has one virtual CPU and one gigabyte of memory. Below are the prices, how much it costs to run it. But I want to discuss one more thing later, because we will get back to that free tier. So let's scroll down for now, and we've got the key pair. A key pair is an SSH key
that we can use to connect later to our
server from a remote location. Like this laptop; from my laptop, let's say, if I wanted to connect to my server, I would need a key pair. And as you can see, I don't have a key pair, and it says I can proceed without a key pair, but it's not recommended. That's not really what I want. I want to create a new key pair. I'll click on that button and I can call this key pair whatever, Mark's key maybe. You know what? Maybe here we will change to that ED25519, blah, blah, blah, because this is the newer and better type of key. But both of them will work fine. Now we'll just click Create key pair. As you can see, it also downloaded automatically. I will see it in my Downloads, the Marks_key.pem. That's the key I will need later to connect to this server. As you can see, it has now picked the key pair Mark's key. So I'm happy with that. Now we've got network settings,
want to discuss network settings now because
this is a very broad topic. But if we leave
everything as it is, this server will work exactly as I want it to work anyway. So we just want to be sure that auto-assign public IP is enabled, and that the created security group allows SSH traffic from anywhere. This way, we will be able to connect to our server. If those two settings are exactly like here, that's all I really need. So we can scroll further. So we've got configure storage. This is the hard
drive for our server. And as you can see, by default, it has 8 GB of what they call a gp2 root volume. gp2 is the older generation of SSD volumes. We can switch that to gp3, the newer generation, and now I really want to go back to that free tier. As you can see, it's also here, because the amount of storage we use will also affect that free tier. And as you can see, it says I can have 30 GB of EBS storage if I want to stay in that free tier. I will change that eight to 30. You can leave it at eight as well, but I can have up to 30, so I will change it to 30. Why not? Let me summarize it. If you've got up to 30 GB of gp2 or gp3 root volume chosen, and then if you have an instance type that is free tier eligible, and if you have an operating system that is also free tier eligible, then this server can run for 750 hours every month free of charge. The next month and the following month, the amount of hours will reset and you will have a new 750 hours for that new month. If this is a new AWS account and you've got one year of free tier eligibility, then this server can run for a whole year completely free of charge. It will not cost you anything, as long as you do not exceed any of those limits mentioned here. All right, so let's go back. Well, that's it. There's
nothing else I need here. I can just launch the instance. So I launch the instance and my
server is being created, and it's now up and running. It says it successfully initiated the launch of the instance. I can click on that identifier for my instance. Instance means virtual server; if I click on that, I can see Mark's server, and the status is initializing. That means it's not entirely up to speed yet, but it's been created. Now if I click that button, you can see more information about that server. And one of the most important pieces for me right now is my public IPv4 address. A public address means I will be able to connect to that server from anywhere in the world. Let me maybe refresh that first. Let's see, it's still initializing. That's fine. But even
though it's initializing, you can try to connect
to it already. I can click this
button here, connect. And I've got quite
a choice here. The first way to connect to my server is called EC2 Instance Connect. If I click this button, this AWS Console will take me to that server and log me on to the server. I can run commands now. For example, df -h is a command that will show me my root volume, that hard drive that we attached, and it says 29 GB, and 1.6 GB is used, so I still have 28 GB available. That makes sense because we created a 30 GB volume. So if I go here again, I can go to storage, and we can see it actually is 30 GB in size. All right, so that's how it's done locally from the AWS Console. But what if I want to
connect to it from my local terminal on my laptop?
Let me just resize that. So I can type terminal. And this is my laptop. This is the terminal on my local laptop here at home. I can still go here and click that Connect button, but now I've got some hints. If we go to SSH client, it tells me what I can do to connect remotely from my home to this server. It says open an SSH client. Well, my terminal, or my system, I should say, already has an SSH client. The next thing I have to do is locate my private key. Remember that key that we downloaded? I mean, it was downloaded automatically as Marks_key.pem. I can show it in Downloads. Yes, it's in Downloads. So I have to navigate to my Downloads folder. I cd to Downloads. And I can see this is the file that was downloaded, Marks_key.pem. All right. So what I should do next, I should run this command: chmod 400 Marks_key.pem. I can click on those rectangles to copy that command. Go back to my terminal; one second, let me make it bigger. Maybe clear that. All right. Now I can paste it and press Enter. What I should do next: connect to your instance using its public DNS, and the example is here. I will copy that command, ssh -i Marks_key.pem and so on. Paste it here, and this should take me to my server. This is just a standard warning. This server is not known, so it asks you if you're sure you know what you're connecting to. But I am sure, because that's my server, so I type yes and press Enter, and now I'm also on my server. If I type df -h, you can see that this is indeed my server, because it has a 29 GB hard drive and I've got 28 GB available. That's exactly what we saw just a minute ago when I connected locally using the AWS Console. But this time, you know, I can minimize that just to make sure it's clear: I'm connecting from my laptop that I have at home, and I'm connected to my server that is in AWS Cloud somewhere. And I can list the devices; you can see this is my drive. You can run something like htop to see the CPU utilization, memory utilization, and all that cool stuff. And if I run cat /etc/os-release, I can see it is indeed the Ubuntu operating system. Okay, that's how to create the server and how to connect
to it from a remote location. When you stop playing with your server, I would suggest you also destroy it here, because once you destroy it, you also save those hours on that free tier. Remember, you have 750 hours. So I now destroy this instance: Instance state, Terminate. Yes, terminate. I'm removing this server; as you can see, it's shutting down. Because maybe tomorrow I want to create two servers. And if the total hours for all of my servers do not exceed 750 hours, I still will not be charged anything, because those 750 hours are the overall amount for all servers that I run here. So I can run one server
continuously for a month or I can run two
servers continuously for, let's say, 15 days or maybe
three servers for ten days. None of that will
exceed 750 hours. So I still will not
be charged anything. Hope that makes
sense. All right, and I will see you in
the next episode. Bye.
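For reference, the remote connection steps from this lesson boil down to roughly the following; the key file name and the placeholder address are just the examples from this walkthrough:

```bash
# Restrict permissions on the downloaded private key (SSH refuses world-readable keys)
cd ~/Downloads
chmod 400 Marks_key.pem

# Connect as the default ubuntu user, using the instance's public IP or DNS name
ssh -i Marks_key.pem ubuntu@<public-ip-or-dns>

# A few quick checks once logged in
df -h                 # disk usage of the root volume
htop                  # CPU and memory utilization (install it first if missing)
cat /etc/os-release   # confirm the operating system
```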
12. VSCode installation process: In this video, we will install VS Code. VS Code is an IDE, which means integrated development environment. That IDE can help you work not only with Terraform, but also with Python or other programming languages. It can help you work with Ansible YAML and other file formats. While it's not a requirement to have VS Code installed when you work with Terraform, it is a very helpful tool and it only takes a minute or so to install. Let's see how it's done, and I'm sure you will like it too. It's a very popular tool, and it's popular for a reason. Maybe on Windows first; let me open my browser. I will just search
for VS code download. Let's use the first link at the top, and it couldn't be any easier. You just download the version for Windows. And you see the download; wait a few seconds. Now we just run it. I accept the agreement, next, next, and I will add the icon on the desktop and also make sure it's added to the path. It should be ticked by default, so it should be fine. Next, install. If we click Finish, it should also launch VS Code. That's
what it looks like. As you can see, it has
some highlighting, it has some auto completion and many other useful features. As I said, it's worth
having it installed. But now let's have a look
at the Linux installation. This is my Ubuntu distribution. Again, I will just open my browser. But you know what, before we do anything, maybe let me open the command line as well, because we should always really run sudo apt-get update, and sudo apt-get upgrade as well, just in case, but it looks like I'm up to date, so I can close it, and now VS code download. We will choose the first link again at the very top, and now instead of Windows, we choose Ubuntu because that's my distribution. You can see there is also a package manager option for Mac. I will not go through the process on the Mac as well, because it's exactly the same. It's very basic. The only thing is my Ubuntu is running on Parallels on my MacBook. That means I have to choose the ARM version of this package. So I click this ARM 64 Debian. Yes, I'll click on that. It's being downloaded. So I can click on it, Software Install, open. Potentially unsafe as it's provided by a third party, but I know it's safe, so I will install it. And we need the sudo password. And it says preparing. Now the icon changed, so I believe that's all done. Close it, close it, and I will just search for VS code. We can see it's been installed correctly, so just open it and we will have the same window; it looks exactly the same as on Windows. That's all there is to the VS Code installation process.
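If you prefer the terminal over the graphical Software Install dialog, the same Ubuntu steps can be sketched like this; the exact .deb file name depends on the version and architecture you downloaded:

```bash
# Keep the system up to date first
sudo apt-get update && sudo apt-get upgrade -y

# Install the downloaded VS Code package (file name varies by release/architecture)
cd ~/Downloads
sudo apt install ./code_*.deb

# Launch it
code
```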
13. VSCode extension installation: VS code is now installed. Let's just add one
small feature. Because we are going to
work with Terraform, we will add the Terraform extension to our VS Code. That's the real strength, the real power of VS Code: you've got those extensions. I mean, you click those four little squares here, it says Extensions. So I click on that and you will find loads and loads of different extensions, depending on what you want to work with. For example, here you've got something for Python. You can see it was downloaded 151 million times, and GitLens is another one that you might be interested in later on. But now we are just searching for Terraform. And you will find not one but many of them. And you can try them out. I mean, one of them can be better than another, and I personally have used the first two of them. You can see one is from Anton Kulikov and one is directly from HashiCorp. So today, we will just try that official one from HashiCorp. You can see in the reviews it has only 2.5 stars, but I'm not sure why, because it's not bad really. I've worked with it and I found that it works as expected. So let's install it. We just click that little Install, and that's it. It's been installed. After a while, that's now completed, and it will also tell you what this extension does. It says syntax highlighting and autocompletion for Terraform. That's perfect. That's exactly what we need. You will see what it looks like once we start using it. That means now we are ready to start working with Terraform.
14. Creating Workspace and terraform provider configuration process: We have VS code. We have VS
Code Terraform extension. Now let's prepare our workspace where we will save all our work. For example, you can open a terminal and see where I am: I'm in /home/parallels. That's fine. What's here? I don't think there is anything interesting. I will create a new folder and I will call it terraform: mkdir terraform. So in my home folder, I will have a folder called terraform. This will be my workspace. I will just close it. Here, we can click that top icon, the Explorer, and we can click Open Folder. And we will open that folder that we've just created, the terraform one. Let's open it, and it should be empty because we've just created it. Yes, I trust it, this is my device. Maybe I will make it bigger,
our provider. I already mentioned previously that provider is that bridge. That connects Terraform
with that space where you want to create that
infrastructure and we want to work with AWS, we need AWS provider. To do that, let's Google what that AWS provider
should look like. I will Google Terraform
AWS provider. At the very top, we've got
AWS provider, HashiCor. Let's click on that,
and that's it. They say that's how it's done, but they also have
the resource here. We are not going to
create a resource yet. We're just interested
in the provider itself. So let me just copy this part. We'll come on, see. Now
let's go back to VS code. And in my folder, I have
here little icons New file, and I will create a new file
and I will call it provider. I will add extension dot TF. As soon as I've done
this, have a look, the icon changed on the left
because if I click Enter, I've got a new file and VS code detected exactly it was that VS code extension
we installed. This extension detected
the terraform file, which we can also see here
in the bottom right corner. It can see we are
working with Terraform. Let me now paste the portion
I copied from that website. This part will tell Terraform
what we are working with and please note that you can have multiple providers. Maybe you work with AWS, but maybe with Azure as well, and maybe with Proxmox. You can paste here
multiple providers, but in our case, we are just
going to work with AWS, so that's the only
provider I need. But it is possible to specify more than
one providers here. And here it says, configure
the AWS provider. You might already recognize this portion region
is something we already specified
previously when we worked with IM
groups and IM users, we specified the AWS region
and I specified EU West two. Let me just press Command S on Mac or controls on
Windows, it would be. Command S to save this and
I will click that terminal. I will say new terminal.
I will open here. Now if I type AWS configure, this is something we
already did in the past. We configure the access key, we configure the secret key, which is like user and password. That is what
Terraform will use to access to be able to
access the AWS Cloud. But then if I click Enter again, I also specify the
default region. EUS two is London, actually, but maybe you want to create your resources in
a different region, or maybe I want to have this next resource we're going to create somewhere else. This is the way I can override whatever I specified here. If I leave this setting as it is, that means my resource, for example a server, will be created in us-east-1. So if I remove it, then it will use the default value, which is eu-west-2. I
hope that makes sense. There was also an output format setting, but we didn't say what default output format we are interested in. What I will do, though, is match things up here: I will leave the entry in the provider, but I will make it match what I have as the default region as well. I will remember, though, that this is the place where I can override that value. So Command S again,
and that's it. We've got our workspace. It's this terraform folder, and we've got the provider
configuration saved.
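Just as a reference, here is a rough sketch of what our provider.tf may look like at this point. The version constraint and the copied snippet depend on what the registry page shows, so treat the exact values as placeholders:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-2" # overrides the default region from aws configure
}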
15. Terraform init and terraform lock file: Now let's talk about
the terraform init command. terraform init is the first command we want to use if this is a new workspace and we never previously worked with Terraform here. So we have to initialize everything. Let me just run this command and you will see what I mean. I basically just run terraform init. That's all it is.
Let's press Enter. What you can see is that it's initializing the backend, it's initializing the provider plugins, and it will tell us everything it does. You can see it's installing hashicorp/aws, and it's already done. It doesn't take long, but what you can see here is that the provider has been installed. It's version 5.83.1. You can also see Terraform has created a lock file. It's called .terraform.lock.hcl, and we can see this file here. Remember, we only had
the terraform folder and one file. But have a look now. We've got another folder. It's a hidden .terraform folder with some more files, and we also have that .terraform.lock.hcl file. So in that hidden folder, Terraform downloaded all the binaries it needs to work with the AWS provider, and the lock file records for Terraform which version of those binaries was installed. We can see it's version 5.83.1. That's because the constraint we gave Terraform was version 5 or higher, and we downloaded version 5.83.1. What I mean is, if we go to that provider file, we can see we stated we need an AWS provider version that is higher than 5.0. So Terraform downloaded the latest one, which is 5.83.1. So this is simply additional information about those binaries that are in that .terraform folder.
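As an illustration only, the content of that lock file looks roughly like this; your exact provider version and checksums will differ, so don't copy these values:

# .terraform.lock.hcl (generated by terraform init, not edited by hand)
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.83.1"
  constraints = ">= 5.0"
  hashes = [
    # checksums of the downloaded provider binaries are recorded here
  ]
}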
16. Build first server using Terraform: That's great. We should
now be ready to create our first server in the cloud using Terraform. I logged on to my AWS account. The region we mentioned already is here: eu-west-2 is London. Initially, in the provider config, we had us-east-1, which is North Virginia, but I want to create my resources close to where I live. So the closest for me is London, eu-west-2. This is basically what we have in our provider.tf file. I said I want them in region eu-west-2, and we already initialized
our Terraform AWS provider. So it should be able
to work with AWS now. So how do I create a server here? As we can see, we have no instances. An instance means a virtual server, and a virtual server is also called EC2 in AWS: Elastic Compute Cloud. Everything we create in AWS is called a resource. So what I can Google now is "Terraform EC2 resource". The very first link says aws_instance. Well, that's exactly
what I need. In fact, if we scroll down, we will see many examples of how we can create
an instance in AWS. But let's go back
to the very top. Basic example using AMI lookup. But you know what? I will
make it even simpler. We are not going to
even use AMI lookup. I will just hard code the AMI. I will show you what I mean. I will basically just need
these five or so lines of code, and I will just press Command C on my Mac to copy them. I will go back to my VS Code, and I will create another file here, New File. I will call it server.tf. Again, a Terraform file. I will paste it
here. What's that? First thing, first
word is resource. As I said, anything that we want to create is called a resource. The second part is what
type of resource it is, and it's aws_instance. Instance means a virtual machine, or in other words an Elastic Compute Cloud (EC2) instance. That's simply how AWS calls it. That means we are going to create a virtual server, and this is the name. I can call it whatever. Let's maybe change it. Not web, maybe not web server. I want to call it mark_server. I can call it whatever.
Doesn't really matter. The AMI is something
I can specify myself. What is AMI? We already
dealt with AMI. Not sure if you remember,
but if we go to AWS, we were creating a
server manually. We then clicked that
launch instance, and this is where we choose our operating system and
we chose Ubuntu there. Every operating system here has an Amazon Machine Image ID; this is the Amazon Machine Image identifier, ami-05-blah-blah-blah. You can also see here that every single operating system, for example Red Hat, will have a different Amazon Machine Image identifier. If you go to Ubuntu, maybe I want to build my server based on the Ubuntu operating system, this is the identifier I need, and I can simply copy it and paste it into my code. I just have to add it here as a string data type, which means I have to use double quotes, and I use Command V to paste the Amazon Machine Image ID. Next is the instance type, and it's configured to t3.micro by default; that's what we copied, I mean. But if we go back again to this AWS account, remember that free tier eligible label we were talking about when we were creating the AWS account. If we check t3.micro, it doesn't have that free tier eligible tag. That means we would pay for it. If you have a new AWS account, you've got that free tier. So I would suggest using t2.micro instead, because that's part of the free tier eligible services. So let's go back and I will change that t3.micro to t2.micro. And then the tags: this Name
tag is simply what you would put here where we were choosing a name for our instance.
So I will change it. You can leave it, of course, but I will change it to, what, Mark's first server. All right. And now I
just do Command S, and I have saved
everything that I need to create my first server.
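Putting that together, a sketch of the server.tf file could look like the following. The AMI ID below is a made-up placeholder, so copy the real Ubuntu AMI ID for your region from the launch-instance page:

resource "aws_instance" "mark_server" {
  ami           = "ami-0123456789abcdef0" # placeholder, use the Ubuntu AMI for your region
  instance_type = "t2.micro"              # free tier eligible

  tags = {
    Name = "Mark's first server"
  }
}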
17. Terraform plan and Terraform apply commands: Okay, we have
everything we need. So how do we create that server? Let's first go back to our AWS. Let me go back to show
the number of instances. As you can see, we have
no instances running, which means we have
no virtual servers. If I click on that,
you also have to make sure you are in the correct region. We are going to create our server in London, which is eu-west-2. So
let's monitor it. I will just leave it as it is, and we have no
running instances. I can even remove that, which
will show me all instances, even if they are stopped, but we still don't have any. So now I will go back to my code to VS code,
to my terminal. I will clear it maybe. The first command I might want to run is called terraform plan, and terraform plan will
show me everything it's going to build
in AWS Cloud, but it will not apply
those resources, which means it's only
information for me. Let me just run this command and you
will see what I mean. I click Enter and the
plan is being created. All right, that's the plan.
Let me make it bigger. What does it say? It says that the Mark server will be created. The AMI will be this, that's because we specified it; it's the Ubuntu AMI. The instance type is t2.micro. And the name will be Mark's first server, and for everything else, Terraform says it will be
known after apply. It doesn't know yet
what it's going to be. Once we apply that, only then we will find out
what's the value, for example, for
public IP, et cetera. If we scroll further,
the plan is for one resource to add because we're creating
one virtual machine, zero to change, and
zero to destroy. We will see later on what that does. Let me clear, maybe. Now if I press the up arrow, instead of terraform plan, I can run terraform apply. What apply does is it basically runs
terraform plan anyways, first of all, but then we
will see the difference. Let me just click Enter. We can see exactly the same plan as with the terraform plan command, but now it asks me: do you want to perform
these actions? Terraform will perform the
actions described above. Only yes will be
accepted to approve. If I say yes now, Terraform will create
a virtual server for me in AWS Cloud. That's what we need. So let me click Enter and let's
see what happens. Let's wait for a few seconds and it says the server
is being created. That's it. 13 seconds. We've got the
instance identifier, which is the identifier for my virtual server. You can see it ends with 8ce. Let's go back to our AWS and let's refresh it. Have a look, the 8ce instance
ID. This is my server. It still says initializing status check, but
it's already there. Instance state is running, and all this information
like public DNS, public IP, et cetera, remember, Terraform couldn't figure
out what it's going to be, because we didn't
specify those details, it's simply randomly
chosen by AWS Cloud. That's it. That's our
instance up and running. And if you are interested,
let me clear that again. Now, what happens if I run terraform apply again? What do you think will happen? Let's have a look. I will press Enter. Terraform checks what's in the code, what's in those files. You can see there is one
server that we want to have. And this server is
already up and running. Terraform can compare what's in the cloud and what's
in the code here. And if the desired
state is matching, this is called declarative. You declare what you want
to have in the cloud. I want to have one server, and that one server is already there.
It's up and running. So terraform says no changes. Your infrastructure
matches the configuration. Whatever we configured here
is already in the cloud, so there is nothing
to do, and that's very important to understand. We're going to create more of those files with
different resources, and if one of them is
missing, for example, terraform will only recreate
that one that is missing. So the infrastructure will
match the configuration again. It's always the
desired state, yes, and that state is now
kind of confirmed. So there is nothing
for terraform to do.
18. Terraform destroy command: So we have our instance.
It's up and running. Yes. Instance, I
mean virtual server. If I refresh it now, status check also changed. It says, both
checks have passed. My instance is running,
and it's cool. But what if I now want
to remove my instance? I don't want it to be
running any longer. I want to destroy that instance. And remember, when
we did it manually, we chose that instance state,
terminate, delete instance. But in terraform, I use
the destroy command. So I simply say terraform destroy. And terraform
destroy will simply remove whatever we
have in our code. Let me run it to show
you; I click Enter. And before it destroys anything, it will ask us if that's
what we really want to do. Do you really want to
destroy all resources? Well, we've got only one to
destroy, as you can see, because it also creates a plan, but this time it's
a destroy plan. And if we scroll further up, we can see AWS
instance Mark server will be destroyed with this AMI, but you can see now terraform
holds much more information about that instance because whatever AWS Cloud
assigned to us, Terraform is aware
of all those values. This is our public
IP, et cetera. So if that's the instance
I want to destroy, I just say yes and click Enter, and it says it's
destroying that instance. Oh, it took a while, 41 seconds, but you can see it's
now been destroyed. And if I go back to the
console and refresh it, we can still see that instance, but the state is terminated. And this will disappear
after a while, but this is just a
confirmation that this instance has
been just removed. So let's say now I want to quickly build another instance. If I do Terraform apply
again, I will say yes. Another instance is being
created for me by terraform, but this instance will
have a different identifier. That was quick, 13 seconds, and now we can see it ends with 0ea. So if I go back and refresh again, now we will have two entries, but only that 0ea one is running, and the previous one is terminated. That's how you create and destroy resources
using terraform. Let me destroy this
one as well because I don't need it up and
running. I will say destroy. So yes, and this new instance, the 0ea one, is also being destroyed, which means I will shortly have two instances with the state terminated. This one, for some reason, took much longer, 1 minute 31 seconds, to destroy, but eventually it was destroyed. So if I refresh, we can see two instances, but both are terminated.
19. Create AWS VPC with Terraform: Now let's think about it. We had that server created in AWS, and this server had some
private and public IP addresses. Where did those addresses come from? The private IP address is actually part of a VPC and part of a subnet. Let's go back to AWS and let me duplicate this tab. AWS creates default networking for us. All the networking is created for us in the background, because if I'm not a network engineer and I just want, for example, to run one web server, maybe I'm not worried about all the networking behind it. Then AWS will put that server in the default VPC, in some default subnet. Let's search for VPC, so the VPC service, Virtual Private Cloud. This is basically a space where all your resources will be placed unless you specify otherwise. And this is it. You can see the VPC ID, you can see the IPv4 CIDR, which is the IP prefix. So our server had a private IP assigned from within that IP prefix. And we have a routing table, the DHCP options, and so on. And if I duplicate it again, you can see that VPC; if we click on that VPC, you will also see three
subnets created within that VPC and our server will be allocated in
one of those subnets. You can see subnets eu-west-2a, 2b, and 2c. Let's click one of them, maybe A, and they each have a different IP prefix. Again, this is a smaller IP prefix. It's a prefix that fits within this larger
prefix of the VPC. I know this is not AWS training, and it's not computer networking training, but I just want you to
know that by default, our server will
simply be placed in the default VPC and
one of those subnets. But that doesn't mean
it has to be like that. We can create our own
VPC, our own subnet, and we can state that
we want our server to be placed in one of those subnets that we
are going to create. But what's even more interesting, and that's what I really want to show you here, is that Terraform will have to resolve
some dependencies, and I know it might
be a bit confusing. At this stage, don't
worry about it. You will understand what I mean when we start working with that. Let's go back maybe to VPC. We can see one VPC. It's only that default one, and it has a CIDR of 172.31.0.0/16. But we are going to create our own VPC. So I go back to my Terraform. I will create maybe a new file. I will call it vpc.tf, and now we can Google how to create one. I mean, I will Google "Terraform resource AWS VPC", Virtual Private Cloud. The first one at the top is the one that I need; we've got some examples. I could use this one, but I can see the second one has a name as well, so that's something that's already included. So maybe I will use this. I will just copy it here, clicking this button, I will go back to my code, and I will paste it here. Maybe we make this smaller. The cidr_block is this IPv4 CIDR. The default VPC has 172.31.0.0/16, and in this code, we've got 10.0.0.0/16. It's fine. We can leave it as it is. The VPC, I will call it not "main", but I will change it maybe to Mark, or you know what, my VPC: my_vpc,
something like that. Then we have instance tenancy. I'm not sure why
they added this. If we go back to VPC resource, if we scroll down, you
have argument reference. It's basically showing you what you can add here,
what you can specify. And as you can see,
instance_tenancy is optional, so it doesn't have to be there. And anyway, the default is "default". If we didn't have anything here, the instance tenancy would be set to default anyway. But it doesn't matter; it can be left as it is. But the Name tag for my VPC, the name that will be shown in AWS, I will actually change to "my VPC", something like that. I will Command S to save
it and just to test it, I will go back to my server. But what I will do, I
will comment it out. So I will use Control
and forward slash to comment everything
out, and I will save it. This will be seen by Terraform as a comment, so it will be ignored, simply because I want to concentrate on my VPC only now, and I just want to test if it works as expected. I will just run terraform plan. And you might be wondering, "Mark, you don't have the provider or anything in this file, how does that work?" I mean, remember that the provider
is actually in a separate file, and it doesn't really matter that it's a separate file. The thing is, I am creating a separate file for everything: a separate file for the server, a separate file for the VPC, and one for the provider. But in fact, you could have all the configuration in just one file called whatever.tf. It doesn't matter what you call it. It only has to have the extension .tf. But Terraform actually
reads everything that is inside that folder in that workspace we created
in the terraform folder. It doesn't matter if
your configuration is in one file or in 100 files. Terraform will read all
the information from all the files first and only
then it will start acting. It will start creating whatever
you have in those files. That's very important
to understand as well. I also wanted to point out that there is a new file called terraform.tfstate. It was created when we created our server, and we will talk about it shortly because it's a very important file as well. But for the time being, we are just creating the VPC, and the provider is already specified in a separate file, but it will be read by Terraform anyway. All right. So I
will run Terraform plan. Let's see if it will work, and looks like it does. One to add because I
commented out the server, which means I'm only creating the VPC. We can see the 10.0.0.0/16, the instance tenancy is default, and let's have a look: the ID is known after apply. Also, the ARN is known after apply, et cetera. There are many things that are only known after this resource is applied, I mean, pushed to AWS and actually created there. That ID is very important because we will need it later on when we create our subnet. For the time being, let me
just re-run the command, but this time with apply. I will say yes this
time, one to add. Yes, that's my VPC, and it should be
called my VPC when we see it in AWS Cloud, so yes. Shouldn't take long. It's
already done 2 seconds. So if we go back
to VPCs and if we refresh that site,
that page, now I've got my own VPC, and it's named my VPC, and the CIDR is 10.0.0.0/16. This VPC ID is what I mentioned. The VPC ID, Terraform said,
it will be known after apply because that's something
that AWS creates for us, and Terraform can
only read it once this resource is actually
created and it's in the cloud.
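For reference, a sketch of the vpc.tf we ended up with might look like this, based on the registry example with our own names:

resource "aws_vpc" "my_vpc" {
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default" # optional, "default" is already the default value

  tags = {
    Name = "my VPC"
  }
}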
20. Create subnet and Terraform dependency tree explained: Why was I talking about that VPC ID if we haven't used it anywhere yet? Well, that's because
we are going to use it now when we create a subnet that will be
part of this new VPC. Because if we currently
go to subnets, you can see them here
on the left under VPC. If we go to subnets, currently, we've
got three subnets, but even looking at the
cider at the IP prefixes, we can see they belong
to that default VPC because the IP addresses
start with 17231 something. We also can see
here that they all belong to the same VPC,
virtual private cloud. We created our own VPC, but we haven't got any
subnets there yet. This is my VPC, but it doesn't have any subnets.
Let's create one. Let's go back to
our code and maybe before I create another
file called subnets, maybe we will edit here first. I will show you the
reason for that. I'll go to Google.
We are now looking for the Terraform resource AWS subnet. It will be this link, and this is how I can create my subnet. But if we check a little bit further, you can also specify an availability zone. It is optional, but it's something... well, one second. I believe if we go back to the instance page, I mean, I know it's not the resource we're looking for, but if we scroll further, I believe, have a look. This is the subnet, but it also has the availability zone specified. In fact, it also has a separate network interface where we can specify the IP address
for our server, and this is the configuration
for the server itself. You know what? That's cool. Let's use all of that, really. So first I need my subnet. I will copy just that. Let me paste it here first, though; I want to show you something. We added a subnet configuration here, and it's called my_subnet. That's fine. We can leave it. We can call it whatever we want. But where is this name even used? If I hover over it, it says reference name. This reference name, for example this my_subnet, is used if we have to reference
this portion, this resource in
some other resource. And look, it already
happens here in this line. For example, I want
to create a subnet, but that subnet is part of a wider, larger VPC, and in fact I want my subnet to be part of this VPC, this resource. What I mean is, if I call my resource, for example, Mark, then this reference name will have to be changed here as well. Somebody might think, what's going on here? Why is it done like that? Remember, that ID, the VPC ID, is only known once the VPC
actually exists in AWS. Terraform has to
create the VPC first. It has to wait until AWS
assigns the VPC identifier, which can only then be referenced here in this line. We are saying we want this subnet to be part of the VPC resource, and that resource we called Marek. That's our name for that resource, the reference name, and Terraform needs only the ID, because AWS will provide the ID, the ARN, and many different things, but we only need to reference the ID here in this line. That's why, if we go to that instance example, we can see we only need the ID. Whatever I call this VPC, let's put it back to my_vpc, I also then have to change this reference name here. Again, my_vpc, it has to match. Imagine as if I removed those quotes and just put dots here, something like that: aws_vpc.my_vpc.id, aws_vpc, dot, my_vpc, and then I only need dot id. I know it's confusing. I understand that. This
is all done this way, so Terraform understands that it will have to
create the VPC first, and it will know that because the VPC is referenced later on here in this line. It creates a so-called dependency tree. All right, I know. Now, the cidr_block for my subnet: let's say I want 10.0.1.0/24, and because we create everything in eu-west, not us-west, I have to change the availability zone here to eu-west-2a, because we have three availability zones, A, B, and C. And the Name, I will change to my subnet. I will press Command S. Let's run terraform apply, maybe clear first. And it's one to
add. Aha, because we didn't actually destroy the VPC. The VPC is still there. It will only add the subnet in this case. And you will see
that subnet will also have its own identifier, and it's also known
only after apply. So maybe for the time being, let's just run it, yes. And it's added. That's 1 second. Even quicker and quicker. So now if I go to subnets, I should have one more subnet, my own subnet. And here it is. It's called my subnet. It's part of VPC, my VPC, and the IP prefix
is 10.0.1.0/24. But before I forget, let's now destroy all of them: terraform destroy, and I say yes. Two to destroy, because we will destroy the VPC and the subnet this time. Yes. Have a look at the plan. The VPC will be destroyed, and note that Terraform now knows what the ID is, because that VPC has been created. It finishes with 166e, and indeed my VPC ends with 166e. And the subnet also got its own ID, but it looks completely different; it finishes with 71b. So if I go to subnets, we can see that indeed it's 71b, it's this identifier, and they both will be destroyed now. So I say yes, Enter.
We've got two destroyed. So if I go and refresh, we are back to the
default VPC and, for subnets, if I also refresh, we have only those
three created by AWS.
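Here is a sketch of the subnet resource we just walked through, showing the reference that creates the dependency on the VPC:

resource "aws_subnet" "my_subnet" {
  vpc_id            = aws_vpc.my_vpc.id # known only after the VPC has been created
  cidr_block        = "10.0.1.0/24"
  availability_zone = "eu-west-2a"

  tags = {
    Name = "my subnet"
  }
}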
21. Create server, subnet and VPC and more about dependencies: Now if we go back to our code, remember that we commented
out that server. Let me uncomment it with Ctrl and forward slash, and Command S. We were looking at that example of aws_instance. We have our VPC. We have our subnet already, and this instance, we can see, has that
additional thing, network interface where we
can specify what private IP we need assigned to
that virtual server. Let's copy this portion as well. Let's go back to our code. For now, I will paste it
below and make it bigger. Now, I already
have AWS instance. The only new thing here really
is that network interface. So let me copy that portion, and I will paste
it in my instance. That's it. That means I
don't need this thing. Get rid of that portion. So what we have now is
exactly what we used to have but with that
network interface added. This network interface again, has that dependency that
Terraform has to resolve, because the network_interface_id is aws_network_interface.foo.id. What that means is that Terraform has to create this resource first. Notice, when I highlight this portion, VS Code also highlights that portion. It knows, thanks to that extension we installed for Terraform, that these two are related. The same if I highlight this foo; these are those dependencies. Yes. That's why it's
much easier to work when you've got that
terraform extension added. Again, if I change this resource
name to let's say Mark, then I also have to
change it here to Mark. Terraform knows that
it will have to create that network
interface first because this ID will be only known once this interface
has been created. And now private IP, I can
choose whatever I want, but it has to be
part of my subnet. So 10.0.1, maybe 20, I say. Why did I say 10.0.1.20? Because, again, that network interface has another reference, another dependency, and it's aws_subnet.my_subnet. If we go back to the VPC file, we can see this resource here, aws_subnet, my_subnet. We can see it's in
a different file, but as I said, it
doesn't matter. Terraform will read
all the information, everything that is
inside this workspace, inside this folder first. This subnet has a
prefix of 10.0.1.0/24. That means my private IP address has to be within that scope, and 10.0.1.20 is within that scope. Then the Name, maybe I will change it to my interface. As we can see, we've got more and more dependencies. First, let me maybe Command S. Terraform will have to first create the VPC, because the VPC is already
referenced in the subnet. Once it creates the subnet, this subnet is then referenced
here in network interface, and then that network interface
will belong to my server. And we are ready to run.
But before we do that, let me go back to the VPC file, because this subnet, we can also just cut it from there, and I will Command S, and I will create a new file and call it subnet.tf, and I will paste it here. In fact, you know what, I will show you how easy it is now for us to create a second subnet if we want to. Wherever you are, whatever you want to copy, whatever you want to create, you can just copy it and paste it below. For example, I will call it my subnet two. It will be part of the same VPC. I will change the cidr_block to 10.0.2.0/24 maybe, and maybe I want to have this subnet in a different availability zone too. I will maybe change it to B. I will also name it my second subnet, or something like that, and this one I will call my first subnet. Job done, Command S. Now let's see if I
messed something up. Let's clear everything, and I will run terraform apply maybe. As we remember, apply will
also create a plan anyways. We will see if everything
is fine or not. I will run it. Looks
like everything is fine. It says five to add. And what is it going to add? Let's go back to the very top. And what you might find interesting is it says first that it will create Mark's server. Yes. Well, that's not quite true. It's simply that at this stage it hasn't resolved these
dependencies yet, but you will see very
soon what I mean. If we go further,
we've got instance, yes, we've got that server, then it will create
network interface for that server as well.
As we can see, it's here. It will create the subnet. It will create second
subnet and the VPC. However, when we run it and I say yes, if you concentrate as I click Enter, it will start with the VPC, and then it will do the subnet, as you can see, then it will create the network interface. Then it's the second subnet, because the second subnet wasn't referenced in that interface, so it doesn't matter when that second one is created. But the first subnet had to be created before the interface was created. And the server was created at the very end, because all those dependencies had to be resolved. The server had that network interface, which was part of this subnet. That subnet was part of the VPC. That's why it had
to do it in order. All right. Let's check if we
have everything we wanted. VPC, I will refresh. Indeed, I've got my VPC with that IP prefix.
We go to subnets. I refresh. We've
got two subnets, my first subnet
and second subnet. And if we go to
instances, let's refresh. We've got Mark's first server. Well, it's not the first, but that's our tag. But you can see it has a private IPv4 address of 10.0.1.20, which is exactly what we specified here in our server portion. That's our private IP. Everything works as expected, so let's destroy the stuff. I will clear, and I will say terraform destroy. And it also will be
removed in some order, checks everything, and
it says five to destroy. Yes, that's exactly
what we want to do. So I say yes, and it
starts with subnet two, which is not referenced, then Mark's server with the interface, now the interface itself, then the subnet
that was referenced by that interface and
the VPC at the very end. So the order is exactly opposite to what it
was when we applied that infrastructure.
Hope that makes sense.
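As a sketch of the full dependency chain described above (the AMI ID is again a placeholder):

resource "aws_network_interface" "my_interface" {
  subnet_id   = aws_subnet.my_subnet.id # the subnet must exist first
  private_ips = ["10.0.1.20"]

  tags = {
    Name = "my interface"
  }
}

resource "aws_instance" "mark_server" {
  ami           = "ami-0123456789abcdef0" # placeholder Ubuntu AMI
  instance_type = "t2.micro"

  network_interface {
    network_interface_id = aws_network_interface.my_interface.id # the interface must exist first
    device_index         = 0
  }

  tags = {
    Name = "Mark's first server"
  }
}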
22. Terraform tfstate file: So we know how to configure
multiple resources and how one resource might depend
on another resource. Like here, this
network interface will only be created once we
have the subnet created. But how does Terraform know what is already created in AWS Cloud and what is not? That's all about this terraform.tfstate file. When you create something in the cloud, AWS will produce a bunch of information about that resource, and Terraform will then store it here locally. In this case, we store it
locally on our computer. As we can see, this
file is now very short. But let's now recreate our infrastructure
again in the Cloud. I just run Terraform apply, and we will see how this
TF state file changes. So I just press Enter, and it says five to add
because we will add VPC subnet and the network
interface plus our instance. I say, yes, that
sounds about right. So click Enter, and this
infrastructure is being created. You can see the
differences already. Something is being
written to this file. Now that the process is completed, let's have a look at what's
in that Tf state file. As we can see, it's much
longer now. Look at that. Loads and loads of
information about the resources that were just
created in our AWS Cloud. For example, I can see there is loads of info about my VPC, for example, what's
the CIDR block, what's my routing table ID, the network access list, et cetera. The fact is, you usually don't read that information directly from this file. It's not really user friendly; the amount of information is basically overwhelming. What you would usually do instead, let me clear here, is run terraform state, and then we can do --help. We want information about our state file, this tfstate file. So we've got subcommands like list, show, remove, push, et cetera. Let's see what list does: terraform state list.
What we have here are our resources, because we can see
we've got one VPC, we've got two subnets, interface, and the instance. If I want to get more
information about my instance, for example, I can
do Terraform state, show, and now I will
copy just that. And it will display all information
regarding my instance, about my virtual
servers in the cloud. And now, all this information is kept in this TF state file. So if I run again,
let me clear again. If I run now Terraform apply, what do you think will
happen? Let's press Enter. Terraform was refreshing
the state, which means it contacted AWS using those API keys we created. It checked what is already
there in the cloud, then it compared that
output to what it has here in this TF State file and because everything
matched perfectly, it told us, no changes, your infrastructure
matches the configuration. This TF state file currently is kept here locally
on my laptop. The problem is, it's
usually the case, especially in bigger
companies that you are not the only person responsible for maintaining that
infrastructure. You might have, I don't know, five or ten people working, maintaining and upgrading
the same infrastructure. What you usually do is share those files, like the provider.tf, server.tf, and vpc.tf files. You would keep them remotely in GitLab or GitHub, let's say, so all other members can
download those files, so everybody or every member
has exactly the same code. But now let's think what
happens if both me and one of the other members start changing something in
that infrastructure. For example, I want to change something
regarding my server while the other person wants to change something
regarding the subnet. That might cause
a problem because my server depends on
what is in the subnet. If something in the
subnet changes, it will affect my server because we remember we've got
that network interface, and that network interface is dependent on the
subnet configuration. So basically what we need, we want only one person at a time to work on
this infrastructure. Even if both people
do some changes, only one person
should be allowed at a time to change
the infrastructure. And once one change
is completed, only then can the next change happen, and we can use that tfstate file as a solution to that. If we work in a team,
not kept on my laptop. It should be kept somewhere
remotely where it can be constantly updated
and it can be locked. So if one person works
on the infrastructure, that person will
lock this file so nobody else can read
or write to that file. The other person would
have to wait till I, for example, finish my changes and the file is updated; only then will I unlock it, and the other person can take over and lock the file again. How do we do that? Well, there are many ways
we can achieve that. But because we are
working with AWS, the most common solution is to keep this file in an S3 bucket. S3 is the simple storage solution within AWS, where we can store files. I can show you what it looks like. We have to add a new block: terraform, backend "s3". It's an S3 bucket within AWS, and this is the bucket name where we want to store our tfstate file. We want to store it encrypted. Or you know what, maybe it should be encrypted, but let me remove that for now; we might want to have a look inside it. So I'll just remove it, but it should basically be encrypted, yes. In region eu-west-2, because everything else is kept there, so why not? Then, as I said, you have to have that lock mechanism
for this file. We used to have a separate DynamoDB table. DynamoDB is like a miniature database, let's call it, and when anybody opened the state file, a lock entry in that database marked the entire file as locked. That was the mechanism we used to use to lock the tfstate file in the S3 bucket. But that also meant that we had to maintain one more thing in our AWS infrastructure, which was that DynamoDB table. With the newest Terraform,
from version 1.10, and you can check your version
running terraform version. You can see I'm on 1.10.4. From 1.10, there is a new way of doing it, and you can replace that DynamoDB with use_lockfile. You can see the Terraform extension helps us with that; we've got this auto-populated. Let's click on that, and we want it set to true. We want to use that lock-file locking mechanism within S3 itself. All right. That should be it. I will just Command
S. Now I need this bucket to exist in AWS. Let me copy this value, because I don't have that bucket. Let's quickly create one, and I will go to S3. A bucket is like a container within AWS where you can simply store stuff. I've got some buckets, I don't remember what they are for, but I will create a new one. I will paste that name
here and scroll down. It's also advisable to
enable bucket versioning. So all the older versions of TF State file
will be preserved. If somebody overwrites this file with
something incorrect, you will be able to go back to the previous
version of that file. But that's up to
you. It's advisable though, to have it enabled. And I just create the bucket. The bucket is here, so I just go back to my code, and
what I have to do is run terraform init again to initialize the entire workspace. So I run terraform init, press Enter, and it's initializing the backend. Terraform noticed that we are going to change the backend configuration, where the tfstate is kept. It asked me, do you want to copy the existing state to the new backend? Yes, that's what I want to do. All right, successfully initialized. So from now on, if I go to my bucket, if I click on that, we can see... why is this prod here? That's what I added
here in the key. For example, you might
call it prod, dev, or test, or you can call it whatever, because maybe, I don't know, you want to keep tfstate files for various environments in the same S3 bucket. So maybe you don't want to keep just terraform.tfstate, but also information about which environment it is for. That's why I added prod, but it's optional, it's up to you really. So now if I go to that prod folder, this terraform.tfstate file is from now on kept here. And if I start working on that, if I make any changes, this lock file will prevent anybody else from making changes
at the same time as me. Only one person will be able
to do the changes at a time. So I hope it makes sense.
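A sketch of the backend block described above, assuming Terraform 1.10 or newer; the bucket name is a made-up example and the bucket must already exist in your account:

terraform {
  backend "s3" {
    bucket       = "my-terraform-state-bucket-example" # hypothetical name, use your own
    key          = "prod/terraform.tfstate"
    region       = "eu-west-2"
    encrypt      = true
    use_lockfile = true # S3-native state locking, replacing the old DynamoDB table
  }
}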
23. Variables: Now let's talk about variables. Nearly every
programming language has the concept of variables, and HCL, the HashiCorp Configuration Language, is no different. It also has the concept of variables. So let me first run terraform destroy, because we forgot to destroy things. Also, you can add -auto-approve: it will not ask you whether you really want to do it. Yes, I'm sure I want to do that, so we can add -auto-approve. Let's destroy that
configuration, that previous
infrastructure, and let's talk about
those variables. Let's have a look at the
server configuration. I've got this instance. I've got the network
interface for it. Maybe in this situation,
it's not that clear why we would
want to use variables. Let me change to what we used to have previously. We had
something like that. Our config was much simpler and we had multiple instances. I will copy that, paste it here, and this will be, let's call it something shorter: my server one, my server two, and also here my server one, my server two. If you have any experience with any programming language, you know that you would use a variable in any place where you have repeated information. What's the repeated information here? Well, I can see two things. It's the AMI, because we have it here for server number one and server number two, plus the instance type, which is t2.micro. Because if I wanted to change that instance type, I would have to go to every single line and replace it with t2 or t3, et cetera. Well,
with two servers, maybe it's not a
problem, but what if I have 1,000 servers. So that's where we can
use those variables. And how do I use variables
in Terraform, with HCL? What I can do is copy, for example, my AMI, copy it, and replace this value with var dot, and let's call it my instance type. I could also call my variable AMI, but I want to make it different, so you can clearly see what is going on here. And then at the top, maybe, I will add another line. I will write variable, and my variable name has to be in quotation marks. I called my variable my_instance_type, and I will use curly brackets. Within those curly brackets, I can specify the default value. I say default equals, and I copy-paste my AMI. However, I want to
have it as a string, which means I will
add quotation marks. Again, we basically
have the same thing, but now it's stored as a variable, and now I can replace all
of those lines that have AMI with value of that variable or with that
variable name, I mean. Let's replace that
instance type as well. It's currently t2.micro. So I will say I want to create a new variable, and I realize I did something a bit silly: the first one is not really an instance type. The AMI is more like my instance OS, let's say, so let's call that one my_instance_os, because the AMI is the operating system, and I want to call the instance type variable my_instance_type. Yes, that's more accurate. Or maybe I will just call it ec2_type, so it's different, and also here, ec2_type. Now I create another variable, I called it ec2_type, and I specify the
default value here. Or you know what? Maybe I
don't. What if I don't? Let's see. Command S, I saved all of that
and see what happens. I've got a default value for the AMI, but I don't have a default value for ec2_type. So let me clear again and run terraform apply. Let's press Enter. We can see Terraform found the variable for the AMI, but it couldn't find the value for the variable called ec2_type. So it simply asks us: Enter a value. What do you want this variable to be set to? So I'm saying t2.micro. As I click Enter, Terraform now should have everything it needs. And it says, again, five to add, and if we check what it wants to add, we can see it wants to add an instance where the instance type is t2.micro. All right, but I say no. Apply cancelled, and I want to specify that default value. So default equals t2.micro, and if I Command S again and run terraform apply now, it has all the
information it needs, it doesn't have to ask me for the value for that variable. It goes straight to this stage. All right, but I say no again because that's all I wanted
to say about variables.
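A sketch of what the variables and their usage look like after this lesson (the AMI value is a placeholder):

variable "my_instance_os" {
  default = "ami-0123456789abcdef0" # placeholder Ubuntu AMI
}

variable "ec2_type" {
  default = "t2.micro"
}

resource "aws_instance" "my_server_1" {
  ami           = var.my_instance_os
  instance_type = var.ec2_type
}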
24. tfvars file explained: Now when we talk
about the variables, there is one more file that
you might be interested in. And it's a file with
TF vars extension. So let me show you create
let's call it maybe Terraform, but we will use dot
TF vars extension. No TF but TF Vars, yes. And this file is only
to keep the variables. But somebody might ask, Mark, but we've got the
variables already, yes. We've got them here.
So what's the point of another file that will
keep only variables? Well, there are various reasons that you might want
to use the tfvars file. Let's say, what if I have a variable that is very important to me, like a password, and maybe you've got a default value as well: my secure password. Maybe that's something you don't want to keep in GitHub or GitLab and share with just anybody, because anybody who has access to GitLab would also see those passwords. So what you can do is just leave it blank; that's what we did with ec2_type initially. Remember, you can keep this password blank and instead go to the tfvars file and only say password equals my secure password, and then maybe you want to keep this tfvars file somewhere else, in some kind of vault that only some engineers or maybe
only you have access to. So nobody else will be able
to grab this information. But also, maybe you want the tfvars file simply to be able to override, in an easy way, the default values that you have in your variables. So let me show you what I mean. Let's remove this password maybe, and stick to what we already have. I will remove this. We've got ec2_type. Maybe I will just change the ec2_type, or just copy it. The notation is slightly different, as I think you noticed already. If we go to the tfvars file, I will just paste it here. I'm not saying it's a variable, and it's not in quotation marks, because we already know it's a variable; it's the tfvars file, it's only for variables. I will only say equals, and then as a string I will put maybe r4.large. That means I want to override whatever is in the variables file. The default value is t2.micro, but I want to override it with the r4.large value. Let me just Command
S. Let's see, clear, and if I run Terraform apply, let's
see what happens. We've got five to
add, but what is it? We can see it wants to
create the instance, but this time, the instance
type is indeed r4.large. Be careful with that, because this is not free tier; you would have to pay for it, yes. But if I say yes, the instances have been created, and if we check them in the cloud directly and refresh, I can see they are no longer t2.micro. They are indeed r4.large. But to be on the safe side, let me destroy them immediately. It might cost me something, but it will be a negligible amount of money: terraform destroy -auto-approve.
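A sketch of the terraform.tfvars file used in this lesson; a value here overrides the default in the matching variable block:

# terraform.tfvars
ec2_type = "r4.large" # overrides the t2.micro default; not free tier, so destroy it quickly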
25. Create 100 servers with 'count' meta-argument: Now let's say I want
to create not one, but I want to create 100
servers in Terraform. How do I do that? Well, maybe, first of all, let's
get rid of that. I don't want to have
them as r4.large. Or maybe, you know what, I will just remove the entire file. We don't need it anymore. Move to trash. So t2.micro is our free tier default value for the instance type. Okay. And now I want to
create 100 servers. Somebody might say, Yeah, Mark, you just have to copy this like 50 times and this will
give you 100 servers. Well, that might
be the solution, but definitely not something
we want to do in terraform. In fact, I don't even
need two of them. This is already
repeated information. I only need really one
resource, AWS instance. I can call it just my server. This we can leave it as it is, and I can use something
called a meta-argument. Let me Command S to save this, and let's Google "Terraform meta-argument". I don't need the depends
on meta argument. I really need count
meta argument. But let's maybe click this one. And in the terraform
documentation, you can see many
different meta arguments. And the one that I'm interested
in is, as I said, count. Although we could use
actually for_each as well, but I will use count this time. What I want you to notice as well is that you don't always want to go straight to the Terraform AWS resource pages. What you should really do is go through the general Terraform documentation and see some other things that are possible to achieve in Terraform but are not AWS specific. Terraform provides
perfect documentation for every single bit, and it also gives
you the examples like here, create four. If I use count four, it will simply create four
similar EC2 instances. Can it be easier than that? Count four? I don't
need four. I need 100. What I'm saying in my
code is count equals 100. That's all. 100 servers
will be created. But if we go back to this
count meta argument, you can also see
something like that: count.index. What's that about? An index is also a programming-language-specific thing. Every item that is created by Terraform will simply have its own index number. We can use that to have different names for each of those servers that are going to be created, because otherwise they would all have the same name. Maybe I will show you instead of trying to explain it; it will make sense then. I will copy that count.index. I will go back here, and I called it my server, not one of course, since I have 100 servers. I don't want them all to be called my server one. I will use maybe a dash and then that count.index, something like that, or maybe even something like that. Doesn't really
matter. Let me just command S. Let's
see what happens. Let me clear. We do
Terraform apply. Before maybe we do, let's
go back to instances, and let's close all that crap. It's too many things. Now, let me refresh that. Just wanted to say or show you that we have no
instances running. I mean, we've got three, but
they are terminated now. We were playing with those in the previous video. So if I choose instance state equals running, I
should have none. Let's remove this filter. We've got three terminated. Let's go back to our code. Maybe I will make
it slightly larger, terraform apply and press
Enter. Let's see what happens. It says 103 to add. That sounds about right
because we have 100 instances, which means 100 virtual servers, plus the VPC, plus two subnets. If we check what's here: the VPC, subnet two, subnet one, and then we've got our instances. And this is the index number I was talking about. For example, this one is 99. I just wanted to double check it's t2.micro. Yes, that's fine. If we go further up, we've got 98, et cetera, and somebody will ask, why is it 99 and not 100? I don't think I can scroll up that far. No, I can't. But the thing is, this index number starts counting from zero by default. So zero is really server number one, and 99 is server number 100. Yeah. Well, I could change it to add plus one, but there is really no point; it would just start from one and finish at 100. That's not my point
really. We see that it's going to
create 100 servers. So I just say yes, click Enter and they start
being created. We can see the order is random; it doesn't start from zero, one, two, three. It simply tries to create them all at the same time, or in batches, like a bunch of servers first and another bunch next. Maybe if we go... it's completed. We can see 13 seconds, so we should already have some. If I refresh quickly here, all right, we can see some of them already up and running. So let's go back to our code. We've got 68 of them, roughly 13 to 22 seconds; I can see the creation time for each server, like here, 22 and 13. More and more of them are created. So if I go back, 28 already; if I refresh, 37; refresh again, we get more and more servers; refresh again. Oh, 99 is being created.
which is funny because it says API error,
VCPO limit exceeded. You have requested more VCPU than your current
VCPUO limit of 32. I was not aware of
that, to be honest. I didn't know that there
is something like that. I used to be able to create 100 servers in the past.
That wasn't a problem. It says now contact Amazon. Easy to request to request
an adjustment to this limit. So it looks like we
have some limit. So how many servers
were actually created? Let's have a look.
Oh, it stopped at 37. Hmm. But you know what?
That wasn't the point. We are not going to request
some changes to the limits. My point was it's simply very easy to use those count
meta arguments or other meta arguments to create multiple things of the same
type or of similar type. We've got 37 instances, so that's good enough for me. I don't really need that
limit to be adjusted. So I just say, terraform
destroy -auto-approve, press Enter, and we will destroy what we have there. When you destroy, you will see that Terraform destroys only what was actually created; you can see now 37 to destroy. Terraform knows how many of those servers were created, and it knows exactly how many it has to destroy, because it keeps all that information in the tfstate file. We can see they now
have been destroyed. So if we go back and refresh,
we shouldn't have any. I mean, they've been terminated. So again, instance state, if we check for
instance state running, we shouldn't have any, and we don't have any. That's cool.
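A sketch of the count version of the server resource; count.index starts at zero:

resource "aws_instance" "my_server" {
  count         = 100 # watch out for the vCPU quota on new accounts
  ami           = var.my_instance_os
  instance_type = var.ec2_type

  tags = {
    Name = "my-server-${count.index}"
  }
}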
26. How to use Terraform with GCP, Azure and other providers: So now you have some
terraform knowledge. The question is, what
do you do about it? What do you do next? Yes? How do you learn even more
about Terraform? The fact is, the Terraform documentation is really all you ever need. Because if you go back, I mean, I could go with you through every single example here, how to configure this or that in Terraform, et cetera. But what's the point?
You've got here all the documentation
you need and you even have the examples
ready for you. I can't see the
point of doing this. Secondly, I don't even
know if you want to work with AWS using Terraform. What if you, for example, want to use GCP, like Google Cloud
provider, I mean, how do you use
Terraform with GCP? Well, the answer is,
you simply go to Google and you say Terraform resource, GCP, Google Cloud provider, and then instance, for example. What do they call
data instances? It's Google Compute instance.
You just click on that. And you have the examples here, how to use Terraform
with Google services. We've got google_compute_instance. It's called default, and you can change this name to whatever you want. The machine type is slightly different, something like n2-standard, but that's very easy to figure out. You just have to check on Google Cloud what exactly it is and which instance type you need. You've got the zone, you've got the tags. Everything is very similar to what we used to have in AWS. Basically, just refer to the documentation, go through it, and play with it.
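As a rough sketch only, adapted from the registry example; the machine type, zone, and image below are illustrative values you would adjust for your own project:

resource "google_compute_instance" "default" {
  name         = "my-instance"
  machine_type = "n2-standard-2" # illustrative machine type
  zone         = "us-central1-a" # illustrative zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12" # illustrative image name
    }
  }

  network_interface {
    network = "default"
  }

  tags = ["web", "dev"]
}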
particular provider, check the Terraform documentation
to see what's available and what is not like we did with those count
meta arguments. Because this is basically
the documentation for entire Terraform and you
can see the resources, but you can see meta arguments. You can see modules. Modules are like
equivalence of functions in Hoshi Cor configuration
language where you create one module you can reuse it changing parameters
and attributes. But I will not go through this because this is not for
the beginners and you will only work
with that when you work with huge infrastructure. Then you might want
to use modules. But my point is from here, you can go anywhere you want, work with anything you want. You just need to use
this documentation, and it will tell you exactly how you use that
particular resource or that particular function
or utility within the era so remember that if you don't want to stop here
just on Terraform, you can learn many
other DevOps tools. If you just go to automation
avenue.com platform. You will find AWS training, you will find Python
programming training and many, many more information,
if you want to progress towards DevOps
or Cloud engineer, that's perfect place
for you to start. If you followed this
material and you played actually with
infrastructure in AWS, maybe you want to add maybe load balancer,
maybe target group, et cetera you want to add more and more elements to your infrastructure
and play with that. But all of the information
you need here is in the terraform documentation with examples that are ready for you. You just copy paste them and amend whatever
you want to amend. I hope that's helpful and
thank you for watching.