Transcripts
1. Course Introduction: My name is Dave Cowen. I have been in the
infrastructure and DevOps and software
development world for something
approaching 15 years. And I've worked for all kinds of interesting companies. Packer is something I really believe in — so much that I worked for the company that makes it. I don't work on Packer itself, by the way. I think it's small and easy to learn. My promise to you with this course is that I'm not going to
exhaustively like read slides that take
you slowly through each part of the README, like the official docs. You can read the official docs yourself to fill things in. My pledge to you is that
I'm gonna give you an absolutely realistic
project that shows you the real world usage, kind of what you need
to be thinking about. And you'll be able
to intuitively feel the advantages
that it gives you. And I promise to do that in a pretty small amount of time. I want to keep this course
short, sweet, practical. So again, if you're
looking for like the exhaustive course
that will give you everything — and that you'll probably never end up watching — this isn't it. If you want, like, just give me the thing that I need
to get started with this in a realistic
project that I can clone, copy, reuse. This is the course for you. So if that speaks to you, I hope to see you inside and
yeah, let's get started.
2. Why Packer? Why Infrastructure as Code (IaC)?: So for those of you who don't really know why
you would want Packer, let me just give you a very
quick value proposition, maybe not just for Packer because you already Googled
it and you're here. But for infrastructure
as code in general, which is kind of the beginning
of what Packer gives you. So, before Packer — or having something that turns manual configuration into a machine image — let's think about
how we would solve, like, let's say, a
real life problem. Let's say there's some weird
security vulnerability. You hear about it on a Monday
and you realize, Oh no, our WordPress hosting server needs a change to the
web server config to disallow a specific version of TLS so that we're not
vulnerable, fine. So you would go into your
documentation, right? Because this is like your
manual documentation — you do everything manually. Everything is bad. You cry yourself to sleep every day after coming home from this job. At your job, you would have to basically figure out where this file is — where in the instructions — okay, it's in the main
NGINX config file. Don't worry if you don't
know about this stuff. It's not really important
for this course. Just worry about the Packer bit. But you'd have to go
in here manually. I mean, you're logging
into a production server. You're now messing around
with the configuration. Let's say it's like an
SSL protocol that you have to comment out — let's say 1.2 is out, so you comment it out from here to here. And then, now what? You have to restart NGINX. I guess you have to
hope that this is your only production server. If you have ten production
servers — yeah, you're kinda screwed. Nobody is happy. This is bad. This is a bad way
of doing things. There's no good way to test this. In two months, when you log into your production server, there's no record of why the
NGINX config is this way. Maybe you forgot to change the actual
documentation you used. So new servers are still
getting set up incorrectly. We don't do things manually. That's not what we do. We're professionals. So Packer is kind of the way that you take this from being totally
manual to being automated. And of course, yes, I'm going to show you all
the details in this course. You'll see it step-by-step. But the important thing
right now is kind of that overall process
changes from the sad, sad world where you're logging into production and changing things to this
much happier world where, in git, in your Packer project for
your WordPress servers, you go into config. You see there's an
nginx.conf here — looks like the main config for NGINX; seems cool to me. Here you find your
SSL protocols line, you make the change here. And then instead of
logging into production, you simply do a new Packer build. Essentially, you commit this, and presumably your automation/CI system picks up that change in git and reruns Packer to create a new image. And then bam, you have a new image to reference in the rest of your
infrastructure. And then you can do whatever
process you do to rollover your servers in production to these new servers that have
just this one thing changed. And you might think
to yourself, well, that seems like a lot of extra steps for really a
one-line change in one file. But as soon as you hit
the reality of what production operations are like — and maybe having more than one server, like hundreds — this becomes the only sane
way to manage change. Instead of manual change — where people need to remember things and follow specific instructions, things can get forgotten, things can sort of disappear, you don't remember who changed something or that something was changed — in this world, every change has a git commit
attached to it, right? So you change this file,
you have to commit it. You put the name
of the ticket or the reason you changed it
or whatever in that commit, that kicks off a new build. And then that is a
brand new Build ID, even though it's almost
the same machine. And if someone's like, Oh, why is something wrong
with this machine? Well, they can see every config change that went into it, going back all the way to the beginning — our first run of this Packer script that
created an image for us. So I hope that explains
some of the why Packer and specifically that it
gives you a feeling for how the workflow that
this enables is much, much, much better and
much more professional than the workflow that most people and even companies start with — which is, right, written instructions. You Google a thing that says log in and do this manually. It's like: no, no, no. That's not what we do. That's not what professionals do. They might
test something manually like you might do this locally
to just check if it works. But then you grab all of this, stick it into a script or otherwise automate it in some way, and then automate around that again with infrastructure as code — making sure that every change is happening in git, making sure that you can
look at an image and know exactly what is in there
without logging into it. That makes sense. Okay. So for those of you
who were not sure why companies use Packer, that is it. I would say that's
the main difference. It also enables all this
other stuff which we'll talk about throughout
the course — which is, really, once you
kind of wrap something like this in
automation and you go from a manual to an
automated process, you can now integrate it into existing automation for
all kinds of other stuff. So now, you know, if your company grows and your security department comes to you and says, we need to automate security testing for everything that makes it into production — well, you can say: we don't log into production to do stuff. During the build, as soon as Packer finishes a build in your CI system, after you make a commit, you can just run your security tool
and make sure that that image is up to
snuff for security. And then you can tag it with security-approved or whatever, and then we know that's
okay to use in production. So it's basically
like no extra work for you as an operations
or infrastructure person. Anyway. So it's all about automation and that's where we're going and
that's why we do this stuff.
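To make that workflow concrete, here's a minimal sketch of the "change the file in git, not on the server" idea. The file path and TLS versions here are hypothetical, for illustration — they're not taken from the actual course repo:

```shell
# Hypothetical: the nginx config tracked inside your Packer project.
mkdir -p /tmp/wp-packer/config
printf 'ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;\n' > /tmp/wp-packer/config/nginx.conf

# Instead of editing the live server, edit the tracked copy
# (here: drop TLSv1.1 from the allowed protocols).
sed -i.bak 's/TLSv1\.1 //' /tmp/wp-packer/config/nginx.conf
cat /tmp/wp-packer/config/nginx.conf

# Then: git commit -m "TICKET-123: drop TLSv1.1" and let your CI
# system rerun `packer build` to produce a fresh image.
```

The point isn't the sed one-liner — it's that the change lives in version control and produces a brand-new image, instead of a mystery edit on a production box.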
3. Project Packerfile Overview: Let's talk about the
first thing you do after cloning a packer project
or starting one. The first thing you're always gonna do is run packer init in this directory — assuming you are in the directory that has a Packer file in it. Now because I've already
run this before, there's no change needed
and it won't do anything. But the very first time
that you run this, it will certainly
install some things, namely the plugins that you have required for your codebase. So without further ado, let's
go through how this works. I'm going to give you a
little tour through this. This source is the source
image we're going to use. That is, the image type is going to be an amazon-ebs image, Ubuntu. And the way that we're going to find our source image is with this filter, where we look for something
that has this name pattern. So you can see we're
using 22.04, currently the latest LTS, or long-term support, version of Ubuntu. And this star is a
wildcard if you're familiar with regular
expressions at all, star means everything. And specifically
the most recent one that matches this
everything star. This will be replaced by a date — so this will be, like, the April image, April whatever, 2022; in June, there'll be a June image, and so on. This means that we don't need to change our code every single time the image is updated; we're always going to be using the most recent version of 22.04 whenever you run this code. The owners filter, that
is the canonical user. You're guaranteed to
get an official image, not some evil hacker's pre-rooted image. The SSH username basically just says: okay, when we're configuring this image, ubuntu is the default username that we're going to use to connect with SSH. And the temporary key pair
that I want Packer to create. One of the things
Packer does is create a key pair just
for my Packer run, using this curve. If you were using ssh-keygen — that tool to create a new SSH key — that's literally what this is doing. This is basically what it's filling in: the -t part. So this is like running ssh-keygen -t ed25519. That makes sense. Okay, so let's get to the actual
build instructions here. By the way, this is not a complete explanation of Packer. This isn't absolutely in-the-trenches, practical, street-fighting Packer. There's more to Packer than this. I'm just kinda showing
you the minimum you need to know to get a
working Amazon image here. So what are we going to do
when we build this image? Well, we're gonna tell Packer that we're going to
give this thing a name. We are going to provision it with a series
of provisioners. You can think of these almost as the steps of our build. We're going to feed
it this first script, so this will get uploaded to the machine and
then executed on it. Then we're going to add
some config files here. So these are essentially
file copy operations. This is all happening over SSH, where it's going to
look for something starting in this
directory, config — you can see that's here — the WordPress NGINX config. Well, that's this file that's going to get copied to the machine, into the ubuntu user's home directory. Because remember, we're using SSH username ubuntu. That's what we're going
to log in as here. The first step is kinda
depositing all of this stuff in the ubuntu user's home directory, because that's actually where we have write access without making this ugly and complicated. We do this in two steps: shuttle all the files up there to the ubuntu user, and at the end, move all of those over to where they're going to be, with sudo. So now that we have the file written in the home directory, we're going to actually write it to its final destination — that's the NGINX directory, nginx.conf, and so on. The final thing
we're going to do — much like in the course, where we edit all these config files, install all these services, and do all this other stuff with the base config script. I separated this into a separate script just because it's a nicer way to do it. We can trust that we have
this whole platform setup. And just like section seven of the course, this script mirrors that: we're actually just setting up the WordPress application, the WordPress-specific
config files, et cetera. So that's the end
of this section. In the next section, I'm
going to quickly walk you through what's actually
in all of this. Let's actually look at the code.
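For reference, the source block being described looks roughly like this — a sketch, with the AMI name filter and Canonical's well-known owner ID written out as I'd expect them, not copied verbatim from the project:

```hcl
source "amazon-ebs" "ubuntu" {
  source_ami_filter {
    filters = {
      name = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
    }
    owners      = ["099720109477"] # Canonical, so you get official images
    most_recent = true             # newest AMI matching the wildcard
  }
  ssh_username            = "ubuntu"  # default user on Ubuntu AMIs
  temporary_key_pair_type = "ed25519" # like ssh-keygen -t ed25519
}
```

This is just the source half of the template; the build section that uses it comes later.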
4. Packer Primitives and Terminology: Let's talk about the
essential Packer primitives and terminology that you
actually need to know to use it. These primitives are the building blocks that make up Packer — the ones that you've got to understand to know how Packer works, and really how it wants you to think about
image building. I'm covering this in
the opposite order that the documentation does because I think it's actually easier
to understand this way. We're gonna be
going from the more general to the more specific. The terminology page in the docs is the one that
we're talking about here. So when you look at a
Packer template file — and that is the first bit of terminology here — this kinda describes
the entire process. So this is the one place where
it all comes together with all these different keywords,
provisioner, source — you might see data source, or post-processor, or build. You kinda wanna
know what this is. So I'm going to explain it all in one crazy paragraph that describes this, and then we'll go through each term piece by piece. So when we run this, our command tells Packer to read and execute the instructions that it finds in a Packer template, which is that file we just looked at. That template contains
one or more builders which know how to build an image on some
platform like AWS. Packer finds the build
that we defined in there, which is a task that
creates an image. That build is really
just a nice name wrapped around a bunch of provisioners which do
the actual hard work of configuring your image. If all of that succeeds, you end up with
an artifact which is usually like a machine
image of some kind, like an AMI on Amazon. That's the core process. There's an optional part at each end of
that core process, which is if you want to change
stuff at the beginning, like if you want to
bring in preexisting outside data for any
part of this process. Like an existing machine image
that you've already built or secrets that you've got stored in your
Cloud Secret Store. You can do that and you use data sources to
accomplish that task. If you want to modify
stuff at the end, when you've already
built an artifact, you use what's called post-processors to do that. So let's define each of
those primitives in depth. Templates are essentially Packer config files, in HCL or JSON, which define one or more builds by tying together all the other
Packer primitives. It's the central place for
Packer config to happen. And you can see that in
our template, the builds, those curly braces that start on line 27 where
we define the build. They wrap everything, make totally encompass everything
else that we're doing. All these other
primitives in here, they're all inside of a build command is a
sub-command for packer, which tells it to do
stuff like Packer in it. Like please download all my external
dependencies and plug-ins. Packer, funct FMT
right format my code, Packer, inspect Packer,
validate Packer console. And everybody's favorite, the
one we're gonna be using, which is packer build, which is just, please build an image based
on this template. A builder is a Packer component, either built-in or a plugin (it's mostly going to be a plugin), that knows how to create images on a specific platform — a Google Cloud image, an Azure image, a DigitalOcean image, or Amazon, which we're using. A builder is what wraps
up all that kind of API calling complexity and code that's specific
to each platform. Like you make very
different API calls to Google Cloud than
you do to Amazon, even though you're kind
of doing the same thing. So the builders
are the modules or plugins that wrap all
that complexity up. So that all you
need to worry about is what's the builder
called that I need? And what parameters
does it take? A data source is an optional thing that fetches data that you might need. This is one of the things that you would define at the beginning. This could be an alternate base AMI — a machine image that you want to use — or a secret that's stored in your cloud's secret manager, that kind of thing. And then we come to the build. And that is the single task
that produces an artifact. And as you saw before, it wraps everything — the sequence down here that's doing procedural stuff, modifying things, and building and configuring our image. An interesting fact
is that you can have more than one of these
running in parallel. So that's one thing
that you can do. If you are producing images
for multiple clouds, maybe — then you could have a
build task for each one. A provisioner is what you saw inside of that build. It's the primitive that
you actually use to make configuration changes
to your images. Builds take some kind of source. They apply provisioners to
that source to modify it. This is things like
copying files, installing packages, mutating the state
into something else. And then Packer creates
an artifact from that — the image. I keep saying image or artifact interchangeably. The artifact is just what
your build produces. And that's simply,
in our case, an AMI. It could be a Docker
container image, it can be a VMware image. It could be so many
different things. And we will go through
all the builders that produce different
types of artifacts. Well, we won't go through all of them — I'll show you the ones that I use the most. You can kinda take
it from there. Post-processors are the other thing that we talked about — things that let you do stuff after an artifact has been created. I don't use a ton of these
usually in real life, but they really
make it easy to fit into any process that you
or your company have — like creating a compressed version of your image, or creating a manifest file that tracks each build at build time, or whatever. It can also be things like uploading your image to somewhere you need it. So to review for this video, I'm going to read
that crazy sentence I read before again. And now you will know exactly
what everything means. Again, you run Packer
with a command, which tells Packer, in the
case of the build command, to read and execute the instructions in
a Packer template. That template contains one or more builders
which know how to build an image on some
platform like AWS. Let's switch over to the code, for even greater emotional effect on you. Here, Packer finds the build that we defined in that template, which is a task that creates an image. That build is really just a nice name wrapped around a bunch of provisioners
like you see here, which do the actual hard work
of configuring your image. If all of that succeeds, you end up with an artifact. That's the core process.
5. Packer Template Blocks: Let's talk about the things in this config file that are not covered by the primitives
that I just explained. The first is the packer block, the second is the source block, and the third is the build block, which I've already
kind of alluded to because it relates to
the build primitive. You'll notice me
talking about blocks. That is one of the ways
that packer works. It consumes this template
file by looking for these blocks that we've defined in HCL or JSON. The packer block contains Packer settings, including a version number and your required plugins. These are almost always gonna be the builders that you are using — in our case, it's the Amazon plugin. And we define a specific version and kind of where that is, in case it's outside of the HashiCorp-hosted place. So like if it's an
open-source thing on GitHub, that's where it's going to be. The source block has
this interesting syntax that you might remember from Terraform if
you've used it before. But it's basically the
keyword, which is source. Then it has the builder type — in our case amazon-ebs — and a name for it, which is going to be local to this Packer template. So from this, Packer knows what to look up and what shape this is going to conform to, more or less. And this is what we're going to reference it as in this build. You can think of each builder like a function in programming: different builders require different parameters. Essentially Packer calls them
configuration attributes — that's all this stuff inside. So like, amazon-ebs requires some of these things and others are optional. It's gonna be different if this is a GCP builder or a DigitalOcean builder or a Docker container builder. The configuration attributes are just the stuff — optional or required — the information you can pass in to get the exact source image, or whatever you want to work with, during your build. Then finally, this is all
passed into the build block. But the specific part that I want you to focus on is this: you can see this really just ties together the naming convention above, with periods. So we're saying, okay, now we want to do a build. We're gonna give that a nice little name. And for the sources, we're going to apply this build to our source, amazon-ebs.ubuntu. And if we change the name of this here — say, because we have 27 different Ubuntu, I don't know, LTS versions that we're applying this to — that's fine; we'll just have to change it down here as well. Okay, so now you actually, at a high level understand
every single block from a configuration perspective
that's in this config file. And you also understand
the primitives. So really just based on that, you should be able at
this point to read almost any Packer template
file that you come across. Now, occasionally people do some pretty freaky things in their Packer template files, but these are the primitives that Packer knows about. And so it's all going
to come back to these.
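So, concretely, the coupling just described: if you rename the source, the reference inside the build block has to change with it. A fragment, with invented names:

```hcl
source "amazon-ebs" "ubuntu-2204" {
  # ...configuration attributes for this builder...
}

build {
  name = "wordpress"
  # This string must match: source.<builder type>.<name>
  sources = ["source.amazon-ebs.ubuntu-2204"]
}
```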
6. WordPress Project Code Tour: In this video, I'm gonna
give you a very quick tour of the project's starting state. So this is kind of
like what I'm giving you to start with
for this project. You'll find it here.
It's open source, it's from a previous
course of mine. And the thing I've already said, I'm going to say it again
is you don't need to understand every part of this. This is not like a Linux course, it's not a WordPress course, it's not a web hosting
on Unix systems course. But I just want you to
see the raw kind of manual stuff that you're gonna be automating in
this course with me. So I'm gonna kinda take you
through it at a high level — I'll just kinda tell you
what everything is doing. If you're curious and
you want to learn more, then of course dig in. But I'm just gonna give you the quick version here: alright, we're updating the package repository that we have on Ubuntu. And we're going to
install some packages, all the stuff we
need for WordPress: a database, MySQL — in this case it's actually MariaDB, but it doesn't matter; a web server, NGINX; a language runtime, PHP, which we're using; and an extra package that helps PHP integrate with the database. A little monitoring
tool, not important. Then I show a little bit of
manual stuff about services. It's not so important that
we start them necessarily, but we are going to be enabling these because when
the machine comes up, we want it like ready to host a website without
any other manual steps. So some services
have to get enabled. That's cool. It needs configuration. So there's a part of this other course where
we create a config file. And this is the main web
server configuration, just stored as a file — very traditional for Linux: plain text files. We do the same exact
thing for PHP, so for the language runtime, and you can see that here, a little bit more stuff. All these commands are
really just like moving files around, editing files. The last thing we're
doing is creating a website user that the
website is going to run as that enables my students from the other course to actually run multiple sites on
a single server, which was be the normal
way that you do this. It's like creating a user, creating a random password, kind of setting
things up securely. Then the meat of this project really is going to be
in this file here. And that is the actual
process of setting up the WordPress application
now that this kind of hosting platform has
been configured. So this is kinda the
application that's running on top of everything
that you just saw. You can see where I've
written docs for this. You can read through this
in depth if you'd like. But we're going in, creating a system user, kind of hooking
everything up, creating a directory for
this website to go into. We are creating a specific
web server config file just for this site that goes
beyond the basic config. And then we do some more sysadmin-y things: move a file, delete a thing, add something, come up with a random string for a password, run some commands in a SQL shell, download a thing, delete another thing, or, you know, decompress and
unarchive something, restart and mess around
with some services, change a file, that
kind of thing. These are all things
that Packer can do for you automatically, or that you can do with a shell script or anything else. There's nothing here that's not automatable. So that is the high-level
view, in case you're curious, of what this project you're kind of Packerizing is actually doing underneath the covers. Now again, for the 20th time, I just wanted to be absolutely
sure that you know this. Don't feel weird if you
don't fully understand everything I just showed
you, you don't have to. One of the wonderful
and amazing things about Packer — really, about the automation that we're doing in this course — is this benefit: once something is wrapped in a
layer of automation, not everyone who uses that
automation or kind of consumes what we're
wrapping up needs to fully understand
what it is wrapping. In this case, all this
manual stuff and all these config files and every
directive and every file, you don't need to understand it. All you need to
understand is that it's a file and it
goes somewhere. And if it goes somewhere, and then everything's
configured correctly, this thing is just
going to work. That's all you need
to know about it. Of course, if you're
curious and you want to dig in,
That's wonderful. But I think that's one
of the great advantages of what you're doing in
this course is you see that you don't need to
fully grasp the details of every single thing
because you're just building an automation layer, which is kind of an
abstraction layer around this. Okay? So you can perfectly well understand the Packer thing that
we are doing to this, but you don't need to
understand this perfectly. Okay, so I hope that has allayed your fears and
made you feel better about not fully grasping
every single command in here. And if you do, great. It's not a particularly
complicated setup we have here, but
it's very common. You'll see this is, this is
the kind of stuff that is running production all over
the world in real life. So it's very, it's
very much like what you will see if
you're using Packer.
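One small, concrete piece of the manual setup just described — creating the per-site user with a random password — might look something like this in shell. The tool choice and user name are assumptions for illustration; the course scripts may do it differently:

```shell
# Generate a random password (18 random bytes -> 24 base64 characters).
SITE_PASSWORD="$(openssl rand -base64 18)"

# On the real server you'd then create the user and set that password
# (commented out here, since it needs root on an actual host):
#   sudo useradd -m -s /bin/bash examplesite
#   echo "examplesite:${SITE_PASSWORD}" | sudo chpasswd

echo "password length: ${#SITE_PASSWORD}"   # prints: password length: 24
```

Steps like this are exactly what gets folded into the scripts that Packer runs in the next section.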
7. Packerized Project Code Tour: So let's jump into the code. If we go down to the
build section here, let's just kinda go through it. The first thing that happens is there's a shell provisioner, which runs the AMI config script from the scripts directory. Okay, well, let's go to the scripts directory, to the AMI config script, and look at that.
You might remember some of these things from
the Bash scripting section. But I've basically just taken the core parts of the hosting platform
setup like I do in the course where I'm
walking you through and kind of explaining
Linux at this point. Well now you already
know how Linux works and we can just kinda
do this all in a script. So I've really just transposed all of those
commands into a script. This is a bash script, and you can see we install the MySQL server, NGINX, PHP, php-mysql, and php-fpm, then start and enable those services so that they start at boot. In fact, I don't think I actually need to start them, because I'm only going to image this machine, but it doesn't really matter. Then I install a bunch
of PHP extensions. These are the ones
you need for 22.04. These are some extensions
that I sort of have dangling, pending
further research. But things work without
them. It's totally fine. These don't exist in 22.04, or have been subsumed into other package names. We do some NGINX config — some of the stuff our NGINX config depends on, like creating this directory, etc. We do standardize on something slightly older, which is how we run PHP-FPM. I may make a commit
to change that. You don't need to worry
about it too much. Cool. So that's the
end of the script. So you can see that
that pretty much brings us to the end of
managing services. So we'd start installed, started and enabled the
services that we'll need. Let's look at what the
next provisioners do. So you can see now we're moving files. This is the main nginx config file, which you're surely familiar with by this point. Then this is the WordPress site config file, which is going to kind of front the traffic for your application server, your PHP server. The PHP-FPM config for
your site comes over here. Let's just have a
quick look at those. So that's the main nginx config — you're very much familiar with this. This is what makes it work; that's why we do it — create that cache directory. The site config file for NGINX here — you're totally,
should be totally used to this from the course. Now, in the instructions, I have this as a shell variable, and you can change that as you work with this stuff. Obviously, if you're setting
this up for yourself, you would simply clone this and then swap out tutorial-linux for whatever domain you're going to set up the site on, whatever user you're going to use, et cetera. So if you literally just find-and-replace tutorial-linux in the Packer directory with whatever your site name or domain name is, then you should be pretty happy. Doesn't take much. Same with PHP-FPM. This actually doesn't care
about the domain at all. Just your user's home
directory, that kind of thing. We move these into place — pretty much self-explanatory. And then our final provisioner actually installs WordPress. And you'll see setup
WordPress site. That's like this whole process
that we're doing here in the original GitHub
instructions that I take you through
in the course video. There's a bunch of inline config files as well — these are some of the config files we were just talking about. Now we're going to do all of that simply in a script. And that script is the
WordPress site setup script. And this is sort of
an adapted version of what I'm doing in
that markdown file. There's a few kind of problems that I had to solve — for instance, setting your MySQL password is normally an interactive thing. So I simply echo that out so that you see it
during the build process. You can copy it during
the build process. This is not perfect. Like if you were doing
this professionally, you probably wouldn't
want this like in your build system
logs or whatever. But like for this course,
It's completely fine. If you're doing
this for yourself. For a small business,
you're not going to keep these logs around forever. They're going to literally
be echoed into a shell — something like this, where you'll see the Packer setup script logging something like: your MySQL pass is this. You know, not a huge problem. You're not like
saving that forever. So you stick that in your password manager and then you're pretty
much good to go. So other than that, this really follows the instructions very, very closely. We're simply downloading the WordPress application, unzipping it — unarchiving it, decompressing it — setting permissions on that home directory, and exiting. And at that point, pretty much everything should be set up. What Packer then does is, when it gets to the last thing
in the last build step, it's like, okay, cool. If this thing is
still responsive, then I'm going to assume
it was successful, and I'm going to
image this machine. And what it'll do is
build an AMI, an Amazon Machine Image, from whatever the current state
of this machine is.
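The password workaround described above can be sketched in a couple of lines of shell. This is a hedged sketch, not the exact script from the repo; the variable name and the message text are my own stand-ins.

```shell
# Generate a random 20-character alphanumeric password non-interactively,
# then echo it so it shows up in the Packer build output.
MYSQL_PASS="$(tr -dc 'A-Za-z0-9' </dev/urandom | head -c 20)"
echo "packer-setup: your MySQL password is: ${MYSQL_PASS}"
```

Fine for a demo build like this one; in a professional pipeline you'd pull the secret from a secrets manager instead of echoing it into build logs.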
8. Installing Packer: So let's talk about
installing Packer. You can just download a
binary if you'd like, and I think that's totally fine. That's from the
downloads page; just grab the
latest version for whichever platform you're on. And by the way, can I
just say I love that these guys are here?
It just makes me happy. If you're on OS X, you can use Homebrew
with these two commands. On Windows, you've
got binaries here. On Linux, you can add
the HashiCorp releases repo, and then install not just Packer but actually all of
the HashiCorp tools from there. I just want to point out that
the Chocolatey option on Windows is probably
the coolest one. If you want to be part of the modern package
management world that's been happening
everywhere else for a good long while, the Chocolatey package manager is probably
what you want. And as you can see, the package is maintained by a third party, but it's still possible
to install this way. So for myself, I have
already installed Packer and you can just see that in any Unix-like
environment, you can run "which packer", and that'll show you
where it's installed. You can see that I
installed mine via Homebrew. If you're downloading the
binary on a Unix-like system, then you'll probably want to
stick it into /usr/local/bin, and that'll take a sudo.
But that's pretty much
all you've got to do, and then you've got
Packer installed and we can go from there. I'll
see you in the next video.
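If you go the manual-binary route on a Unix-like system, the dance looks like this. The version number is hypothetical, and the last few lines simulate the result with a stub binary so you can see how PATH resolution behaves without actually installing anything:

```shell
# Real install (commented out; grab the actual zip from the downloads page):
#   unzip packer_1.9.4_linux_amd64.zip
#   sudo mv packer /usr/local/bin/   # /usr/local/bin is on PATH; needs sudo

# Simulation: a stub "packer" in a temp dir, prepended to PATH.
bindir="$(mktemp -d)"
printf '#!/bin/sh\necho "Packer v1.9.4"\n' > "${bindir}/packer"
chmod +x "${bindir}/packer"
export PATH="${bindir}:${PATH}"
command -v packer   # like `which packer`: prints the path it resolved to
packer              # the stub prints: Packer v1.9.4
```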
9. Creating an SSH Key in AWS EC2: Alright, let's quickly talk
about creating a key pair. If you already have an SSH key pair
that you can use for EC2, then you can skip this video. But for those that
don't: on the main dashboard, you can either click Key Pairs, or really
from anywhere in EC2 you can scroll down to Network
& Security on the left, click Key Pairs, and then
create a new key pair. I'm going to name
mine 2022-dcohen. We're going to make it an ED25519 key in the PEM format
and create that key pair. And you can see this just
auto downloads the file. Let's do a long listing of
our Downloads directory, with our key 2022-dcohen. You're going to notice one thing that gets a lot of
new users confused, which is that these
permissions, 644, allow the file to be world-readable, which SSH will check and not like at all, and it will simply not let you connect. A lot of newbies get hung up here and
that's totally okay; it's a weird thing that these files get created like this. Here's our .ssh directory with our
SSH stuff, just one key pair. All we're going to do is set the
permissions properly and then move the new key in
there too. So we're going to say
chmod 600 Downloads/2022-dcohen. Now you can see
we've removed all of these group and other
permissions. Sorry, I'm not highlighting
that correctly. So this is "other", anyone on
the system, and this is "group": the owner's group can't read it, and anybody else on the system
can't read it either. It's just the owner
that has read-write, and the owner is dave. Now we can actually
move that file into our .ssh directory. (I mean, you could have
moved it before too.) We'll list the .ssh directory
with a long listing, and that'll just show you that these are all
nice and clean; they have the same permissions. Now, the last thing I'll
show you with keys, and I think this is just a
wonderful convenience feature, is ssh-add. Your SSH agent is a daemon that runs, and when you connect to your EC2
instances or whatever, especially if you
have a lot of keys in your .ssh directory, SSH will just try the first
few keys that it finds, and then you might
be all out of tries. So people start doing
things like "ssh -i some_key user@host" or whatever. That's a very common
thing to see. One way to avoid that is to simply add the one you want to use to your SSH agent, and the way you do that is
with "ssh-add ~/.ssh/2022-dcohen". If there's a password
on it, it'll ask you for that password once and then keep it
decrypted in memory. What's really nice
about that is that if you're using that key across all of your
instances, you can now connect to all of
them without being re-prompted for your password each
time, or for each command, or stuff like that. Run "ssh-add -l" and you'll see it listed in there, and if you want to clear it out, you can delete it with "ssh-add -D". Cool. So that's sort
of the lifecycle. When you reboot your
machine, the agent will be empty again,
so this is a once-per-boot
thing to do. It's pretty rad. There you go. That's kind of what you
need to know about keys with Amazon, and I'll see you
in the next one.
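Here is the whole permissions fix from this video as commands you can run anywhere. The first two lines simulate the downloaded key file (which normally lands in ~/Downloads), and the agent commands at the end are shown but commented out, since they need a running ssh-agent:

```shell
# Simulate the key the AWS console downloaded.
cd "$(mktemp -d)"
touch 2022-dcohen.pem
chmod 644 2022-dcohen.pem        # how it typically arrives: world-readable
stat -c '%a' 2022-dcohen.pem     # 644 -- SSH will refuse to use this
chmod 600 2022-dcohen.pem        # owner read/write only
stat -c '%a' 2022-dcohen.pem     # 600 -- now SSH is happy
mkdir -p ~/.ssh && chmod 700 ~/.ssh
mv 2022-dcohen.pem ~/.ssh/
# Agent lifecycle (once per boot):
#   ssh-add ~/.ssh/2022-dcohen.pem   # load the key into the agent
#   ssh-add -l                       # list loaded keys
#   ssh-add -D                       # clear them all
```

Note that `stat -c '%a'` is the GNU/Linux form; on macOS the equivalent is `stat -f '%Lp'`.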
10. Creating an AWS IAM User for Packer: Alright, in this video, we are going to take
a look at creating programmatic access
for Packer in Amazon. The way we do that is by
using something called IAM: Identity and Access Management. It's a whole separate
dashboard and service in Amazon. What we're going to
do there is make a user that has API access; "programmatic access,"
they call it. And what that sort of spits
out for you is really just a key and a secret. That's the
automated login Packer is going to use
to identify itself: hey, I'm allowed to do this on
Dave's (or your) AWS account. Packer is then going to use those to spin up your instance, create a security group so that it can connect
to that instance, and create an SSH key so that it
can log into that instance. It also needs permission
to actually bake an AMI, to create an AMI from
a running instance. All these permissions,
because this is a Packer course and
not an AWS course (a full, production-grade AWS course would
be like 600 hours, and I would require you to do a three-month
internship with me before I called it done). This
is primarily a Packer course, so we're going to do this
in a simplified way. What that means is we're
just going to give this account
administrator access, which for production
purposes is way too much. It doesn't need all
that. In real life, you would cut this down
to just the stuff that Packer needs. Alright, that being said, let's jump in and just
make a little login. In Services, let's go to IAM, look at Users,
and add a user. Now, what we really
want is API access. So we'll name this user dcohen, and we want programmatic access; this is going to be API access. We don't need this to
be a new console user that can log in and use
this GUI that we're using; that's not what this is for. Name it, and make sure it's an access key for
programmatic access. We're going to do
this in maybe the laziest way, which is to just attach AdministratorAccess. This is not something you
want to do in production. In production, for Packer, you'd want to create
a Packer role that has only the
permissions it needs. So, ec2:* if you
want to be pretty rough about it, but more like: create instance, create AMI, create an EC2 key,
delete an EC2 key. You see what I'm
saying? I'm showing you the general
outline of how this works. So we're just going
to use admin access, which gives
you everything. But again, it's not a good idea
for anything production; this is the chmod
777 of AWS. Now you're only going to see
the secret access key once. You're going to want to copy
it and save it somewhere; I'm saving it in a buffer in my editor, along with the
access key ID. It's okay for you to
see this, because I'm going to delete
this key before I even think about
uploading these videos. But again, this goes in a password manager immediately if you're going to use it
for any length of time. Okay, so now we've got all the
access that we need set up. Can I just say, it's a
bizarre grammar mistake to have "the user username
have been created." It's so strange; it should be "has."
Who's writing this stuff? Amazon, don't you guys
make $70 trillion a year? You can't spell-check this?
11. The Packer Build: Okay, so you've
installed packer, you've created an SSH key in your AWS console that
you can use with this, you have created an IAM account
so that you actually have programmatic access
for Packer in AWS. And now you just want to run this and see what happens. I think that is a
great way to learn. Let's run this and
see what happens. Here are the steps you
gotta do to get there. You have done step one. And you can use the demo project here without writing
anything just to see what the process is like before you embark on your own
project where you maybe convert something
that you want to work on to a packer project. So you have API access. This is just the manual steps
that we saw in that video. Here's the moment of truth. You have to now export
this in your shell. So I've got these keys here. You're welcome to
try using them, but they won't work because
I'm deactivating them right after this video is made. I'm going to copy those and
export them in a shell. Now if you're not super familiar
with Linux, that's okay. I'm basically just in my packer directory here, the
project directory, and I am going to literally
paste these two lines. What that means is that
in my shell environment, these environment variables
are going to get set equal to my actual access
key ID and secret. And that means that
when Packer runs, it will automatically look in my shell's environment to see if these things are set to something
that isn't empty, just to
see if they exist. Then it'll try to use
those to log in to Amazon and do all the Packer
magic that it does. So now that that's done, we can run one of the packer
commands, which is init, which will go download
any required plugins for builders that it needs, like
the Amazon EBS builder. And just initialize Packer. And then we'll
continue from there. Packer is done initializing,
and that's just because I have developed
on this machine already, so these things
are already installed. But on your screen you
may see it go and download the Amazon builder,
that kind of thing. I'm going to do a video on
this a little bit later, but I've left in
some small changes that packer fmt would make; nothing functional or large. But these commands
are cool to run, so we can try packer validate. I'll show you packer fmt later. But you can see that basically
packer fmt just looks for formatting issues that
are convention, like Go linting, but HCL conventions; and packer validate actually looks
to see if you've got all the parts that
Packer is going to look
for in your template. So now we can go
do that is sitting in the directory that
this file is in. Of course, we can give
Packer the build command and presuming again
that we've exported these shell variables
like you have a valid, essentially login here, set
of access keys from Amazon. Then this will work. Now if you are on your doing your own project after this
and you're doing it in Azure, let's say that's fine. It's just, you're going to need the Azure builder
is going to require a different set of credentials there and your shell environment that
it's going to look for. Some providers
require a config file that's around or that
you tell it where it is. We're just talking
about AWS here. So let's run packer,
build WordPress, AWS Ubuntu, and
see what happens. It's going through
the whole process. You can see it's creating a
temporary key pair there. That's actually
a separate key pair, not the one we created; it's just
for Packer to log in and run provisioners with. The key pair you created before
is also going to be used: the instance is going to be
tagged with that key pair, it's going to get
uploaded, and you'll be able to use it
for an SSH login. So this is going to take a while. And you can see that
in the meantime, we can see an instance
that has been spun up. It's now running. It looks like it's still
in the initializing state, but Packer is actually already connected, because
you can see that we're running a
package update. This is the
apt-get update and apt-get upgrade
from our scripts, and it's looking like
what we expect to see. Just if you're curious, Packer creates all kinds
of other resources as part of the Amazon EBS builder. So it's definitely
hitting the Amazon API. It's creating an instance. It's creating a custom
one-off security group just for this Packer build. This is all
just going to crank along, and I might speed this up
because it'll take a while; you'll just watch this build go. You can see here's a
couple of steps: uploading the NGINX files. So this is another provisioner uploading that
NGINX config file, just into
the ubuntu user's home directory for now. It's run the Packer
setup script; all this stuff has happened. It pops out a randomly generated
MySQL password. We can keep that; let's throw it into our password manager. And now we're simply
baking an AMI. What that means is we
can look at AMIs and see that there's an
AMI that is pending; it's being baked
right now off of our instance. Our
instance gets stopped, Packer creates an AMI from it, and when that AMI is
actually complete, our instance will
get terminated. Alright, and you can see we are finished.
As soon as Packer sees that the AMI is no longer pending but ready, it knows that it can clean up everything it
created for this run, including the security
group that we saw before and the temporary key pair that you saw it create at the very beginning. And most of this time, really, as you saw, was
just AMI creation. So now if I refresh this, the status should be available, and the instance has
been cleaned up.
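The whole loop from this section, condensed into commands: the credential values below are obviously fake placeholders, and the packer commands themselves are commented out, so the runnable part is just the environment check that mirrors what Packer does at startup.

```shell
# Fake placeholder credentials -- substitute your own from the IAM video.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="exampleSecretKeyValue"

# Packer looks these up by name and checks they're non-empty:
[ -n "$AWS_ACCESS_KEY_ID" ]     && echo "AWS_ACCESS_KEY_ID is set"
[ -n "$AWS_SECRET_ACCESS_KEY" ] && echo "AWS_SECRET_ACCESS_KEY is set"

# Then, from the project directory:
#   packer init .        # download required plugins (e.g. the Amazon builder)
#   packer fmt .         # convention/style fixes, like gofmt but for HCL
#   packer validate .    # structural sanity check of the template
#   packer build .       # the actual build
```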
12. Using the AMI to Host a WordPress Site: Now that we have
created our image, you can see that we've
finished up here. This thing has echoed
out the MySQL password. It creates an AMI here and echoes out the name
that it's been given. Once the image is done, it terminates the
source instance. So the original instance
it temporarily spun up to configure and run our
scripts against has been torn down again. The security group that it
used to let itself access the instance, the temporary
key pair that it created: all of that stuff
is gone too. So this is a nice clean process. And you can see all we're
left with is this image. Now, if you were
using Terraform or something, you could reference this image not by AMI ID necessarily, but by source, where it's basically
a format like account ID plus name. If you told Packer to name
this something like name-dash-date, that's a nice way
to organize them. And just like we saw in this Packer config,
you would do the same thing in Terraform using a data
source where you just say: okay, the owner is going
to be me, my account; the name is going
to be whatever I named this; most_recent
is going to be true. That's another way to get
the most recent build, basically, or production build. If you were using
something like Terraform, that's a little bit beyond
the scope of this course, but: what does
the process actually look like if we launch an
instance from this? We'll just call this
tutoriallinux-web, and you can see it's
using this AMI. You can get to this from
the new instance menu and select the AMI here. Pick whatever you want
to host. I don't know, say I'm going to host a
medium-sized website, a couple hundred users. Maybe it's a store,
so it needs some memory. Maybe I'll say two
or four vCPUs and eight gigs of memory
for a larger WordPress site. You'd select a
key pair that you have. HTTPS and HTTP, since it's going to
be a web server. Allow, you know, maybe just from your IP or just from
your private network, from the VPC, whatever. For the volume I would actually go gp3. And for a larger
WordPress site you might have a lot of images or user-uploaded data, whatever; disk
space is cheap, so 5,000 gigs. This
stuff is not really
what you're paying for here. So yeah, that seems cool. We'll launch this
instance up and I will show you when
we log in to it. So this is now running;
it's probably still booting up, but I'm going to go to Networking and copy the public IP address. I'm going to ssh ubuntu@ this address. I'm assuming... yeah, okay, my SSH agent already
had the keys. Basically, if you
were using your default SSH key
on this machine, and that was the SSH key pair this instance is coming
up with, then that will just work. Otherwise you might have
to do something like this. You can see that this is just a sample key that I created, a PEM key that I created
in AWS and downloaded. If you're confused about how the keys and the
roles and everything work, then you're going
to want to watch my previous Amazon video for how to do this on
Amazon in general, because if you can't do it
manually on Amazon yet, it's going to be tough to do
this in an automated way; everything's going
to look like magic. Anyway, you can see that
we're now sitting on this host; there's the private IP. You can check that something
like NGINX is running, and I know it is because they still make tutoriallinux
part of the service name, I think. You can see, just as this booted up, that the PHP-FPM pool named tutoriallinux
is configured. So this is basically working. We could see the WordPress
site if we edited /etc/hosts to point at it. Actually, I don't think that will
work, because tutoriallinux.com uses HSTS, so the browser will insist on HTTPS
anyway. Long story short, this would now be a
literally just a fresh WordPress install running all
the things that we set up, kind of ready to
configure and go. So this is really
all the manual commands you run throughout the course
when you're learning it, replaced with one beautiful, sweet
WordPress instance: it's
running a WordPress server, ready for you to actually
set up and configure your WordPress site on. And yeah, it's a nice way of
just packaging this up, right? And anytime you decide, I don't know, you don't want
to deal with this anymore. You just terminate the instance
and everything goes away. Obviously, once you have sites
that are set up on this, you want to be snapshotting the instance and not just
terminating it, right? Because those are your
sites running on it. But yeah, it's a nice kind
of development workflow. And the fact that you can just quickly use Packer
like that, within a few hundred lines including
all the setup scripts, to have a cool little image that is
always available for you. I keep this around
just for testing, and to verify things if someone
reports a bug. Usually it's not a bug; it's just that they got confused about some
of the instructions. So I'll basically use the latest image, or I'll change
the source image to whatever they're using, and see whether it's just an image problem or,
usually, a confusion problem. So I hope that's fun. And I hope that
gives you some idea of one of the ways that
you can extend this, make it bigger, make it more
professional, and automated. If you know things
like Terraform or CloudFormation, or you've got some DevOps workflow at work, I hope this helps you
see how you go from a manual process, like "oh, I'm just scripting
this out while I'm figuring it out, working
on it command by command," to a repeatable build
through Packer, to a piece of a larger pipeline where
all you need to reference is an AMI ID, or really the latest image that's
named a certain thing. It's a nice way of
abstracting all this away. So I hope that's useful,
and I'll see you in the next one. Peace.
13. Packer Documentation and Development Workflow: Let's talk about how to
navigate the packer websites, specifically the
documentation, so that you can teach yourself to
fish after this course. Now, I've shown you
a very specific kind of introduction: it's
specifically about AWS, and it's specific to the
project that we did, this WordPress hosting thing and what you need for it. There's lots of other things
that you can do with Packer, but the general shape is always
going to be like this. What I
really like about HashiCorp tools is that they are sharp tools, the way a lot of Linux and Unix
tools are sharp tools. They do one thing and they
do it really well. They're flexible,
so you can tie them together with other
tools however you need to. The
HashiCorp Learn site is a good place to look at some more real-life
usage for Packer. So we've covered a lot of
the stuff in the AWS track. But if you want to
build Docker images, Packer can do that. If you want to build
on a different cloud, GCP, Azure, that kind of thing, or use Packer with
other tools, et cetera, you can browse
that on your own. What I'm going to show you
is packer.io/docs. This is, I think, the place that I
spend time when I'm actually building
a Packer project. Everything you're
going to need is here. These docs are fantastic. Like how do you want to
connect to your machine? Data sources? You're going to spend a lot
of time in provisioners. I just want you to know
that this stuff is here. And when you're lost, let's say you
forgot how to transfer files, something we do in this
course a few times, just go to
Provisioners, then File, and you'll get some examples. Now we're using HCL just
because it's a lot cleaner, but JSON is there too; you might want that for compatibility
with something else, some other part of your
pipeline or whatever. This gives you an example of the actual usage, and
it documents required and optional parameters,
explains everything in detail, and gives lots of examples of
specific use cases. I'm giving very succinct explanations so that you understand how a thing works in the context of this project that
we're doing here. But there's obviously
more to the docs, and you're not spending your time here to have me read the docs to you. I just want you
to know these are here and that you
should be using them. It's completely normal,
while you're building your own
Packer project, to have the docs open in one tab
and your Packer
project open in another tab. That's how
you work on it: alright, I
looked up the file thing, now I can write my
file provisioner here, and there you go, based on whatever
you learned from the docs. That's something you'll
be using far beyond the material in this course, and I hope that
gets you started for the future use that
you're going to have when I'm not here to hold
your hand through a project.
14. Useful Packer Plugins: Let's talk about Packer plugins. Until now you have used Packer plugins in this
project that are builders, but there are other
types of plugins, like data sources and,
commonly, post-processors. The idea is that a plugin is really a
way of encapsulating a separate mini-application
that knows how to do one thing; usually, communicating with
a platform to create an image is the thing. So in the case of the
Amazon example project, in the case of the Amazon
plugin and builder: what you see in that source block in
our code uses the Amazon EBS builder, and that's given to you
by the Amazon plugin. Why don't we
just look at that, and then we'll also look at
some other ones that I think are interesting, common, or popular, that you'll either
come across or want to use, and they'll
give you an idea of how this works. So I'm on the plugins section of the docs
site on packer.io. And if we look at
just the overview of builders in the
Amazon EC2 plugin, you can see that this is
the one we're using, right? From this plugin, this
is the one we're using, but we have other things
available to us; for example, we could build
an instance-store AMI instead. I'll show
you the format of these docs just once: each page gives you an example of how to use the thing, describes it,
and then gets down into the kinds of arguments
it expects to get, environment variables it looks for, that kind of thing. So for Amazon, I think the
interesting things to look at are, obviously, the EBS builder,
which we're using, and the Amazon Import
post-processor, another thing you kind of get for
free with this. If you're using a different builder
that produces an OVA, you can bring it
into Amazon by having this post-processor
convert it to an AMI; that definitely comes
in handy sometimes. Secrets Manager, I think, is one of those things that comes up when
you go from our simple example
project for this course and adapt it
to the real world. One of the first
things that changes is the kinds of external
data sources you need. You can't
just be shuffling a bunch
of plain-text key material around and committing
it to repositories. That's how bad things happen, and that's also how
your security team makes your
life a living ****. So you're going to want
a way of getting
secrets into your Packer template, essentially your Packer builds, and this is one
mechanism on Amazon. Obviously there are analogous
ones on Google Cloud; I think even DigitalOcean
has a secrets thing now; obviously Azure, et cetera. They're all going to have
a mechanism like this, and of course they're all
going to be slightly different, but largely similar. And this is how you
would use them. I think it's
interesting to look at other cloud providers'
image builders, essentially their AMI equivalents. I would always start
at the high-level concept in the overview and figure out
what you're doing: oh, we want Azure Resource Manager to build and then capture an image. Now that you know what you
want, you dig in here and look at: okay,
how does this thing authenticate? What do I need? How do you do
identity and access management,
auth management? What stuff does it need? Required and optional
parameters or attributes. Same with GCP;
you can see it's a very similar kind of invocation. I'll stay a
little bit higher level on the next one, since I don't
want this to get too long, but I like DigitalOcean, so I'm going to give them
a shout-out here. One other thing that
I see here that's interesting is Docker. I think this is one
you'll get a lot of use out of, because it speaks to how flexible
these plugins make Packer: all
the Packer stuff you've learned is still good. So if one day your
company is like, alright, no more VMs, we are
going full, I don't know, Nomad, Kubernetes, everything's going to be
a container image now: that's fine. You can reuse
a lot of your stuff and just have the output be
a Docker image instead. I mean, you're
obviously going to want to slim down that image; nobody likes a 600-meg Ubuntu base image. But even if you go to Alpine
or something slimmer, you can still
use the same process and the same workflow, and it
fits together the same way. It's not a giant greenfield project with a hundred bugs you have to get
through before it works; it's just a slightly different builder that expects slightly
different inputs and has slightly different
outputs. For those of you using Chef or Ansible,
config management, more traditional software: there are plugins for that. So if you already have
written a ton of Ansible, you can reuse all of that Ansible. You don't need
to convert anything; you just use your
existing playbooks, bring them over, and wrap
them into this workflow. Again, your
workflow is preserved, your workflow is consistent. And then you can have
Packer calling into different tools that
are actually doing the generating; it doesn't
have to all be bash scripts like in
our example here. Do it all with Ansible,
wrap it in Packer,
and then leverage the power that Packer gives you to
produce a bunch of different outputs or
artifacts off of that. One more that I think is cool
is Git as a data source. The data source allows you to fetch
off of a repo and then have some granularity in how you navigate that
to whatever you need. I hope that is a useful
quick introduction to Packer plugins, and I hope you understand what all this stuff is
that a plugin actually wraps together for you and abstracts away so you
don't need to deal with it. Inside these plugins, it's all raw API calls against these cloud providers: do this thing, then
do this other thing; okay, if that fails,
handle the error somehow. It's millions of lines of code in here that you
don't need to write. Really treat this left menu
here as, well, a menu. Look down, see
what looks tasty for
whatever you're trying to do, and use those to put together the kind of build
artifact that you need.
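Pulling a plugin off that menu looks like this. The required_plugins block is the standard HCL mechanism; the source addresses below are the real HashiCorp ones, but the version constraints are just examples:

```shell
# Declare plugins in your template; `packer init .` then fetches them.
cat > plugins.pkr.hcl <<'EOF'
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0.0"
    }
    docker = {
      source  = "github.com/hashicorp/docker"
      version = ">= 1.0.0"
    }
  }
}
EOF
grep -c "source" plugins.pkr.hcl   # 2 plugin sources declared
# Or install one imperatively:
#   packer plugins install github.com/hashicorp/amazon
```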
15. Course Project Instructions: Everyone, a quick
word about projects. I think obviously I've walked
you through a project, but I think the next step now
that you have the basics of Packer is to create a project
of your own from scratch, not following along
with the tutorial, not looking at
someone else's code. That's fine for learning the
very basics of something, but you've done that now. So where do you go from here? I suggest that you do a real-life project, to truly
cement this skill and this knowledge in
your mind, and to make it ready to use in interviews and in real life, to make your own life
and your job easier. I'm going to split this
into two categories: if you're a
beginner, a more junior
engineer in general; and if you're a more advanced
engineer in general, for whom learning Packer was just sort of
the icing on a cake that's already pretty nice. I just compared
engineering education to a cake; I think I'm hungry. So if you're a beginner, one of the first things
I would suggest, if you don't feel super confident that
you could do what I showed you in this
course from scratch, is this: instead of
following along, basically delete the
packer directory and then try to recreate that Packer project
from scratch, given the content in the
rest of that repository, which is from the
hands-on Linux course, if that makes sense. So delete the packer
directory, make a new one, and then start
porting everything over, piece by piece. Look at all the
manually run commands, turn them into scripts. Take those scripts and use
them in your Packer project. Figure out how to
get them bundled up, figure out how to package
them and get them uploaded to your VM, and
then go from there. So recreate this
project sort of from memory; you have the content
right in front of you, but turn it back into
a Packer project. That's what I'd recommend
for absolute beginners. I think you'll learn a lot because often after you
go through tutorial, you're like, Oh,
I totally get it. But then if someone's
like, great, show me with not
without looking at the tutorial and
people can't do it. It's like you often even just understanding something
in the moment is not the same as truly knowing it deeply and being able
to practically apply it. So get yourself
there as a beginner. Once you have done that, or if you're already starting from a more advanced engineering place, where what we did just added one new skill, and that new skill, Packer, is like 1% of what you know about tech: great. I think you still need a practical solo project that you're not doing from a template or from someone else's work, one that you're figuring out and troubleshooting yourself. If you're more advanced, I
would recommend a few things. If you live in a VM-based world and you want to continue with what we did in this course, machine images on, well, we did Amazon, but you could do Azure or Google Cloud; the process is more or less the same. Build yourself a machine image from another open-source project that you enjoy. In this course, WordPress is the open-source project that we're packaging and doing stuff with, but there are a million open-source projects that are interesting and do all kinds of different things. Web applications tend to have that complexity of a lot of stuff getting configured in an image, but you could do other stuff too. You could do something like Mumble if you want a reusable team chat server image; there are a few. If you're reasonably advanced, I assume you have some idea of what open-source tools and projects are out there; just pick one that might be fun to host, and build a machine image around that. If you already live in more
of a Dockerized world, find something that is not Dockerized, at your work or even an open-source project, and turn that into a Docker image. So containerize it. Yes, you could write a Dockerfile in its own project and then run the Docker build tools and all that, but Packer can do all of that and a whole bunch more. The nice thing about Packer is that it makes that bridge really easy: if you already have some automation, or documentation of manual steps and so on, you can just reuse all of that stuff, shove it into Packer, and tell Packer your output should be a Docker image. That's what I would recommend for more advanced folks: find, again, something either open source or at your work, and use some of those other features. It could be Docker, it could be something else, but just explore what Packer can do that is practical for you and that's actually
interesting to you, and do it on an application that you're mildly interested in or have to work on anyway. Yeah, I hope that gives you some ideas for projects that you can do. I understand that means you might not be able to show it off: if you're working on your own company's closed-source thing and you create a Packer project for it, don't upload it and show me; please respect confidentiality. But if you do it for an open-source project, then just create a project, upload it to your own GitHub account, and feel free to share it. I think everyone would be pretty excited to see that, just to see more examples of Packer being used in more ways. And you might even be able to get positive and constructive feedback from people on how to improve it even more. One note for everyone out there: if you see other people's projects, be nice, be constructive. You don't have to comment, you don't have to give advice. We're all still learning, so be nice. Alright, that concludes what I think you should do as a practical project. Let that percolate, let that sink into your brain, and just start browsing open-source projects and seeing if any of them tickles your fancy.
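Before moving on, here's a minimal sketch of the Docker-output idea mentioned above: a Packer HCL template that reuses an ordinary setup shell script and commits the result as a Docker image instead of a machine image. This is not from the course; the base image, script path, and repository name are all placeholders.

```hcl
packer {
  required_plugins {
    docker = {
      source  = "github.com/hashicorp/docker"
      version = ">= 1.0.0"
    }
  }
}

# Start from a base container instead of a base AMI.
source "docker" "app" {
  image  = "ubuntu:22.04" # placeholder base image
  commit = true           # commit the container as an image when the build finishes
}

build {
  sources = ["source.docker.app"]

  # The same kind of setup script you'd upload to a VM works here too.
  provisioner "shell" {
    script = "scripts/setup.sh" # hypothetical path to your existing script
  }

  # Tag the committed image so it shows up in `docker images`.
  post-processor "docker-tag" {
    repository = "my-user/my-app" # placeholder repository name
    tags       = ["latest"]
  }
}
```

You'd run this with `packer init .` and then `packer build .`, just like the VM-based builds in the course; only the source block and the post-processor change.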
16. Conclusion: Congratulations. You've now seen what a real-life Packer project looks like, and I've taken you through the process that I took to decompose a real-world problem into a Packer project, specifically a Packer template: how you go from just having a bunch of manual steps or automation components like scripts, and take those and put them into the Packer world and way of thinking about things. I hope you immediately take this newfound understanding and knowledge and put it to work on your own project, a project that I have not laid out for you. I think that's the way to cement this knowledge, take it to the next level, and really have it durably in your brain, so that when a problem comes up or a job interview happens, that stuff is just top of mind: you have an intuitive knowledge of the process, how you go through it, how you analyze the problem and break it down, and how you actually create the technical artifacts that make up a Packer project. This course has been fun. If you enjoyed it,
please leave a review. If you didn't, just send me a message or leave a comment in the Q&A asking for whatever improvements you would like made to it. I'm definitely planning on working on this course some more if there's good reception, and if you felt like something was missing, or you'd like a section added or something clarified, please just tell me. I'm happy to replace the videos with ones that are clearer for you, and maybe add some new ones as well if you're curious about something that I didn't go into in depth. Again, it's been a
pleasure making this. I hope you get something out of it. And I hope to see you on my YouTube channel, tutoriaLinux, where I do stuff like this but for free, or, I suppose, on Udemy in my other course, Hands-on Linux, where I go through this WordPress hosting project to teach you the basics of Linux, specifically Linux system administration. Have fun on your journey. I'll see you out there. Peace.