Transcripts
1. Introduction: Welcome back guys. In this section we're going to start looking at containers in the Cloud, or Azure containers. So for this section we're going to learn about containerization, and we're going to look at Docker: what it is, how we can set it up and use it locally. Then we're going to look at how we can containerize our .NET Core project, and how we can host it using the Container Instances service provided by Microsoft, as well as how we can host our container image using Azure Container Registry. And we will review Kubernetes. Now, if this is your first time with containerization, don't worry; this is an overview, but we're going to go into it enough that you should feel confident to have a conversation about it. Of course, this is not an in-depth course on containerization. You can check out my full course on that. But for this section we're going to have an overview of how we can shift our containers from our local environment to our Cloud environments. So stay tuned. I know you're going to have fun.
2. What are Containers?: Now before we jump into Azure containers, let's take a quick look at containers, what they are, and why we use them. According to docker.com, a container is a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably in any environment. That is me paraphrasing what is written, but that is the explanation coming from docker.com. Now, why do we need containers? This is a buzzword; you'd hear about it when people talk about containers and containerization and why we should shift to it. Let us examine what the existing problems are and how containers can help us. Now, one existing problem when developing software is the cost of virtualization. We're just coming from a section where we looked at virtual machines in the cloud and infrastructure as a service. The reality is
that, and I'm sure every developer can attest
to the fact that when you have an
environment and you're trying to develop against
that environment, you're going to need
virtual machines, right? So the organization might not buy a physical machine per piece of software that you have. Instead, they're
going to go ahead and provision a virtual machine. Every virtual machine needs
to have an operating system, which means they're more
licenses that are required for each virtual
machine and then any supporting software
that you need for your particular application on your virtual machine or a
set of virtual machines. Those are other licensing costs. So you have to
consider the fact that virtualization is not a
cheap endeavor at all, especially when it's
being done in house. We also have the challenge of differences in environments
because when we provision a virtual machine and install a particular
version of the software, version of the
operating system on that particular machine
or virtual machine, rather, that environment is set up in a particular way. It is then very difficult, not very, but significantly difficult, to replicate that environment exactly for QA, and then by extension on prod. What we run into as developers is a situation where it works in dev, and then there's some slight version difference between dev and QA, so it breaks in QA. And then even when we fix it in QA, it doesn't work out of the box in prod. So there's always going to be slight differences between the different environments, and those can cause, I want to say, unnecessary headaches for us as developers, and even for infrastructure practitioners, you know. Another thing is how easy is it for you to change
your operating system? Because when you set up an
application for .NET Core, you're going to need IIS, you're going to need certain supporting hosting software on a Windows Server. But then if you need to move over to a Linux server, you have to consider, okay, you need to use nginx or Apache or some other hosting software, some other version of this, some other version of that. And it's hard to replicate the running environment in Windows on another operating system. So those are some of
the problems that we do face when developing software, especially software that we
want to run on any platform. Now, the benefits of using containers. One, we have portability. Docker has created an industry standard for containers so they can be portable anywhere, and I'm going to talk more about what portability really means. Containers are also lightweight: they share the underlying machine's operating system kernel, so you don't have to go ahead and provision a new operating system per application, unlike virtualization, where every virtual machine is its own self-hosted, fully provisioned machine, so to speak. Containers sit on top of one operating system, but then provide a virtual environment where just the software that is needed for the application can be installed. And then by extension, because of that portability, once we have that template, which we call a container image, we can provision it on top of a Windows OS or Linux or macOS and have the same results every time. There's also the
matter of security. Now, once an application is in a container, it's safe, because you can always take it and lift and shift it. And we can always store our secrets so that we can restrict access to the contents a bit more easily than we would on a virtual machine. We'll also look at immutability. So an image, which
is the template for a container will always be
the same when it is created. So once again, that
feeds back into portability because if there is a container image
for my application and it needs three
libraries for support, it will always give
me that environment. No matter where it
is provisioned. So I can rest assured that
once this is containerized, it will always be the same. If there's a new version, then I would have to create a brand new image with this new version. At that point, you have an image for version one and an image for version two. Version one will never update to version two, so you don't run the risk of provisioning the wrong version, because it is clear which version you are going to be provisioning. Now, here's an overview of how Docker looks, and this is taken from Microsoft documentation. On the left-hand side, you see that we have the operating systems. It could be Linux, it could be Windows, it could be macOS, whatever it is, whatever the underlying kernel is. That's the left. The Docker Engine
sits on top of that, and it provides
us with a server, a client, and a RESTful API. And all of these work
together to manage the images that are created. So once again, the
image is the template. And using that template
and Docker server, we can provision on containers, which you'll see
to the far right. So the container is the actual instantiation of the template. And this template can be replicated multiple times, so we can have multiple containers of the same application for different reasons. So once again, the image
is just the template; the container is the instantiation of that template. Now, I've mentioned Docker several times. So Docker is a
containerization platform that is used to develop, ship and run containers. Docker does not
use a hypervisor. So unlike virtual machines, you don't need a hypervisor, so you can actually install
it on your desktop or laptop if you're developing
and testing applications. And best of all, it supports all major
operating systems. So regardless of the operating
system you're using, there's Docker support for it. And it does support
production workloads for many variants of Linux and
Windows Server Versions. Best of all, it is supported by many Cloud providers,
including Microsoft Azure. Now you'll also see when
looking up Docker, Docker Hub. So Docker Hub is a software as a service offering that
is a container registry. Essentially a container
registry stores and distributes the container images that we create. Using Docker Hub, you can actually host the image for your application and you can distribute it. It supports public distribution, and it also supports private distribution. So even within your organization, if you have internal application images that you are maintaining, you can rely on Docker Hub for private hosting as well. Now, when we look at Azure containers and the service offerings on that side, we have Azure Container Instances. Azure Container Instances allows us to load and run Docker images on demand. It allows us to retrieve an image from a registry such as Docker Hub or Azure Container Registry. So that brings us over to
Azure Container Registry. This is a managed
Docker registry service based on the open-source Docker Registry 2.0. And of course this is managed by Microsoft, so you don't have to worry about the versions; this is just the current standard. This offers us a private registry hosted in Azure, and it allows you to build, store, and manage your images for all containers and deployments. So between Azure Container Instances and Azure Container Registry, you have a good basis to start containerizing your apps and storing the different versions of your images. And once again, all of that can be done privately. And so you can speed up your development efforts, reduce your infrastructure costs, and maximize your application delivery using containerization. When we come back, we're going to start off by looking at Docker, how we can set it up, and how we can create our first container. So stick around.
3. Setup Docker: Now let's get started with setting up Docker on our machine, and our journey starts off at docker.com. So you can, in your browser, go to docker.com, and from here you can download the appropriate Docker client for your machine. I'm using a Windows machine, but you'll see you can get an Apple chip or Intel chip version, as well as Linux, for other types of machines that might not be based on these operating systems. So you can go ahead and install it based on how you need to install for your particular machine. If you are using the Windows version, it's a straightforward install using the wizard. And once it has been installed, you will be led to install the Windows Subsystem for Linux, the WSL, that we had mentioned prior to this. It's in the Microsoft documentation; if you just Google Windows WSL, you will see this documentation come up and you can follow the instructions. So you can just go ahead and run this using your PowerShell or Command Prompt as administrator, and it will go ahead and download and install that for you. Now once you have Docker and the Windows Subsystem
for Linux installed, then you can launch
your Docker Desktop. That may take a while. I already have it installed; you can see I even have an update pending. But you can hit pause and make sure you have everything set up before you continue. And when you do, then you can launch your Docker Desktop. From here you can look
at the containers. If it's the first time that
you're running Docker, then you're not going to have any containers in this list. You can also look at images. So from here you can actually
look at local images versus images that are
available on the hub. So we did mention Docker
Hub in the previous lesson. So what you want to do
is in your browser, go over to hub.docker.com and go ahead and create an account. I'm signed in, but I'm going to sign out just to show you what you can expect; you can get started today for free. And once you have your account, you can access the different Docker images that are made available for public consumption. Now, I'm signed back in here, and I'm looking at my own registry where I have a patients database
image that I had created. It's available to the public. It's not very useful. I actually created this one for my book on microservices
development, where I demonstrated how you can containerize different parts of your microservices application
so you can grab a copy of that book if you want it from the microservices point of view. But the point is
from Docker Hub, I can go and look at
different registries. So these are my repositories, but if I click on Explore, then I can look at different images and
you'll see that there are several thousands of
images available for use. So I can use any one of these. Maybe you need a Redis instance, you need a postgres
SQL instance. Very popular applications have been containerized and are made available to you as container images for you to instantiate as containers on your own machine. Now, bringing it back
to Docker Desktop, if you go ahead and
sign in to the hub, now that you have created an account, then from here you will actually get access to your own repositories as needed. So here is where I would have packaged my container and published it to the hub, all using Docker Desktop as the tool. Alright, and once again, I can look at all
of the local images that I would've pulled. And once I create a container, I have access to it afterwards. So you can see here I have
a RabbitMQ container, I have a MongoDB container, I have two containers for Microsoft SQL Server, and I have one for Redis cache. So when we come back, we're going to look at how we can go about bringing in an image and creating our own container.
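Everything shown in Docker Desktop above can also be checked from the terminal. A minimal sketch (the exact output will differ per machine), assuming Docker Desktop is installed and running:

```shell
# Confirm the Docker CLI and engine are reachable
docker --version

# Images pulled to this machine (what the Images tab shows)
docker image ls

# All containers, running or stopped (what the Containers tab shows)
docker ps -a
```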
4. Create a Container: Now, as it stands, Microsoft SQL Server is almost exclusively usable on Windows machines. Now if you're using
Mac or Linux, don't worry, there's
still a solution for you. Solution number one would be that you can use
a virtual machine, so you can use VMware or some other tool that
supports virtualization. And you can spin up
a virtual machine that has a Windows OS
and then use that. Now, that can be resource intensive, and I'm not going to put you through all of that just to use the software. The alternative to a virtualized environment for Windows would be to use Docker. I would encourage you to
launch your terminal. So once again, I'm using
a Windows machine, but your terminal on Linux or Mac OSX would look
very similar to this. And you can simply run the command docker just to
make sure that it's installed. And if you see something
looking like this, they didn't know
you have access to the Docker CLI commands. Alright, so what we want
to do at this point is run a command called docker pull. Let me zoom in a bit so it's easier to read. So we're going to do docker pull, and then we're going to pull this thing we call an image. Docker has predefined files that define the environment that is needed for a particular application, and these are called images. The image that we want is the Microsoft SQL Server image. So we're going to do a docker pull against that image; you can hit pause and make sure you type it in just the way I have it. And when you press Enter, it is going to go ahead and start, and then you're going to see
it downloading. So I already pulled that image, so I already have
it on my machine. But you're going to
see it's pulling, and then it's going to start showing you metrics of it downloading. It would actually look something more like this; this is a screenshot I took earlier from when I was downloading it. And you're going to see that it's going to spawn up this bunch of lines looking similar to this, and you're going to have these downloading tags. Once that is completed, the next step is to
actually run it. To run it, you
need this command. So we're going to say docker run, and then hyphen e, and then we'll do the ACCEPT_EULA. What happens is that SQL Server, as usual, has one of those documents where you need to accept the terms and conditions, so we're just putting in a parameter that says yes, we accept the terms and conditions. And then another one that sets the SA password. If you looked at the installation process when we're using Windows, we can use Windows authentication to connect to the database, right? So all we need is a machine name and we can use the current Windows user, Windows authentication, and just connect. Now, because this is Docker and it's a container, there's no Windows or macOS or Linux authentication, so it's not the case that you can just connect using the default user on your computer. This step is applicable whether you're using Windows, Mac, Linux, etc. So what we need to do is specify an SA password. SA is the default user, which means system administrator. Every time you install
a database engine for Microsoft SQL Server, you get this SA user. So we're going to set up this SA password, and you can put in any password you wish. I'm just putting in a strong password here, and this is a password that you might see in other demos anywhere on the Internet, so it's not unique to me or to this exercise. You can put in any password value that you feel comfortable with and that you'll remember. I'm just spelling the words "strong password", of course, with special characters, numerals, and a couple of capital letters. Then we specify the port. The port here at the front is the port that
we want to go through; the port on the other end of the colon is the port that it will map to. So what this means is SQL Server by default broadcasts from port 1433, that's the default port. Without specifying any ports or anything, we will always go through 1433 when we're connecting. However, Docker is running in its own specialized environment, so we need to map: this is the default port, and then this is the port from our machine that we want to tunnel through to get to this port. So you could actually just leave that as 1433:1433. If you don't have SQL Server installed already and you're running Mac or Linux, then 1433:1433 is fine. You don't have to do anything extra; you can just connect. However, because I'm using a Windows machine, I had to change my port, because 1433 is already occupied by my native SQL Server installation. So I'm just showing you that you can do 1433:1433 by default, or you can put in your own specific port if you so desire. Then the final thing is we see hyphen d, and then we specify the image that we want to run. So basically we're saying docker run this image, making sure that all of these parameters in between are configured. That's essentially what we just did. So when you press Enter and
you allow this command to run, what it will do is launch that image, and inside of the Docker UI you're going to see that you now have, under the containers tab, a container. And you could have given the container a name. We didn't specify a name in the command, but you could have put the name flag and given it a specific name. When you fail to provide a name, you'll get a random name, like what you see here with these names. At least the Docker UI will indicate the kind of image that we're using, and you will see over here the port specifications according to what we had configured. So like I said, you can do alternative ports, and this would be good if you have several containers of the same technology that would block up a particular port when running. But if you don't have the technology installed, like with my MongoDB, I don't have MongoDB running on my machine, so I'm not going to use an alternative port. I'm just going to say use the default port, map it to the default port, and it acts like the actual software. For SQL Server, I do have several instances running, and I want this instance on a specific port so that I can tunnel directly to it when I need to connect. Now, let us look at connecting. So to connect, I can use any of my SQL Server management tools. We've already looked at
some of these tools. Just to mix things up, I'm going to use Azure Data Studio to connect. I'm going to go ahead and create a new connection, and for server, I'm going to write localhost. Now, Docker is going to run at localhost, and that's why that port number is important, because typically localhost comma 1433 would be the way to connect to the locally installed SQL Server instance, especially if you're running the Professional or Enterprise or Developer tier, right? So localhost with the port would connect me to the default installed instance on my machine. However, because I want the Docker instance, I have to go to localhost and the specific port that I had given it during the container setup. For authentication, I can't use the Windows authentication that we established; I have to use SQL authentication. And I'm going to say SA, and then the password, which I forgot. So here's a quick
way that, if you forget these environment values, you can retrieve them from the Docker UI. You can just click on the running container, and you can use visual cues to know when it's running. Here it's running, and you'll see logs happening in the background. You can go to Inspect, you can go to Terminal, etc. If I go to Inspect, I'm going to see all of the environment variables that were configured. So we did the ACCEPT_EULA, Y; that's an environment variable. And here is the SA password. Look at that, I have the password available to me right here. Anytime you do these setups and you set environment variables and your mind forgets, we're all human, you can always just jump over to the running container, go to Inspect, and see the environment variables. So I'm going to jump back over here and paste the password. And then I'm going
to click Connect. Now I'm getting this error
that it was successful, but it needs a
trusted connection. Okay, so I just check Trust server certificate, and there I am. I'm now connected to the SQL Server instance running in my container. And this is a container that's actually used in another course, ASP.NET Core Cross-Platform Development, which is of course where I teach how you can use ASP.NET Core to develop a solution in any environment. We do a little dockerization in that course. And I'm just sharing that with
you to let you know that that's why this database exists. So you obviously would have no databases if you just
created this container. This container works like any
other SQL Server Instance. And as long as the
container is running, then you can interact
with it just the same. You can connect to it, you
can build apps to use it. And guess what happens if I stop it? So I can stop this container at this point, right? If I try to do anything else here, like try to go into the database, notice that my Data Studio is going to hitch, it's hitching, because it has lost connectivity with the database, with the server. So it only works as long as that container is running. That's why we said that when you install software, you might not necessarily want it to be pervasive and running all the time. You want to stop it
and start it on demand. That is where containerization
can play a big part in helping you to be
as efficient with your system resources during
development as possible. Now, we have some experience pulling an image and setting up our container. These steps are well-documented, so I'm not making this stuff up; you can find all of this. Once you find the image you're interested in on Docker Hub, you can click it and see all of the recommended ways that you can set it up and all of the environment variables that are needed for the configuration. But now that we have seen how we can use a third-party image, let us now set up our own .NET Core application, look at how we can connect to this image, and look at how we can containerize an application.
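To recap this lesson as terminal commands, here is a sketch of the full flow. The password is a placeholder you should replace with your own strong value, the host port 1434 is just the alternative mapping discussed above (use 1433:1433 if nothing else occupies 1433), and newer SQL Server images also accept MSSQL_SA_PASSWORD as the variable name:

```shell
# 1. Pull the Microsoft SQL Server image
docker pull mcr.microsoft.com/mssql/server

# 2. Run it detached (-d): accept the EULA, set the SA password, and
#    map host port 1434 to the container's default port 1433
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=AStr0ngP@ssword!" \
  -p 1434:1433 -d mcr.microsoft.com/mssql/server

# 3. Forgot the environment values? Inspect the running container
docker inspect <container-id-or-name>

# 4. Stop and start on demand to conserve system resources
docker stop <container-id-or-name>
docker start <container-id-or-name>
```

From Azure Data Studio you would then connect with server localhost,1434, user SA, and the password you set.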
5. Setup .NET Core Project: Now let's create our own ASP.NET Core project, and in the future we're going to containerize it. So of course we're using our regular solution; I'm going to add a new project, and this time I'm going to go with an API project. I'm only choosing an API in keeping with the theme that containers and containerization play well with microservices architectural design. So I'm going to jump over here and create our new API project. I'm just going to call it microservice-one-api in our containers demo. You can reuse the name; of course, you don't have to. And then I'm going to hit Next. Now, in this step, we get to choose our
authentication type and framework, and we can choose to enable Docker. Now, I'm not going to enable Docker here; we're going to add that manually, or use a different method of adding it. But for the second microservice, you'll see the difference when we do that. Because we want to keep it simple, I'm just going to do a minimal API, and we can hit Create. Now of course, if you're using Visual Studio Code and the dotnet CLI, you open your terminal and you're going to type in the command dotnet new web, hyphen o for the output, and give that project its name. And then once that's done, you can cd into that newly created folder and do code full stop (code .) to launch Visual Studio Code in that folder.
standard Web API project, and we're all developers, so we know what to do. Now is when we write our beautiful code and we connect to our database and all of those aspects. The thing, however, is once again, when we have to move from our machine to the dev environment and we deploy, there might be differences between the environments, so it works on your machine and then it doesn't work on dev. And then we end up with excuses like "it worked yesterday", or we say, "well, it worked on my machine". So are we going to use your machine in production? We all know this. That is why containerization makes our application far more portable and far more stable regardless of where it is deployed. Now, I'm going to add container support to this application. I created the application, and let's say we wrote all of the beautiful code and everything. But we know we need it in a container, so I'm going to
right-click on the project. And then I'm going
to go down to Add, and then you'll see here Docker Support. When I click Docker Support, it's going to ask me, okay, what is the target OS, and if I want to leave it as Linux. Well, obviously you can change between them, and you do want to make sure that you choose the appropriate environment based on your dependencies and library choices, because not everything works on Linux. But for now, for this simple app, and .NET Core is cross-platform, I'm going to use Linux. I click Okay, and then we get this new file called
a Dockerfile. So this Dockerfile wasn't there initially, but let us just assess what this Dockerfile is doing. It does look confusing, but when you read it through, you'll see that, okay, it is something that can be understood. The first thing it does is pull a base image: FROM, and then it's stating the image name. Remember when we did our docker pull for SQL Server, we had to specify a path similar to this, where we said SQL Server. There, we did not specify a tag. If you want a specific version of an image, meaning this is the image, then you put a colon, and the value after the colon is the tag, or the specific version that you're interested in. We're doing an ASP.NET application, so obviously we would want the image to be the .NET 7 ASP.NET image, and then we're saying AS base. Then it's going to specify, okay, once you're creating that container, create a work directory named app, and expose ports 80 and 443. And then from this SDK image is where we want to build. So once again, it specifies the work directory to be src, and then copies the files from the csproj's directory, right? You see here, these are just the paths. So copy this csproj file from this path, pretty much, and then do a dotnet restore command. Here you start seeing that it's just using the same dotnet CLI commands that we use; it's just doing it on our behalf, right? So it's doing a restore against that csproj file, and it's going to copy everything in that directory. Then it's setting up the work directory to be, once again, there. So if I go through it line by line, sure, it makes sense, right? Essentially we're going to run dotnet build as Release, and the output is going to be /app/build. These are directories
that we don't have to create and they're not being
created on our machine. Instead, they're going
to be created in the container that is doing
this compilation. And then it's going to say FROM this step, right; after this step is completed, take whatever it is and call it publish. And then we're going to run the publish command, which is going to do the same kind of Release build, taking what is in that publish folder, and it's going to give it a little flag here to say UseAppHost equals false. Alright, once again, this is generated for us. We don't necessarily have to author this file. We can; it's good to understand it and to modify it, but generally speaking, you don't have to. And then, after all of that, FROM base, it's going to set the work directory to app, and it's going to copy everything from that publish directory. And then the entry point is going to be that DLL. So .NET Core apps usually run based on that DLL, and that is what starts up the application running in the container.
Now, how do I know that this is going to run in a container? So let me switch over to the startup project of the microservice that we just created, and you'll see that the start button now says Docker. With all the other ones, like if I go to the Blazor one, it's HTTPS. We already looked at the launch settings, and we know that the launch profile has HTTP and HTTPS. But if you look at this one, you'll see that now there's a Docker entry here, and the Docker entry is going to have that launch URL with the Swagger. So everything gets configured for you once you add the Dockerfile. Basically, it's going to go ahead and launch Docker, right? So when I go over to this new startup project and say run with Docker, my application will launch just like I expected it to; it will launch in a browser, and I'll be able to go ahead
and use it like I expect to. What evidence do I have now that it is running in a container and not just being run by Visual Studio? Well, if you look at the Visual Studio window, a new panel will have appeared, one you probably have never seen before: Containers. In this panel you see that this container, or the application that is hosted and being run, is here as a container that is being run, right? Now, you'll also be able to look at all of the environment variables. So you'll see the version, Development, all of those environment variables that we didn't set, but they're all there. We also have ports. If you look at the ports here, you'll see that it is creating app ports. Remember that we had to create a port to tunnel through, to 1433, when we were doing our SQL Server image. So you'll see here that we have a port that is mapping to port 80, and a port mapping to port 443. Either one, I can click and browse. So when we look at the address in our Swagger UI, we will see that we are on the one ending 769, which is mapping to port 443. You'll also notice, if you have the Docker UI open and running, that you have a new container with the name of the application also running, and it is showing you the port. And you can click
to show all ports. So these are all visual cues and evidence that we have now containerized our application. And this is a microservice. Once again, it's not limited to microservices, but containerization is recommended with microservices because it allows each microservice to live in its own environment, independent of all other services, with its own dependencies and everything that it needs to run effectively. And once again, it is portable. If you're using Visual
Studio Code and you want to containerize the application that you have here, what you will need to do is go and get the extension for Docker. So if you just go to Extensions, you can search for docker and have it installed. I was trying to scroll and find it, but you can just search for docker and go ahead and install the Docker extension; make sure that you're getting the Microsoft one. And then once you have that, you'll get this tab that allows you to see all of the containers, and it gives you a similar, well, I don't want to say similar, but it gives you enough information compared to what you would have experienced in Visual Studio about the running containers, as well as the different images that you have available to you if you have pulled several images. Now let us containerize this particular app that we provisioned in our Visual Studio Code. In Visual Studio Code, hold down Control, Shift and P, and you'll see it reads Show All Commands. So Control Shift and P will launch our Command Palette. And then you'll see here that we have the option to
add a Docker file. If you don't see that option,
you can just start typing Docker and it will
appear regardless. So recall that adding Docker support is really just about adding a Dockerfile. So if we select
that, I can then specify the type of application
that I want to support. So it's an ASP.NET
Core application. And then we can
specify the OS once again and specify the ports. I'll leave that port; that's fine by me. I can choose if I want
a Docker Compose file; we're going to look at
Docker Compose later, so I'm going to say no for now. And then it is going
to go ahead and generate that Dockerfile for me, using a very similar syntax to what we saw just now
in Visual Studio. Now, to run this API in Docker
from Visual Studio Code, we can go to the Debug tab, and then I can choose
from the drop-down list the Docker .NET Launch option. And then go ahead
and run it again. You'll see some activity going on
down here in the terminal. And if you look closely, you'll see that the logs
are very similar to what you might have seen in Visual Studio when
it was running, but it's running some Docker
commands in the background. And then after some
amount of time, we're now going to get
our Docker app running. And we see Hello World. It's a simple API endpoint that just
returns Hello World. If we look in our Docker UI, you will see the new container running as well, with the name. Alright, so here
we have the port 32771 mapping
against port 5099. So we could have actually
specified that we wanted 80 and 443 for this app. I could have said 80, comma, 443 when it asked
me about the ports. I didn't do that, but I'm just letting you
know that that was an option that would have given a similar mapping
as we have here. So now you know how
to containerize your .NET Core
application using both Visual Studio and
Visual Studio Code. Of course, we're still
running in debug, so I'm going to
stop the debugging. And this setup allows us
to actually debug our apps while they're
in a container. Now, if we just go through
the steps again to add a new, let me add a new project, sorry. When we're adding the project, we had the option to
add Docker support. So I'm just going to click
on a random one here, and we had the option
to enable Docker. All this would have done is allow us to
choose the OS here, and then it would
have generated our project with the Dockerfile
already provisioned. You can do it from
the get-go if you know you're going to
use Docker, or you can easily add it afterwards when your ambitions
reach that point. Now, in .NET 7, there are dotnet commands
that we can use to actually publish our app into a container without the
need for a Dockerfile. A Dockerfile can be
used for any version of .NET and it is universal, so you don't have to worry if
you're using a Dockerfile. But when we come back, we're
going to look at how we can use native dotnet commands
to containerize our app.
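For reference, the Dockerfile that the tooling generates for an ASP.NET Core project typically looks something like this. This is a simplified sketch; the project name is a stand-in for whatever your project is called, not the exact file from this demo:

```dockerfile
# Build stage: restore and publish the app using the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish "MyMicroservice.Api.csproj" -c Release -o /app/publish

# Runtime stage: run the published output on the smaller ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS final
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyMicroservice.Api.dll"]
```

The two-stage layout is the design point: the heavy SDK image is used only to compile, and the final image ships just the runtime plus the published output.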
6. Containerize .NET App: Now in this lesson we're
going to look at how we can containerize a .NET app using dotnet publish, or the native dotnet CLI commands. Now this is unique, at least right now at
the time of recording, to .NET 7,
and I'm sure it will be available
in later versions. For .NET 6 and below, you still need the Dockerfile. However, if we're pressing ahead with .NET 7, then
we can do this. I'm just going to go ahead
and create a new project, and I'm just going to use
Visual Studio for this demo, but you can feel
free to follow along using Visual Studio Code; most of it is going to
be command line driven. I'm going to create
another microservice API. I'm going to call
this MicroserviceTwo.Api.ContainersDemo. And let's go ahead
and create that. I'm not adding Docker support; I'm not changing anything
about the previous setup. And now we have our new project. So this new project, we need to add a
new package, right? So we're going to use
the dotnet CLI command. I'm using the dotnet CLI
because everybody can use it, whether you're
in Visual Studio or Visual Studio Code. So I'll just launch the
terminal here in Visual Studio, and I'm going to run
the command dotnet add package, and we're adding Microsoft.NET.Build.Containers. So go ahead and
install that package. Now with that package installed, we can verify by
just clicking on the .csproj file and
you'll see it here. Now, when we want to give
our container a name, there are times
when our project names might make invalid image names. So if you wanted to
change the name, you could actually
add a node here in the PropertyGroup
section of the .csproj. And this would be, let me get my spelling right, ContainerImageName. And then with this node, you can actually give
it another name. Right now it's MicroserviceTwo.Api.ContainersDemo; I want this container
to just be called microservice-2, right, just to show you
that we can rename the actual image before
it is published. Now, when I'm ready to publish using the .NET CLI once again, I can say dotnet publish, and I am going to specify
the operating system, so dash dash os, and I'm just going to stick
to Linux; and dash dash arch, and we're specifying x64 to
show that we want 64-bit. Then we say slash t PublishContainer, and then
dash c Release. So with all of those,
what we're doing here is saying that we want
the Release configuration, that's the dash c. Once again, the dash arch is specifying
a 64-bit architecture, and we are specifying that we
want a Linux-based OS. Now, when that command is run, you're going to see it
print off a few logs, but go ahead, run it; hit pause, and when it's done, we
can assess it together. Most importantly, you're
going to see that it built the image with the
specified container name. If we didn't specify that
ContainerImageName node, then you would have gotten
an image named after the project name, which
we've seen before. So those are just little nuggets that you can bear in mind. The fact is, whenever we need to configure this
image, similar to how
we have the Dockerfile setting up all sorts of environment variables and so on for the container, we can actually specify
different nodes here. So let's say we're using
.NET 7 but I just wanted a
.NET 6 runtime for this image; I could actually
come here and specify ContainerBaseImage and then put
in the name of the image. So, if I wanted
the runtime for .NET 6, I could say mcr.microsoft.com/dotnet/runtime, and then the tag for 6.0. Alright, if I wanted
something else, like if I wanted to actually tag this image myself,
I could specify that too. I'm just going
to remove this one, it was an example, but I could specify ContainerImageTag. And then this would allow
me to set my own version of this particular
image that I am pushing. Let's say the original image was 1.0.0, but maybe I did a bug fix and now I want this
to be 1.0.1. Then if I rerun the
command, we're going to see that we get a brand new
image with that tag version. There we go. So I just reran the command, and now I have the new image
with the tag 1.0.1. Now, do remember to save the .csproj file, and this
is why I have so many runs; it happened to me the
first time, I didn't save it. Whenever you make a change
here, you have to save, because this is not a build operation, it's a publish operation, so if you don't save, then it won't see the change when it goes ahead to publish. There are several
other things that you can actually add when you are configuring your app
image from this point. If you want to
have multiple tags, you can actually say
ContainerImageTags; notice the S, different
from the single tag. And you can specify
different version names using alphanumeric and semantic
versioning techniques; you will just want
to use a semicolon to separate each version tag. You can also add an ItemGroup where you can add more metadata. So you can configure
ContainerPorts that you want this container to have
once the image is created. You can also create a ContainerEnvironmentVariable. We've
seen this before, especially when we were setting up our SQL Server container, where we had to accept
the license agreement. So we just go ahead and
create this variable, and every time the
container spins up, that variable value
is available. Here I set the
Logging verbosity to Trace. This means that this container
is supposed to spit out as much information as is generated by our .NET
Core app, alright, and we know that Core
apps can be a bit chatty, so it should be
spitting out all of that to the container console. Another example of an
environment variable that you may want to set would be one for ASPNETCORE_ENVIRONMENT. So here, when we're
running it locally, it's going to come up with Development as the
environment value. However, when we do a publish with the Release
configuration, that variable is going
to be Production. We could actually override that default setting and say, when
you're in this container, I want that environment
variable to be Development, or QA, Staging, Production, etc. So that's how you can think of the use of
environment variables. You can also add labels, and labels just help with the metadata and the listing
when it's in the registry. We can also specify
the registry here, but we're not quite
ready for that yet. So with all of that done, I can do another publish and create that image,
or update that image. These, once again, are
images, not containers. Now, I want to
actually run my image. Now, to make sure that
the image exists, I can jump over
to the Docker UI and go down to Images. And then you'll see
here that all the versions
of the images that you would have published
should be here. So you see here there's
1.0.0 and 1.0.1. And if I select either one, you're going to see this
section called Layers, where it outlines the different steps that
should be followed to finally build the container
for this application. If I go to Images, you'll see that it is using the Debian image, because
I said I wanted Linux, and it's using the
image that I had created. So to run this, we could actually click on
the image and then click Run. Actually, I don't like
to do that, right? I actually prefer to
use the command line. Now, beside Run you'll
see Pull and Push to Hub. This is our local image,
so there's nothing to pull; but if we do a pull, it means you'd get the updated
version of this image, especially if it's
supposed to be the latest. As you've seen, we have been able to update
the image with the same tag, so if there are updates
on that version, we can always pull and
it will do an update. We can also Push to Hub, which means this is where
we're going to push from our local repository
to Docker Hub. I would have shown
you already how that looks; at least, when you do continue
to Push to Hub, you can see your own
container in the hub, and you can pull it back down on any other machine as
needed and start working. So that's how easy it is to share your container
once you have created it. Let us go ahead and run
this new container. So what I'm going to do is launch a terminal window, and
I'm making this one bigger, so I apologize if using the one in Visual Studio
was hurting your eyes. So I'm using a terminal window, and I've already gone over
to the directory where our microservice
two API demo sits. And what I'm going to do here is run a command that
says docker run. We've seen this docker
run command before, right? I'm going to specify dash it, and specify a port. So I'm going to tell it
that I want it to run on port 8080, and that should tunnel through to port
80 for the traffic. I don't want to complicate
anything and use 443. And then I'm going to specify
the name of the image, so microservice hyphen 2, and I can specify the
tag if I want or not. So let me just go
ahead and hit Enter and let's see what happens. So, you see: unable to find
image with the latest tag. So it's very important
that you go ahead and set that latest tag on whatever
should be pulled by default. So let me go ahead
and specify that tag, which is 1.0.1, and then try that again. And this time it's
actually going to
spit out some logs that look like ASP.NET Core application logs. So what happens if I try
to reach the application? We did say that the
application should be live at port 8080. So I bring up
a browser, go to localhost port 8080, and then I put in
weatherforecast, which is the endpoint
that lives at that API. We see that we're
getting back our API. And we will see as
well that we have a little message coming up about the HTTPS redirection;
that's fine for now. If we look in the Docker UI, we'll see that we have a new
container running as well. And this one is, I don't know what that name
is, but once again, we could have
specified the name in the docker run if we wanted to. If I click on this, you'll see that it's also
spitting out logs here. So you can watch the
logs from the console, or you can watch the
logs from here. Actually, it's spitting
out to the console
because that's what started it. Let me just stop that and
redo the run command. The reason it's taking over the console is because we
didn't specify dash d, which means that we want
it to run as a daemon, running in the
background. That frees up the console. Alright, so that's another
little tidbit that you can use when you want to run
your Docker container but you don't want it to
take over the console. So here the Docker container
is running once again; I just had to hop
out and hop back in, and you'll see it's
running once again. And we can view the console from here without taking
over our local console. So now you see how
you can containerize your .NET Core
application using your dotnet CLI commands
and certain configurations. Read up on the documentation;
you can play around and put in different configurations
that resonate with you, but I'll leave that up to you. Now, when we come back,
we're going to look at how we can use Docker Compose to handle the orchestration of multiple apps that
are containerized.
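Pulling the lesson together, the container-related nodes we added to the project file look roughly like this. This is a sketch: the property names come from the Microsoft.NET.Build.Containers package, but the image name, tag, and environment variable values are just the ones assumed for this demo:

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net7.0</TargetFramework>
    <!-- Rename the published image (the project name is used by default) -->
    <ContainerImageName>microservice-2</ContainerImageName>
    <!-- A single tag; use ContainerImageTags with semicolons for several -->
    <ContainerImageTag>1.0.1</ContainerImageTag>
  </PropertyGroup>

  <ItemGroup>
    <!-- Port the container should expose once the image is created -->
    <ContainerPort Include="80" Type="tcp" />
    <!-- Environment variables available every time the container spins up -->
    <ContainerEnvironmentVariable Include="ASPNETCORE_ENVIRONMENT" Value="Development" />
    <ContainerEnvironmentVariable Include="Logging__LogLevel__Default" Value="Trace" />
  </ItemGroup>

</Project>
```

You would then publish with `dotnet publish --os linux --arch x64 /t:PublishContainer -c Release` and run the result with something like `docker run -it -p 8080:80 microservice-2:1.0.1`, as shown in this lesson.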
7. Orchestration with Docker Compose: So we already have an idea of what container orchestration is. Container orchestration
basically means that we have several containers and
we need to be able to orchestrate how they start, when they start, if one depends on the other, which one starts first, etc. So there are several things
that we need to set up for our containers before
the work actually begins. Say I have a microservices
application, and all of these microservices exist in their own containers, but for the application to run, all of these apps need to be up and running simultaneously. Then we would want to
make sure that we have a very repeatable way of having all of them
run at the same time. That is where Docker
Compose comes in. So I'm just going
to go ahead and add Docker support to
our second API, just by
right-clicking and following the wizard; of course, you know how to do that in
Visual Studio Code as well. But now we have a
Dockerfile existing in both of these apps. Now, when I right-click again, I can actually add Container
Orchestration support. When I click that one,
it's going to ask me, okay, what orchestrator
do I want to use? By default I'll have
Docker Compose there, so I'm just going to
go ahead and click OK, and confirm that I want
to use the Linux OS. And this is going to generate a new project with
some new files. The first file here
is a .dockerignore. Similar to our
.gitignore file, there are certain
things that we don't necessarily need to bring over with the
container, so it's just saying
ignore all of those and compile
everything else. We also have the launchSettings.json, which gives us a Docker Compose launch setting here. Now
we can just select Docker Compose and run that one time here in Visual Studio. So with that, we can just spin up whatever the Docker
Compose file says. Now, our Docker
Compose file has two parts: we have the docker-compose file and we
have the override. Our override allows us to specify certain settings that we
want on each container. So here, let me go
through the main file first. Here I'm seeing that
the version was set; I'm not going to change that. Then we have a
section for services, and under services we have our first service,
which is microservice one, the one that we had specified we want to add orchestration
support for. So it's going to say, well, when I build out this container, I'm going to call
the image whatever that generated name
is. And then I'm going to build
using the context, well, wherever the project is, and the Dockerfile, or the setup instructions on
how this container should be generated, exists in the Dockerfile
in our project. So for as many applications
as you have that you need to spin up at once
when you are developing, you can add orchestration
support. I can also add
orchestration support to the second
microservice, and the third, and so on to the
nth. So if I go ahead and do that again and
specify the same settings, then you're going to
see that I now have a second microservice
here in that file. Nice and easy. Now if I jump over to
the Docker Compose override file, it grew, because now we have overrides for two different services; for as
many services as you have, you may have overrides or not. But here what we're
doing is specifying that the environment should
be Development. We can always change
that based on our needs; while we're in Visual
Studio, of course, we definitely
want that environment variable to be Development. We can specify that we
want HTTPS and HTTP ports. There we go. Then we can specify volumes. So a volume, in Docker or
containerization in general, is basically an area
for storage, for persistence. So when the container
is running, we don't want to
lose the data. When we start
up the container, it should remember the
last place it was. This is especially
important for database containers and Redis
cache containers, etc. So when we say volumes here, we're just saying
that we want to store the user secrets, and we also want to store certain other
configurations, alright? So whatever those
configurations are, please persist them even when
a container is not running. So now that we have
a quick tour of what our Docker
Compose and override files really look like, let's go ahead and
see what happens when we run with Docker Compose. So I'm just going to hit Run, and in our browser we have our first microservice running. Now, the first microservice was created with Swagger support, so we see how we can get
to that endpoint. But if I go back over
to Visual Studio, you'll notice in the section
with the containers that I already have the
microservice one container, so I have an
existing one for it. So it's creating a
brand new container based on the Docker Compose, and it is creating a container for the second microservice. And just by clicking
this one time, it was able to launch both of these services, which may or
may not depend on each other. If I look in the Docker UI, you'll see that you have a new Docker Compose
section appearing. And this Docker
Compose section has both containers for
the microservices. It also has the different
ports for each one, so I can easily go ahead
and browse to either one. And I can just stop all
of them using one stop. I also could have,
probably should have, stopped it from Visual
Studio as well. So now we understand how Docker Compose and
orchestration work. There's another
level to this where we would introduce
Kubernetes, which does much more than
just spin up containers, but we'll look at that
briefly later on. In the next lesson, we're
going to jump over to Microsoft Azure
and we're going to create our Container
Registry service, and then look at how we can push our containers to that registry.
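To recap, the generated files end up looking roughly like this. This is a sketch: the service and project names are assumptions standing in for the ones in this demo, not copied from the video:

```yaml
# docker-compose.yml - which services to build, and from where
version: '3.4'

services:
  microservice.one:
    image: ${DOCKER_REGISTRY-}microserviceone
    build:
      context: .
      dockerfile: MicroserviceOne/Dockerfile

  microservice.two:
    image: ${DOCKER_REGISTRY-}microservicetwo
    build:
      context: .
      dockerfile: MicroserviceTwo/Dockerfile
```

```yaml
# docker-compose.override.yml - per-container settings for local development
services:
  microservice.one:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"
      - "443"
    volumes:
      # Persist user secrets outside the container so they survive restarts
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
```

The split is the point: the base file says what to build, while the override layers on environment-specific settings, so one `docker compose up` spins everything up repeatably.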
8. Azure Container Registry: Alright, so let's just
jump back over to our portal, and we're going to go ahead and look
for the Container Registry. So that's a search for container, and we want Registries. And we're going to create
our own Container Registry. As usual, we're
going to go ahead and fill out this form. I'm putting it in the
usual resource group, and I've given it a name. Note that it has very
strict naming rules: hyphens are not allowed, and special
characters are not allowed. So I'm calling mine
azcourseacr, I'm choosing the best
location for me, and I'm using the Basic SKU. This will cost a
little bit of money, so be aware of that. So let's go ahead and
review and create. Once that's done, we can
jump over to our resource. Now we get the
usual dashboard, and it's showing us how
much storage we have and how much of it is used. We even have a public
URL to our registry. Remember that while it's a public URL,
the registry is private; so using Microsoft Azure's
built-in user management, and whatever it is that you're using in
your organization, you can add your own security to your private registry
to make sure that developers can pull, push and patch container
images accordingly. We also have the repositories,
which we can connect our tools to, to be able to
manage, pull and push. If I go over to Access keys, I'll be able to log in
as an admin user. So here's the registry name, that's the login server, and we'll use
the admin user credentials. So now that I have access to the username and password
at the admin level, I can launch my terminal. And I'm going to
run a docker login, followed by the name of
our login server address, which in my
case is azcourseacr.azurecr.io. So I'll go ahead
and press Enter. Then it's going to ask
me for the username, so I'm just going to copy
and paste that part, and then the password, which of course I'm just
going to copy from the portal into the terminal,
and press Enter. And then I'll see here,
login was successful. Now, I just cleared the
screen so we could have a fresh slate. An alternative to using
docker login would be to use the
command az login. Once you do that, it's going to take you
through the OAuth authentication
against Microsoft Azure, and once you're
authenticated, you can then say
az acr login and then specify the name of the registry that
you're connecting to, which would be azcourseacr. You don't have to put
in the full login server, just the name of the registry, and that would log you into the
registry just the same. So those are two ways
that you could actually go ahead and authenticate. Now, let us look at
pushing our image. I'm going to use
the Docker image for our microservice two, only because it has
a shorter name; it's easier to just
use that name. Alright, so
microservice dash two, that is what we're going to be pushing up to our registry. We're going to start off
with this command, docker tag. And then I want to
specify the name and corresponding
version, because we didn't tag it as latest, so I'm just specifying
the version here. And I'm tagging it with an alias relative to the registry address that I want it to be found at. Alright, so what does this mean? It means that I'm taking this local image
and I'm going to push it to this address,
in this repository, and it should be called this. Now, by failing
to specify a tag, it will automatically be
branded as the latest. Of course, if I wanted
to keep the same version, I could just go ahead and
specify the version accordingly. Well, I want to leave the
version off this one. So when I do this
tag, I press Enter. The second step would
be to do a push. So I want to say docker push, and then I'm going to push the now-aliased
container image. Alright? And then this is now going to once again connect
to our registry. It's going to use the
default tag, latest, and it's pushing it to that address: this registry, slash, what we'll call a
repository, and that name. Once this is completed, if I jump back
over to the portal and look in Repositories, I'm now going to see that I
have this as our repository. And when I click it, I'll see that I
have the latest tag here associated with this image. Alright, and then from here I can actually do a
pull if I wanted to, and pull this in from
the registry at will. This is a nice way
to keep your applications
containerized and in a registry so that
developers can come by and just get them
when they're ready. And you see, when you have
a team that rotates and you have complex environment
setups, this is very, very important and easy to use, because then
they can just pull down these images and have the application running on their environment
with minimal setup. So if I copy this and then do a docker pull right afterwards, I'll just relaunch
the terminal and paste. That's a docker pull,
and it's pulling straight from the repository with the
tag latest, and press Enter. Then you can go
over to the list of images in the Docker
Desktop UI, and you will see that image
here, available for use. Alright, so I can just click
that and spin up and run the application, or use my docker run command,
which I prefer to use, to actually start using it. Now, another way that you can see all the images that
you have is to use the command docker images. That will list
out all the images that are currently available, as well as their IDs. So if I wanted to
remove an image, let's say I wanted to remove version 1.0.0 from my computer, I can take that ID value, then I can say docker rmi, which is short for remove
image, and paste in that ID, and then it will go ahead
and delete it for me. Similarly, if I'm
using Docker Desktop, I can always just go to the
image and press the button. So you see, you can
balance between the UI stuff and the commands. But at least now we
know how to push our image to our registry
on Microsoft Azure. Then we can do as
many pulls as we need against that registry. Once again, this is
great for organizing all of our containers, on
different applications, that our development team
may need to access as we go along.
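The tag-and-push flow from this lesson can be sketched as follows. The registry and image names are assumptions standing in for the ones used here, and the docker/az commands are shown as echoes since actually running them needs a live Docker daemon and an authenticated Azure session:

```shell
# Hypothetical names standing in for the demo's registry and image.
REGISTRY="azcourseacr.azurecr.io"   # your ACR login server
IMAGE="microservice-2"
TAG="1.0.1"

# 1. Authenticate, either directly or via the Azure CLI:
#      docker login "$REGISTRY"
#      az acr login --name azcourseacr

# 2. Alias the local image with the registry-qualified name.
TARGET="$REGISTRY/$IMAGE:$TAG"
echo "docker tag $IMAGE:$TAG $TARGET"

# 3. Push it up; any other machine can then pull it back down.
echo "docker push $TARGET"
echo "docker pull $TARGET"
```

Leaving the `:$TAG` off the target name would brand the pushed image as `latest`, which is what `docker run` and `docker pull` look for by default.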
9. Azure Container Instances: So now let us jump back over to our portal and go over to our Container
Registry service. Now we have the concept of actually provisioning the
image as a container. We've done it locally; let us do that in the cloud. Now, there are several ways. The easy way that I'm going to show you
in this lesson is to use Azure
Container Instances. That allows us to
provision containers on the fly, based on an image, and they would
be hosted in the cloud. So if I jump over
to Repositories and look at our published
image here, you see that you have the
option of going to the tag, the specific version
that you want. And then you can see Run instance and
Deploy to web app. We're going to do both. Let's start off
with Run instance. Here, it's going to launch the creation blade for
the Container Instance. I'm just going to call
this microservice dash two, because that's the only
one that we pushed. Microservice dash two; we'll
leave everything else as default, use the same resource group
and the appropriate location. We can specify the
resources that we want for this container, and we can specify if we want a
public IP address; I'm going to say yes. And I'm going to leave it to
broadcast at port 80 for now. And let's just go
ahead and hit Create. And once that creation
process has been completed, we can jump over
to the resource. And from there we have our microservice application provisioned in
Container Instances. Here we can restart, stop, or remove this container. And we will see here that
we have one container running in the Container
Instances service, and we have a public IP address. Now, you can get a
fully qualified name, but you would generally set that value when
you're going through either the portal
wizard setup, through the normal steps where you
have to fill out everything, the Basics and the
Networking tabs, etc., or if you use the command line. For now we took
the easy route, and we can back-pedal and look at
what the wizard looks like. But for now, I'm just
going to browse to the deployed API via this
public IP address. So I open up a
new tab, type in that public IP address,
and just type in that weather forecast endpoint, since that's the only
thing that's there. Let me correct my spelling and try again. There we go. So here we are hitting the deployed API inside
of that container. And that container is running on our Container Instances
service in the cloud. Of course, if we add a fully
qualified domain name, we would be able to
hit it via that address. Now, just to backtrack
a bit and jump over to the wizard for creating
a container instance. If you look at it, you fill
out the basic information; we know all of that already. We do have several SKU options, but the availability
is relative to our region, so you can read up more on
that for your own region if you're not seeing
what you would like. We can choose our image source. We can have the Quickstart
images, where we can just choose one
of these samples, or we can choose
from our registry, which is basically what we did. So we could just look for the appropriate registry, then the appropriate image, and then the appropriate
version that we desire. We can change the size
of the container. And then we can look
at other registries, whether they're public
or private registries, and provide the
credentials accordingly. Now, of course, it's probably better to have everything in Azure and not have some things on Docker Hub
and some things on Azure. Based on your needs, your architecture
and your organization, you want to make the
best decisions possible. But seeing that this is an
Azure developers course, we're going to skew
everything towards keeping everything in
Microsoft Azure. Now, for the networking portion, we can choose a public, private or no IP address. We can also choose that
domain label, that DNS level. Alright? So the DNS name label for the public IP address
will be a part of a fully qualified domain name or FQDN that can be used to
access the container. So that was the
missing piece with the other method that we used to create this ACI or
container for our image. If we jump over to advanced, we see that we have some amount of orchestration oxygen here. So we can set up our
restart policy where we say if maybe there's a
failure or something happens, do I want to restart
the container? I can say if something happens, restarts always means there
will be a periodic Restart. Never means it will
run until I stop it. And then of course, after
putting all of those, we can go ahead
and create notes. You can also specify your environment variables for this particular container
that you are spinning up. So that's whole
Container Instances work in our Azure
Container Instances. Now of course it did
see an option where we could have deployed
it to a web app. So when we come back,
we're going to look at a zero Web App Service
with containers.
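Before moving on, the portal settings from this lesson map roughly onto Azure CLI flags. Here is a minimal sketch; the resource group, registry, and image names are placeholders I made up for illustration, and running it requires a live Azure subscription with an image already pushed to the registry:

```shell
# Sketch: create an ACI container with a restart policy, a DNS name label,
# and environment variables. All names below are hypothetical placeholders.
az container create \
  --resource-group my-resource-group \
  --name microservice-two \
  --image myregistry.azurecr.io/microservice-two:latest \
  --registry-login-server myregistry.azurecr.io \
  --registry-username myregistry \
  --registry-password "<acr-password>" \
  --cpu 1 --memory 1.5 \
  --ports 80 \
  --ip-address Public \
  --dns-name-label microservice-two-demo \
  --restart-policy OnFailure \
  --environment-variables ASPNETCORE_ENVIRONMENT=Production

# The DNS name label becomes part of the FQDN, which looks like:
#   microservice-two-demo.<region>.azurecontainer.io
```

Treat this as a reference for how the portal fields correspond to flags rather than something to paste as-is.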
10. Publish to Azure Container Instances: In this lesson, we're going to explore the nuances of creating a containerized Azure Web App Service. We're not actually going to create it; we're just going to look at some of the different settings that we need to be aware of.

So when I click Create, we go to the Basics. Of course, we know we fill out the basic information with our resource group. I'm going to give this a specific name; I'm just going to say microservice-two-dash-something unique that I'm sure nobody else on the Internet has. And then, how are we going to publish? So before now, we've looked at Code. What if I go to Docker Container? So if I click Docker Container, notice it's going to ask me for the operating system; so far we've been using Linux, so no need to change that. I'm going to use the best region based on my needs.

And then I'm going to specify a new service plan. Notice I cannot use the Code service plan, because the hosting model is different. When it's Code, we can reuse the existing service plan that we had from earlier in this course; because I'm choosing Docker Container, however, I have to create a new one. And I'm not going to go with the premium pricing; I can still get the free pricing, so I'll just choose that one.

Then if we go over to the Docker tab, we have several options here. I can choose a single container versus Docker Compose, and notice that's still in preview. If I choose single container, that allows me to specify the image source: do I want to use it from Docker Hub, or some other private registry? So, Docker Hub or Azure Container Registry. Those are not the only two registries, and sometimes you might end up provisioning your own registry in your own environment, right? So those are all options. But if I choose my Azure Container Registry, I can then go ahead and select the registry, select the image and its tag, and I can set a startup command if I wish.

Jumping over to Networking, pretty much everything else remains the same, whatever you're already accustomed to or familiar with when it comes to Azure web apps. Using the Docker workload for the Azure Web App Service would be very similar, barring the fact that now we're using Docker containers. And of course, when it comes to updating the tags and continuous integration and continuous deployment, all of those are options that are available to us once we provision this.

However, in the interest of time and cost, I won't proceed. Of course, you can go ahead and proceed with the free tier, but do remember that the Container Registry and Container Instances incur costs. So you can go ahead and experiment and get whatever information you need accordingly. I will, however, stop here.

So when we come back, we're going to take a brief look at Kubernetes, just some theory. Once again, this is not an in-depth course on Docker and orchestration; there's a whole lot to learn. I just wanted to give you an appreciation of how containers work and how we can put our images on Azure and spool up instances of applications based on those images. So when we come back, we're going to look at some theory surrounding Kubernetes and get an understanding of how it works in this current context.
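For reference, the portal walk-through above can be approximated with the Azure CLI as well. This is a rough sketch assuming a Linux App Service plan and an image already in ACR; every name here is a hypothetical placeholder:

```shell
# Sketch: create a Linux App Service plan, then a web app that runs a
# container image from ACR. All names are placeholders for illustration.
az appservice plan create \
  --resource-group my-resource-group \
  --name container-plan \
  --is-linux \
  --sku F1

az webapp create \
  --resource-group my-resource-group \
  --plan container-plan \
  --name microservice-two-demo \
  --deployment-container-image-name myregistry.azurecr.io/microservice-two:latest
```

Note the plan is created with `--is-linux`, mirroring the point in the lesson that a container app cannot reuse a Code hosting plan.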
11. Azure Kubernetes Service Overview: All right guys, so in this lesson we're just going to run through some of the basics surrounding Kubernetes, just some theory. We're not going to get deep into Kubernetes; that is a whole different ball game. So we're not going to get into any details in this course, but I do want you to appreciate why you would usually hear of Kubernetes whenever you hear of Docker and containerization.

Kubernetes offers a reliable scheduling and orchestration mechanism for fault-tolerant application workloads. That's a whole bunch of big words to say that it offers us the ability to manage how containers are provisioned, deprovisioned, and restarted based on different metrics. So we can actually use Kubernetes to set up different rules to manage, or orchestrate, our containers. Once again, for cloud-native or microservices-based applications this is very important, because there are several moving parts that need to be healthy and probably need to scale individually. So using Kubernetes, we can orchestrate all of that. We can tell one of the services that we need three containers during the day, another one needs two, and another one never needs to scale. All of those things are possible with Kubernetes.

To do this, we use a declarative approach, and it allows us to use this declarative approach to handle deployments. And it is backed by a robust set of APIs for our management needs. It provides container management for organizing, adding, removing, or updating up to several containers at a time.

Now, when we use Kubernetes, we can abstract useful tasks like self-healing, scaling, network management, storage, container updates, and secret management. The reason this list is significant is that we saw some of these challenges when we were containerizing our app. Alright? We saw that we had to be very mindful of which IP address, or rather which port number, got assigned to each container. Well, we didn't do it in these exercises, but in a microservices architecture, sometimes services need to talk to each other, so we definitely need to know how they're going to network with each other. How are we going to handle storage? How are we going to update the container when that container image gets updated? How do we restart the container to handle that new version of the image? And on failure, if something fails, do we restart it? We saw that we can do some of that using Container Instances alone, and of course, if we sit down and monitor it ourselves, we can probably do it. But why do it ourselves when we can use Kubernetes to automate it?

Now, this is a diagram, once again, from the Microsoft documentation. It's a quick overview of what a Kubernetes cluster (you may also see the experts call it a "K8s" cluster) looks like. So from here we have the control plane; this is where all of our orchestration logic would sit. And then we have the kubelets on our node instances. Each one of these is seen as a node, and it has a runtime for the containers as well as a proxy to communicate with the control plane, and then as many containers as there are. Kubernetes, once again, will orchestrate and manage all of them.

Now, of course, if we have Kubernetes, we're going to have an Azure Kubernetes Service, or AKS for short. This offers us a quick way to develop and deploy our containerized apps in Azure. We saw how quick and easy it was with ACR and ACI; well, it's even easier and more robust if we put AKS in the mix. And it gives us the full power of Kubernetes orchestration backed by Microsoft Azure infrastructure. It's a pay-as-you-go service, and once again, scalability and all of those things are entwined in this. The service is fully managed, so we don't have to worry about any of the underlying software and hardware, and it offers more orchestration and management features than ACI, or Container Instances, does. Let us think of it like a management service, or a management service extension, for our ACI service. So once again, all of these services can come together to help us deliver a containerized application that is backed by proper orchestration and scalability.

So now that we have an appreciation of Kubernetes, at least from a theoretical standpoint, let's jump back over to Azure and clean up our resources, and look at how we can free up some space and save some money.
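To make the declarative idea a bit more concrete, here is a rough sketch of what provisioning an AKS cluster and declaring a desired number of replicas might look like with the Azure CLI and kubectl. We don't do this in the course, and the cluster, registry, and image names are hypothetical placeholders:

```shell
# Sketch: provision a small AKS cluster, connect kubectl to it, and
# declaratively ask for three replicas of an image from ACR.
# All names below are hypothetical placeholders.
az aks create \
  --resource-group my-resource-group \
  --name my-aks-cluster \
  --node-count 2 \
  --attach-acr myregistry \
  --generate-ssh-keys

# Pull the cluster credentials into the local kubeconfig
az aks get-credentials \
  --resource-group my-resource-group \
  --name my-aks-cluster

# Declare the desired state: three replicas of the container.
# Kubernetes restarts or reschedules containers to keep this true.
kubectl create deployment microservice-two \
  --image=myregistry.azurecr.io/microservice-two:latest \
  --replicas=3

kubectl get pods
```

The key contrast with ACI is the last part: we state how many containers we want, and the control plane continuously works to keep that true, rather than us restarting containers ourselves.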
12. Resource Clean-up: So as we near the end of this section, we want to make sure that we clean up our resources and don't spend more money than we need to on our Azure subscription. Even if you weren't able to follow along with some of the paid plans, at least you understand the concepts, and we'll just look at how we can remove the resources.

So the easiest way to remove our resources is to go to the resource, of course, and click Delete. Now, if you have other resources in the same resource group, you can always just delete the resource group, and that would destroy all of the resources accordingly. Now, I have some important things in this resource group, so I'm not willing to take that route. Instead, I'm going to delete the resources individually. So to delete our Container Registry, I'll just hit Delete. Are you sure you want to delete? Okay, and then that will trigger that deletion operation. Similarly, I'm going to go over to the Container Instance for the microservice-two app, and I'm going to go ahead and delete that as well.

I'm also going to revisit an earlier operation where we deleted the images. Images do take up space. If you type in the command docker images, you can see all the images that you have on your computer and their respective sizes per image. So if you don't need an image, you can just remove it. Alright? So here you see that the SQL Server image is 1.33 GB, and then some of the images from the apps that we were working on have a combined total that's probably over a gigabyte.

So to remove an image, you can simply double-click on that image ID and copy it. Then on the command line, you type docker rmi (short for remove image), paste that ID, press Enter, and then it will go ahead and remove it. You may get an error, though. Let's see what this response is: an error response from the daemon, a conflict, unable to delete this image; it must be forced as it is being used. So if you have to force it, let's retype that command as docker rmi -f, and then that will force the deletion.

Alright, so that's another way that you can clean up some of the resources that you would have spun up during this section. So let's wrap up this section of the course.
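For reference, the cleanup steps from this lesson look roughly like this on the command line. The resource group, registry, and instance names are placeholders for illustration:

```shell
# Sketch: delete the Azure resources from this section, then prune
# local Docker images. All resource names are hypothetical placeholders.
az acr delete \
  --resource-group my-resource-group \
  --name myregistry

az container delete \
  --resource-group my-resource-group \
  --name microservice-two

# List local images with their sizes
docker images

# Remove a single image by ID; add -f to force removal if the image
# is still referenced by a container
docker rmi <image-id>
docker rmi -f <image-id>
```

The Azure commands achieve the same result as clicking Delete in the portal, and the Docker commands mirror the local cleanup shown in the lesson.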
13. Conclusion: So in this section we learned about containerization and Docker. And we realized that Docker is a company that provides technology and has set a standard for containerization in the entire IT industry. While exploring containerization and Docker, we looked at how we can containerize a .NET Core project. We looked at using the Dockerfile, we looked at provisioning additional supporting resources, and we even did some amount of orchestration using Docker Compose.

We also looked at how we can deploy our apps to Azure Container Instances, and how we can upload our images, our container images, to a registry. In this case, we focused on Azure Container Registry, but if you're using Docker Desktop, it is very easy to push to Docker Hub if you want to use that as your registry. We also did a quick review of Kubernetes and container orchestration, and what all of that means.

So thanks for sticking through this section with me. I'll see you in the next module.