Transcripts
1. DevOps Introduction: Welcome to this Skillshare
class on DevOps for beginners. If you're curious
about what DevOps is, the tools and
technologies it involves, and how it's shaping
the tech industry, you've landed in
the perfect place. In this class, we will cover
the fundamentals of DevOps, what it is and why it matters, key learning objectives like understanding automation, CI/CD, and infrastructure as code, a clear roadmap
to guide you step by step on your DevOps
learning journey, and an introduction to the tools and technologies that power DevOps, such as Jenkins, Docker,
Kubernetes, and more, giving you a solid understanding of what's used in the industry. This class is designed
for absolute beginners, whether you're a student,
a developer looking to grow, or completely
new to tech. By the end of this class, you will understand the
essential concepts of DevOps, have insights into the
tools and tech, and be equipped with the
roadmap to continue learning and building
real world skills. So let's dive in and kickstart
your DevOps journey.
2. 0101 The Inception of the DevOps Movement: DevOps stands for Dev, as in developers, and
Ops, as in IT operations. Developers tend to have the following set of
responsibilities. They code the
application depending on the requirements or the user
stories for that sprint. They're going to then
build the project to create an artifact
or an executable, which can then be deployed
onto the server or on the development environment
to test their changes. They're also
responsible for taking care of testing the
entire application, making sure that
all the features are working as expected, and that none of
the features were broken because of the
recently introduced changes. If you have a separate
quality assurance team, they're going to take care of thoroughly testing
the application. On the other hand, IT Ops team would have a different
set of responsibilities. They are responsible
for deploying the application
on the staging or the production environment
so that the application can now be used by the
customer or the end user. They're also responsible
for maintaining the infrastructure required
to deploy the application. They're going to
make sure that there are adequate servers and resources to help run the
application seamlessly. They also constantly monitor the application's
performance, health, resource utilization, warnings, errors, et cetera. Now, deploying the application
is not an easy task. It's not that you
copy the artifact onto your server and
it magically works. You have to take care
of the dependencies, libraries, configurations that will help run
the application. Oftentimes Ops team are not
familiar with these steps, and so they take help
from developers. So developers would provide
a deployment guide with clear step by step instructions on how to deploy
the application. Ostam would follow those steps, deploy the application,
and the application would be up and running
all well and good. Until one night, the application crashes or some of the
features would break. That leads to the
infamous blame game, where the Ops team would
say that they have followed all the instructions
in the deployment guide, and if application is
still not working, then it is up to developers. On the other hand, developers are going to throw
back the blame to OpsTeam saying that they have followed the exact steps
in the deployment guide, and it worked for them on
their development enrollment. So the blame would go back and forth, waste a lot of time. But the truth is the
mistake could be anybody's. It could be the case
that developers missed a step in the
deployment guide, or it could be the case that the Ops team haven't followed
the instructions well, or maybe it has something to do with the resource allocation, or maybe the Ops team
might have installed a different version of
software which is unsupported, or actually, it can also
be a bug in the code. Maybe developers
have made a mistake. For example, developers might
have written bad code, which constantly consumes
the memory to a point that the system would go out of memory and the
application crashes. That may not happen on the development environment,
because they do not run the application long enough for it to crash,
since they keep redeploying the application as and when they introduce changes. However, this might happen
on the production environment, where the application would be constantly running for
a long period of time. Being a developer
myself in the past, it's actually very frustrating
to hear something like that from Ops team because
generally speaking, developers are already busy implementing the new features. They have new commitments, they're answerable
to their boss, and they're not
willing to dig through the old logs which might impact their current
project commitments. So they try to avoid
these problems by just simply blaming
the other party. And same is done by
the Ops team as well. But this blame game
wastes a lot of time, and a lot of issues would be left unresolved for a
long period of time. And so even the release
would get delayed for weeks or even
sometimes for months, which will lead to
customer frustration, and you can guess the
consequences of it. It means loss of revenue and
reputation for the company. This is the trigger and the starting point for
the DevOps movement.
3. 0102 What is DevOps: So what is DevOps? Anything we can do to improve
the team's efficiency, reduce the odds of error
during production, improve the communication
between teams, encourage collaboration between
cross functional teams, reduce the release
time, or improve the overall process
is what DevOps is. And all this can be accomplished with a combination of mindset, cultural philosophies
and practices, tools, and automation. And if you ask me, DevOps
has evolved so much nowadays that it has less to do with mindset and
cultural philosophies, but more to do with
tools and automation. What initially
started off as a mindset shift has now evolved into
tools and automation, which we're going to
talk about right next.
4. 0103 Mindset Philosophies and Practices: Let's talk about the
mindset aspect of DevOps. Before DevOps, there used to be a psychological barrier between developers and IT operations, wherein developers are more focused on writing the code and are not bothered
about the environment on which the application
would be running. The Ops team, on the other hand, never bothered to understand how the application works, its dependencies,
or anything related to the developers' job. With DevOps, this
barrier would be broken, and both the teams
would come to a common understanding that
they actually work together. They now start to believe that
they're actually one team. If there is anybody to be
blamed in case of an issue, then there is no one to be
blamed except themselves. They're going to
resolve the issues together as one entity. In fact, in certain
organizations, there's no separate developer
and operations team. They might call
themselves developers, but they actually do the job of both developers as
well as IT operations. Now, this is the mindset
aspect of DevOps. Now, let's talk about some of the cultural philosophies
and practices in DevOps. DevOps encourages open communication and
collaboration between teams, knowledge sharing between
the team members. So developers might have a
knowledge sharing session with the Ops team and vice versa, the Ops team might have a knowledge sharing
session with Dev team. Regular meetings with cross functional teams
like the testing team, security team, et cetera. In fact, before DevOps, the Ops team never bothered to attend any of the
developers' meetings. But now they're
actually involved in the entire
product life cycle, right from requirement
gathering. In fact, they also attend the meeting for
requirement discussions. Broadening their skill set. So developers might gain some of the skills of operations
and vice versa, the Ops team gains some of
the skills of developers. Use of collaborative tools, which we're going to talk
about in just a bit. One thing I should
mention is that DevOps is not practiced the same way
in every organization. Depending on the scope of the project and kind of
technologies they're using, DevOps practices and
tools might differ.
5. 0104 DevOps Environments and Tools: Developers would be
needing a Dev environment to test their changes or for
their day to day activities. Similarly, the testing
team would be needing a testing environment to
thoroughly test the application. And likewise, the IT
Ops team would also need a staging or a
production environment to deploy the application. In order to get all
these environments, we would probably
be needing servers. Earlier, we used
to procure servers and used to maintain
them on our own. But nowadays, we use
cloud service providers like AWS, Azure, and GCP. They provide infrastructure
as a service, meaning that instead of us
maintaining the servers, they're going to do that for us, and they're going to
provide resources as services, like RAM, CPU, hard disk, et cetera. The advantage of using a
cloud service provider is that it's more scalable,
it's more secure, it's also more reliable and
cost effective compared to purchasing physical servers and maintaining them on our own. Creating an infrastructure
is not just about launching a bunch
of virtual machines. We also need to take care of setting up the network
configurations, attaching storage
volumes, and even configuring other
necessary services like databases, et cetera. So creating the infrastructure consistently across
all the teams without making a mistake is actually a very challenging
and time consuming task. To solve this problem,
we have Terraform. Terraform offers
infrastructure as code, which means it will allow you to create infrastructure
by writing code. For example, you can write
code saying that you need N number of servers
with so and so RAM and so and so hard disk, and it's going to do just that. It allows us to create similar
and consistent environments across all the teams. You can even reproduce
the same environment by running the script again. Once you have the
infrastructure or the servers, the next thing we want to do is to install an operating system. Now, all these cloud service
providers provide us a ready made VM image that
comes with an operating system. So while launching the virtual
machines using Terraform, we can choose that image so
that the infrastructure or the servers would be created along with the operating system. Or alternatively, you can also use a configuration
management tool like Ansible in order to install the operating
system on the servers. We're going to talk about
Ansible in just a bit.
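To make the infrastructure as code idea concrete, here is a minimal, hypothetical Terraform sketch. The provider, region, AMI ID, and instance type are placeholder assumptions for illustration, not values from this class:

```hcl
# Hypothetical sketch: ask AWS for three identical servers.
# The region, AMI ID, and instance type below are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  count         = 3                        # N number of servers
  ami           = "ami-0123456789abcdef0"  # VM image that already ships with an operating system
  instance_type = "t2.micro"               # determines the RAM and CPU
  tags = {
    Name = "app-server-${count.index}"
  }
}
```

Running `terraform apply` on a script like this creates the servers, and running the same script again for another team reproduces the same environment.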
and the operating system. Technically, we can now deploy the application and
get it up and running. As a Dev team member, you can easily do that because
you're a technical person. You know all the
libraries, dependencies, and configurations required to
help run the application. However, if you
talk about some of the non technical
individuals like from testing team or
from IT Ops team, they don't know how to
deploy the application, and so they rely on developers
to get the instructions. And we've already talked about the consequences
of this approach. If the Dev team provides
a deployment guide, then it can happen
that there are not enough instructions in
the deployment guide. Or it might happen that the testing team or the
Ops team haven't implemented those instructions exactly how they're
supposed to be. And that would lead
to application not functioning as expected, and it eventually leads to the blame game delay of
release, so on and so forth. Hence, to solve this problem, we now have Docker. So this time, instead of
deploying the application directly on operating system by installing software,
configurations, et cetera, or providing the deployment guide
to the other teams, we're going to now create
a so called Docker image. A Docker image is essentially a
combination of the code, runtime libraries,
and configurations. It's actually a self
contained package that includes all the
necessary dependencies, libraries, and files required
to run the application. And using these Docker images, we can spin up containers
in different environments. Now in order to create
containers out of these images, we need a platform
that supports it, and that's where the Docker platform
would come into the picture. We're going to install
the Docker platform on top of the operating system, and now we would be able to create containers out
of the Docker images. A Docker container
is essentially a runtime instance
of a Docker image. If you're familiar with
VMware technology, then a Docker image is the
equivalent of a VM snapshot, and a Docker container
is the equivalent of a running version
of the VM snapshot. But unlike a virtual machine, Docker images are lightweight. They don't come along
with an operating system. Instead, they would use the
host operating system. Anyway, that's going to be
a topic of another lecture. Now, in general, we
may not just be having one Docker image that would
have the entire application. If you're following a microservice
architecture, wherein your application would be split up into multiple
smaller modules, then you might end up having
hundreds of Docker images. Maintaining those Docker images manually is a difficult task, and that's why we have
artifact repository solutions like Nexus or Docker Hub. Nexus acts as a
centralized repository where development
teams can store, share or retrieve the software artifacts
like Docker images. It basically enables easy sharing, collaboration between teams, and even version
control of artifacts within the development teams
or across the organization. And various teams
in the organization are going to pick the darker
images from these platforms, and they're going to spin up containers out of those images. So you might end up having
hundreds of containers running. But when you have
so many containers running, it becomes really difficult
to manage them manually, and that's where Kubernetes
comes into the picture. Basically, it
provides a platform for container orchestration, making it very easy
to deploy, scale, or manage the Docker containers
from a single dashboard. Without Kubernetes, it's really impossible to manage
so many containers. So, so far, we have got a consistent environment
across all the teams. We've got the infrastructure, operating system, and we've even got the application
up and running. Now, what if I want to
install a software or update a software or change a configuration in
these environments? Well, we cannot log into every
system and do it manually. It's very time
consuming and error prone and would be
very inconsistent. So to address this very problem, we have tools like
Ansible, Chef, and Puppet. With Ansible, you can automate various tasks like
deploying the application, installing a software, changing a configuration,
et cetera. While Docker allows
you to create an isolated runtime environment with application
code, dependencies, et cetera, Ansible, on
the other hand, works on top of an existing system to perform various tasks
on the remote system. We can use Ansible to install the operating system and
even the Docker platform. These are some of
the tools used in DevOps to create the environment. We've got a bunch of
other tools as well, and we're going to talk
about them when we talk about the phases of DevOps, which is coming right next.
6. 0105 DevOps Phases and their Tools: Let's talk about various
phases in DevOps and also understand the different kinds of tools used in each
one of these phases. First, we have the
planning phase, and this basically involves defining the project
requirements, setting up the goals, gauging
the individual capacity, and deciding on who will
do what, et cetera. And here we might
use these tools. We need a project
tracking tool like Jira, for instance, which is one of the most popular tools
to track the project. Here, we might track user
stories, bug fixes, et cetera. And then we have collaborative
tools like Slack, Confluence, Google
Docs, Microsoft Teams, et cetera for messaging between the employees or to
manage requirements, share knowledge, collaborate
with cross teams, et cetera. Confluence is like a Wikipedia
for an organization. Next, we have the coding phase, and this is where obviously developers would write
and review the code. They'll ensure its quality and adherence to the
coding standards. And here, they're going to use version control system like Git and code repositories
like GitHub, Bitbucket, GitLab, et cetera. And they typically
tend to follow test driven
development by writing J units or performing any static code analysis
using tools like Sonar cube. Static code analysis would mean that we would analyze the
source code for quality, reliability, and
security without having to execute the code. And obviously not
to mention that in order for developers to work on their day to day activities, they're going to be
needing an environment, and we can get that environment using all the technologies
we've talked about earlier. Next, we have the build phase, and this is where we might
use some CI/CD tools. We're going to talk about CI/CD in more detail in an
upcoming lecture. But one of the popular
tools for CI/CD is Jenkins. We also have other
tools like CircleCI or GitLab CI/CD that
do the exact same job. As part of the build phase, Jenkins would
actually use some of the additional tools
like Maven or Gradle to build the project so that we get an artifact or a
deployable artifact. Jenkins will also create a Docker image using
the Docker CLI. With that, obviously, we need a place to store the images, and that's where
we have Nexus and Docker Hub to host and
maintain the artifacts. Next we have the testing phase. This is where we would do the thorough testing
of the application. So here we do
regression testing, making sure new features didn't break any of
the existing ones. We do acceptance testing. We also do security and
vulnerability analysis. We do performance testing, configuration
testing, et cetera. Selenium is one of
the popular tools to automate the process of
testing the application. Apache JMeter is one of the popular tools to perform
performance testing. And once again, in
order to test things, we need an environment
for testing, and that's where once
again, we're going to see all these technologies
coming into picture. We've already talked about
all of these earlier. Next, we have the deploy phase, which is where
we're going to use Jenkins to automate the
process of deploying the artifact or specifically
the Docker image onto the production environment. So once everything
is tested and making sure that everything is
working as expected, Jenkins will automatically
pick the artifact from an artifact repository like
Nexus or Docker hub, and it's going to deploy it onto the staging or the
production environment. Once again, we're going to see all the technologies that will help us create
the environment. Next, we have the operate phase. So once the software
is deployed, the operations team would do their job to manage
the infrastructure, troubleshoot any issues,
monitor the application. Their main focus is to maintain a stable and
reliable environment, making sure of
high availability, performance, and security. And next comes the
monitoring phase, which involves collecting and analyzing the metrics, logs, tracking the applications
performance, for health and resource
utilization, et cetera. Basically, operations team
will monitor for everything, and if they find any issues, they will escalate it
to the relevant teams. Nagios and Prometheus are some of the popular
tools for monitoring, and Dynatrace and
AppDynamics are some of the tools for
performance monitoring. Next comes the learn phase. And here we basically
collect feedback from users, stakeholders and
monitoring tools to suggest any
improvements, bug fixes, or even introduce
new features with the goal to refine and
enhance the software, and the entire cycle now repeats right from
the planning phase. It's a continuous process and new releases would keep
happening forever, at least until the point
the project gets abandoned. And that is the reason why
DevOps logo looks like an infinity symbol because this process would keep
on happening forever, and software will evolve and
improve every single time. One important point
I want to make here, which is also the thing that
I've mentioned earlier is that DevOps is not practiced the same way
in every organization. The tools and
methodologies would differ depending on the project. But what we've talked so
far are the popular ones. If you're not sure
what to learn, then learn these popular ones,
the ones I've mentioned. Next, we're going to talk about the most emphasized
word in DevOps. Continuous. I'll see you next.
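To tie the build, test, and deploy phases together, here is a minimal, hypothetical Jenkins declarative pipeline sketch. The stage names and shell commands are placeholder assumptions for illustration, not a prescribed setup:

```groovy
// Hypothetical sketch of a Jenkins declarative pipeline; commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn package'                      // build phase: create the artifact with Maven
                sh 'docker build -t myapp:latest .'   // package it as a Docker image
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'                         // testing phase: run the automated tests
            }
        }
        stage('Publish') {
            steps {
                sh 'docker push myrepo/myapp:latest'  // store the image in Nexus or Docker Hub
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl rollout restart deployment/myapp'  // deploy phase: roll out via Kubernetes
            }
        }
    }
}
```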
7. 0106 Continuous Integration Continuous Delivery: Traditionally, developers
used to work on a feature or bug fix and
then push those changes onto a centralized code
repository like GitHub, Bitbucket, or GitLab. I'm assuming that you're not
familiar with these tools, so I'm not going to use any of the terminologies
associated with them. But essentially, developers would contribute their changes, and only at the end of the development life cycle would all those code
changes be merged. In other words, all these
code changes would be merged or integrated to make them
part of the main code base. Once it's done, assuming
there are no conflicts, we would go to the next phase where the testing team
would test all the changes, do thorough testing
of the application on the testing environment, and assuming that
everything went well and that the testing team
haven't found any issues, which is very unlikely,
we're going to move on to deploy the application onto the staging or the
production environment. However, this is the
best case scenario, but in all likelihood, you're going to see the
following issues. You're going to see
integration issues because when you merge so many changes
together in one go, the chances are
that you're going to come across conflicts, meaning more than one developer might have worked on the
same piece of code, and those conflicts need to be resolved before proceeding. Or it might happen
that changes done by one developer might have negative impact on changes
done by another developer. It's also really hard
to trace the issues because when you integrate
so many changes together, in case if you find an issue, it's really hard to know
which particular change has actually caused the issue. So resolving the
conflicts and identifying the root cause of issues
becomes more challenging, which might lead to
potentially time consuming and error prone
integration efforts. Worse, if the testing team finds
any critical issue, then it might even result
in delay of release, and even the feedback
cycle is longer, developers will not get
to know of conflicts or any bugs until the end phase
of the development cycle, and this might lead to
longer resolution times and hinder the ability to address
issues in a timely manner. In DevOps, though, we're going to follow a different approach, and here is how it goes. So after the planning phase, developers would start coding. Every developer would
contribute their changes onto the centralized repository.
Then we have Jenkins, a CI/CD tool, which would constantly monitor for new
commits on the repository. Upon identifying a new commit, Jenkins will initiate
a build process to create the artifacts
or the Docker image, which would then
automatically be deployed onto the
testing environment. Jenkins will also trigger automated tests to thoroughly test the
application and the changes. And once it is
done, Jenkins will notify the reviewers to
review the code changes. The reviewers will
examine the code changes, provide necessary feedback,
suggest improvements, et cetera, and developers would address the feedback received during the code review. They make all the
necessary modifications, add clarifications, or discuss alternative
approaches, and this iterative
process continues until the code changes meet the
required quality standards. Once everything
is well and good, once the reviewers
approve the code, we're going to go to
the next phase where we're going to merge all those
changes or in other words, we're going to integrate
all those changes onto the main code base. And before actually
merging the changes, we might have a task in
Jenkins to perform any of the additional tests
or validations before merging the changes. Upon merging the changes or integrating the changes
onto the main code base, we might initiate the process of deploying these changes on the production or the
staging environment. And once again, Jenkins is
going to do the job for us. It's going to build the project, deploy the necessary artifacts on the staging or the
production environment, and get the application
up and running. Now the process of contributing the code, running
automated tests, and eventually
merging the changes or integrating the changes onto the main code base is what we call continuous
integration. Or more specifically,
we also have continuous development
where the developers would continuously
make improvements, introduce new changes, and
the process of continuously testing the changes with automated tests is
continuous testing. What we used to do manually earlier is now a bunch
of automated tests. And the process of delivering
the changes onto staging or the production environment is what is called
continuous delivery. We also have an often confused term called continuous
deployment. The difference between
continuous delivery and continuous deployment
is very simple. When we manually intervene to allow Jenkins to
pick the code from the main code base and eventually deploy the application on
the production environment, that's called
continuous delivery. If we automate this process, meaning that right after we integrate the changes
to the main code base, Jenkins would
automatically pick up the code and deploy it onto
the production environment, that's called
continuous deployment. And once the application
is deployed, the Ops team would do
continuous monitoring, and the whole process repeats as part of the
DevOps life cycle. So unlike traditional approach, where we improve the
software in one large batch, in case of DevOps, updates are made
continuously piece by piece, enabling software code
to be delivered to customers as soon as it
is completed and tested. Obviously, since
we're not integrating huge chunk of changes in one go, we're not going to have
as many conflicts, and since the fact
that tests are performed almost immediately
after making a commet, developers would get
instant feedback if there are any
issues with the code, and so they have ample
amount of time to address the issue without reaching the end of the
development cycle. So every time somebody
makes a commit, we're going to repeat
the entire process again. Because everything is
pretty much automated, this is going to be very
quick. Jenkins is at the center of the entire process and is connecting all
the dots together. Next, we're going to
have a quick summary on what we have
accomplished with DevOps.
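Before moving on, the continuous delivery versus continuous deployment distinction can be seen in a single pipeline step. In this hypothetical Jenkins declarative snippet, the manual `input` gate is what makes the pipeline continuous delivery; removing the gate turns it into continuous deployment (the stage names and deploy command are placeholders):

```groovy
// Hypothetical sketch: the 'input' step is the manual gate of continuous delivery.
stage('Approve release') {
    steps {
        // A human must confirm before the production deployment runs.
        // Deleting this stage would make the pipeline continuous deployment.
        input message: 'Deploy to production?'
    }
}
stage('Deploy to production') {
    steps {
        sh './deploy.sh production'   // placeholder deployment command
    }
}
```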
8. 0107 DevOps Advantages: Let's talk about some of
the advantages of DevOps. Improved collaboration
and improved culture, obviously with
shifting mindset and incorporating certain cultural philosophies and practices, along with collaborative tools, we have significantly
improved the collaboration between cross functional teams
and the overall culture. Faster innovation and issue
resolution with a lot of emphasis on automation and with continuous integration
and continuous delivery, we have significantly
reduced the time it takes to
release the software, and due to these
faster iterations, we can innovate faster and also resolve issues
in timely manner. More stable operating
environments. With the use of various
tools and technologies, we were able to create stable environments across
all the teams. So if the application
works in one environment, chances are that it
will also work in another environment without
causing any trouble. Less scope for
issues and downtime. By consistently performing frequent and regular
automated tests and also being able to
environments, we can reduce odds
of occurrence of issues or downtime
during production. Better operational
efficiency. By using various tools like Ansible for
configuration management, and by using
monitoring tools like Nagios and with better collaboration
with other team members, we have improved the overall
operational efficiency as well. Cost effective. Obviously with a lot of
emphasis on automation, what used to be done
manually is now automated, and also by using
Cloud service providers, we're going to
significantly cut the cost. And that's why if
you're coming from a testing or IT Ops background, you need to seriously consider upgrading your skills to DevOps. Customer satisfaction. Obviously, when you follow
DevOps methodologies, given all its
benefits, ultimately, it's going to result in
better customer satisfaction because they don't have to
wait for releases as long, and obviously that translates to better revenue and
better reputation. It will also help you stay ahead of your competitors in the market. I hope you have understood
the overall picture of DevOps. I'll see you next.