Transcripts
1. Welcome: Welcome to my course, Elasticsearch 101, a beginner's guide to Elastic Stack. This is a hands-on course with a focus on deployment and management of Elastic Stack. My name is Ricardo and I'm a Senior Solutions Architect with over 15 years of experience in IT. I'm currently working at a multinational company, where I have designed and deployed Elastic Stack in a production environment. I have taught more than 3,400 students on Udemy and have an average rating of 4.4 across all my courses. This is a hands-on course that focuses on practical learning. In this course, you will deploy Elastic Stack and then add some data to your cluster using two different methods, Logstash and Kibana integrations. You will also transform that data in order to extract some meaning out of it. Finally, you will deploy Elastic Agent, which easily extracts metrics and log data from your application servers, and store and analyze that data in Elasticsearch. So please join me on this journey to learn Elastic Stack.
2. Introduction to elasticsearch 101: Hey guys, welcome to my beginner's course on Elastic Stack. I'm delighted that you gave me the opportunity to guide you on the path to becoming an Elasticsearch professional. This course adopts a hands-on approach towards learning and is focused on DevOps professionals who will be responsible for deploying and managing Elasticsearch clusters, IT support staff who will be responsible for supporting Elastic Stack deployments, and it might also benefit software developers who need an understanding of how Elasticsearch clusters are deployed. Please note that this is a beginner's course, so I will avoid deep diving into any of the topics. In this course, we will start with some basics, such as the different types of data that you will encounter in an enterprise environment. Then we will have a look at the difference between relational and non-relational databases, before moving on to an understanding of JSON and REST APIs. After you have this prerequisite knowledge, which is essential if you want to work on Elasticsearch, we will look at Elasticsearch and Elastic Stack from a bird's-eye view, and then it is time to get your hands dirty. First, we will set up our lab environment, and then we will look at how you can provision Elasticsearch in different ways. We'll follow this up by adding some data into our cluster through Kibana integrations. Then I will help you create your first Kibana visualization, and then we will set up some dashboards. After that, I will help you organize your dashboards into different categories, such as marketing, operations, etc., by using Kibana spaces. Next, I will give you an overview of user management in Elasticsearch, and we will look at Elasticsearch built-in users, built-in roles, and how you can use users and roles to create role-based access controls for your Elasticsearch cluster. Next, we will look at Logstash, which is an essential component of Elastic Stack if you want to transform your data, and then we will ingest some CSV data by using Logstash pipelines. In the next section, I will help you understand Beats agents and the various use cases that you can solve by using different Beats. For instance, you need to use Metricbeat to ingest metric data into your Elasticsearch cluster, and you need to use Filebeat to ingest file-based log data into your Elasticsearch cluster. After this, we will deploy a multi-node Elasticsearch cluster, and I will give you an overview of the various nodes in an Elasticsearch environment. In the final section, we will look at what changed in Elasticsearch version 8 and then deploy a version 8 cluster on a single node. Finally, we will install Elastic Agent, which is a single agent that can ship many types of logs from your application servers to Elasticsearch. So let's get started.
3. Overview of data: Hey guys, if you're working on Elasticsearch, you'll predominantly be working with data, so it is important to have an understanding of the different types of data available. First, let's talk about structured data. Structured data is highly organized and properly formatted. This is the type of data that can be stored in a spreadsheet or a relational database such as SQL. An example of structured data would be sales figures per quarter for a company, or employee information such as employee ID, employee name, and salary. This type of data is easier to store and search on since, as the name suggests, it has a structure to it. Next, let's have a look at unstructured data. Unstructured data is harder to search. It is data that is not organized or formatted. Examples of this type of data are text files, video or audio files, and social media likes and comments. This data makes up almost 80% of the data found in the real world, and as the name suggests, it is unstructured, so it is more complex to store and search on. Next, let's have a look at semi-structured data. Semi-structured data is a mix of structured and unstructured data. It is data that cannot be organized in a relational database but still has a loose structure to it, for example, data that can be organized around a topic. A good example of this type of data is email data, since emails can be organized through fields such as From, Subject, Message body, etc. Next, numerical data is basically data expressed in numbers rather than text, for example, sensor readings, temperature, age, etc. Another type of data that you might come across is geospatial data. Geospatial data is data that pertains to an object on Earth. It contains things such as location, which is longitude and latitude, and the size and shape of an object. With this, we have come to the end of this lecture, an overview of the different types of data that you will see in the field when you work on any data processing, transformation, or analytics tool. I hope you liked it and I will see you in the next one.
4. Difference between relational and non-relational databases: Hey guys, there are different types of databases that you will come across, and each offers its own unique benefits. These can be primarily categorized into SQL or relational databases, and NoSQL or non-relational databases. First off, let's have a look at a relational database. A relational database is a type of database that stores data in tables. In a relational database, each row in the table is a record with a unique ID called the key. The columns of the table hold attributes of the data, and each record usually has a value for each attribute, making it easy to establish relationships among data points, hence the name relational database management system, or RDBMS. This type of database can only work on structured data, and a structured query language such as SQL can be effectively used to insert, search, update, or delete database records. Some common examples of RDBMS are Oracle, MySQL, Microsoft SQL Server, and PostgreSQL. Now, here's an example of what a relational database looks like. Let's say you have a table that stores user details, and let's say this table is called users. This users table will have details such as user ID, first name of the user, last name of the user, and their age. Now let's also say that there is another table that stores users' educational degrees, and it can be related to the users table by using the user ID column. As you can see, this type of data is structured, it's stored in tables, and the tables can be related to each other using columns, the user ID column in this example. Now, let's talk about non-relational databases. A non-relational database is a database that works on semi-structured or unstructured data. It is also called a NoSQL database and has a dynamic schema for unstructured data storage. This data can be stored in many ways, which means it can be document-oriented, column-oriented, graph-based, or a key-value store. This flexibility means that the database can be created without having a defined structure first. If you compare this with relational databases, you always need to predefine the structure and adhere to that structure throughout the life of the database. NoSQL databases store data in documents, which consist of key-value pairs. The syntax varies from database to database, and you can add fields as you go. NoSQL databases provide a flexible data model with the ability to easily store and combine data of any structure without the need to modify the schema, and hence they are suitable for big data and real-time web apps. Elasticsearch is a type of NoSQL database that stores data as JSON documents. I've attached a link from the Elasticsearch blog if you want to read further on this. Now let's have a look at an example of a NoSQL database. Here on the screen, I have a NoSQL database that consists of two documents. As you can see, the structure of both documents differs. For example, document number two has a key-value pair for education, which is not present in document number one. Also, if you look at hobbies, there's an extra hobby of swimming for our user.
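A rough sketch of what those two documents might look like, with hypothetical field names and values chosen only to mirror the example just described:

```
{ "id": 1, "name": "John", "age": 29,
  "hobbies": ["reading", "hiking"] }

{ "id": 2, "name": "Jane", "age": 31,
  "education": "Bachelor of Science",
  "hobbies": ["reading", "hiking", "swimming"] }
```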
This flexibility allows NoSQL databases to be used for big data applications. Now, there's another difference between NoSQL and SQL databases, which is how they scale. Relational databases scale vertically, and NoSQL or non-relational databases scale horizontally. What that means is, when you have to increase the capacity of your database, the only way to do it in a relational database model is to move to a bigger machine. This means downtime and a maintenance task to increase the capacity, and the way to avoid having downtime later is to pre-provision capacity. However, the flip side of that is that all that capacity would essentially be wasted until it is needed. Non-relational or NoSQL databases solve this problem by scaling horizontally. What that means is that whenever you need capacity, you can add more capacity by just adding more machines to your cluster. This way, you do not have to pre-provision any capacity, and you do not need downtime either. With this, we've come to the end of this lecture. Thank you for watching, and I will see you in the next one. Bye.
5. Overview of JSON: Hey guys, Elasticsearch stores data as JSON documents, so it would be a good idea to get an overview of JSON. JSON, or JavaScript Object Notation, is a text-based data exchange format, and the keyword here is data exchange, since JSON is primarily used to share data. JSON is built on two structures: first, an object, which is an unordered collection of name-value or key-value pairs, and second, an ordered list of values, also called an array. Now let's have a look at an example of JSON data. Here we have a JSON object. An object, as I said earlier, is an unordered set of name-value or key-value pairs. It starts with a left curly brace and ends with a right curly brace. Each name, or key, and its value are separated by a colon, and name-value pairs, for example name John and age 29, are separated from each other by a comma. Our example also contains an array, which is contact; arrays start with a left square bracket and end with a right square bracket. Now, a JSON value can be one of seven different types. A string is a sequence of zero or more characters wrapped in double quotes; for example, "My name is John" here is a string. It can be a number, for example 1234. A JSON key can also have another JSON object as its value. It can also be an array, it can be a Boolean, which is true or false, and lastly, a JSON value can also be null.
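As a rough reconstruction of the example described above (the exact values on screen may differ), a JSON object showing these structures and value types could look like this:

```
{
  "name": "John",
  "age": 29,
  "isStudent": false,
  "address": { "city": "London", "postcode": null },
  "contact": [
    { "type": "email", "value": "john@example.com" },
    { "type": "phone", "value": "123-456-7890" }
  ]
}
```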
If you want to read more on JSON, I've included some links in the description of this lecture so that you can get a deeper understanding, since with Elasticsearch you will primarily be working with JSON documents. With this, we have come to the end of this lecture. Thank you for watching. I will see you in the next one. Bye.
6. Overview of Rest API: Hey guys, in this lecture, let's have a look at RESTful APIs, which are a popular method for communication between computer applications. API is an acronym for Application Programming Interface, and it is nothing but a set of definitions and protocols for building and integrating application software. For example, you might want to use an API where a user can supply a student ID and your API will respond with that student's exam results. The benefit of using APIs is that your application software does not need to know how the client application software is implemented. REST is an acronym for Representational State Transfer. What that means is that it transfers a representation of the state of a resource, and that representation is nothing but the piece of data that a requester needs from your application. REST works on a client-server model. What that means is that the server, or resource, holds that piece of data, and a client requests it by using a CRUD (create, read, update, or delete) operation; to make that request it uses one of the HTTP methods. Once that request has been made by the client, the server responds with the piece of data that it is holding, or a representation of that piece of data. Now, let's have a look at some of the HTTP methods. POST allows you to create a new resource on the server, PUT allows you to update an existing resource on the server, GET allows you to retrieve an existing resource, and DELETE allows you to delete an existing resource. These are some examples of the methods that you can implement using REST APIs. Another thing to note is that REST API requests are stateless. What that means is that all client requests are treated as new requests and no session history is stored on the server. Now, before we move on, let's have a look at an example. Here on the screen, I have an example of an API call that is used to check the health of an Elasticsearch cluster. First we have the method, which could be any of GET, PUT, POST, or DELETE. In our case, we are using GET, and this tells the server that we want to retrieve some information. After that, we have specified the endpoint on the server to which we want to send the request; it is http://localhost on port 9200. This is important because we might have multiple servers in our environment, and each server might be running multiple services. For example, the same server could also be running a web server on port 80, but we want to connect to the service that is running on port 9200 on localhost. After we've specified the endpoint, we need to specify the resource on that endpoint that we want to connect to; in our example, that resource is the cluster health. Each endpoint might serve multiple resources, and in our case, because we want to get the health of the cluster, we use the cluster health resource to connect to and get the status. And finally, we can specify a parameter. For example, in this sample, we want the server to respond with the information in a human-readable format, or pretty it up for us. After we've submitted this request, the server then responds. In this example, the server has responded with the health of the cluster in JSON format. You can also get this response in multiple other formats, for example HTML, XML, or plain text.
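Put together, the request described above looks roughly like this; the response fields shown are a trimmed illustration rather than the full output:

```
curl -X GET "http://localhost:9200/_cluster/health?pretty"

# example response (abbreviated)
# {
#   "cluster_name" : "demo-elk",
#   "status" : "green",
#   "number_of_nodes" : 1
# }
```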
Now, there are multiple tools available for you to get your hands dirty with API calls. You can use specialized tools such as Postman, Swagger, etc., or you can use the plain old curl command from your Linux or Mac terminal. And lastly, you can also make an API call programmatically from within your code, based on some logic. Now, before we move on, let's do something practical. I've got the terminal window of my macOS open, and from here we will try to invoke a public API for jokes. First, I need to type curl. After that, I need to add the endpoint, so https://v2.jokeapi.dev, and then I need to provide the resource on the endpoint, so I will type in /joke/Any and hit Enter.
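The command, as I understand it from the audio, hits the public JokeAPI endpoint (double-check the URL against the lecture notes):

```
curl https://v2.jokeapi.dev/joke/Any
```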
As you can see, the joke server responded with a joke. Now, you can use this API to create a joke section on your website if you want, or you can just read these jokes out to your friends, it's up to you. Before we move on, I wanted to show you how to do the exact same thing using a tool called Postman. Here I've got the interface of this tool. To create an API request, I will click on plus. This will open up the API form. Here, I can choose which method I need; in this case, I need GET. Then I need to specify the endpoint, so https://v2.jokeapi.dev, and then I need to type in the resource, which is /joke/Any. To run this, all I have to do is click on Send, and you can see I've got a response from the server again. Now this is a very basic example, but since this is not a REST API course, I will leave it to you to get more knowledgeable on this topic if you don't know it already. I've added some resources in the description of this lecture, so you can use those as the starting point for your further studies. Before we move on, I want to cover one last thing, which is the status codes. If you look here, you will see Status 200 OK, and if you hover over it, you'll get a bit of an explanation of the status code. Every time you make a request to the server, it returns a code back to the client, which gives you the status of the request. Here on the screen, I've got the various response types that you can expect from the server. A response that starts with a 2, followed by two digits, generally means that the request has been successful; for example, as we saw in Postman, we got a 200 OK, which means everything is fine. A 304 response means that the client can use the cached version of the requested resource. Anything 4XX, meaning four and then two digits, or 5XX, meaning five and then two digits, means an error. 4XX means a client-side error, which means there's something wrong on the client side. 5XX means a server-side error, meaning there's some problem with the server. So this was a quick introduction to REST API calls. There is a bit more to it, and I want you to be 100% confident, because you'll use it for your management and administration of Elasticsearch. Please go through the links that I've attached with the description of this video. With this, we have come to the end of this lecture. I will see you in the next one. Bye.
7. Introduction to elasticsearch: Hey guys, Elasticsearch is a distributed search and analytics engine, which is the heart of Elastic Stack. A distributed search engine is one where the task of indexing and query processing is distributed among multiple computers, instead of a single supercomputer handling everything. It is built on Apache Lucene and is developed in Java. It allows you to store, search, and analyze huge volumes of data quickly and in near real-time, and give back answers in milliseconds. Queries that would normally take ten seconds or more in SQL will return results in under ten milliseconds in Elasticsearch using the same hardware. Elasticsearch can handle all types of data, for example structured, semi-structured, and so on. There are a variety of use cases for Elasticsearch. For example, you can add a search box to an application or website for fast searching of your catalog, you can store and analyze logs, metrics, and security event data, or you can use machine learning to automatically model the behavior of your data in real time. Elasticsearch provides a REST API for managing your cluster and for indexing and searching your data. For its search capabilities, Elasticsearch supports three types of queries. Structured queries allow you to search on structured data and are just like SQL queries. Full-text queries allow you to search on analyzed text fields, such as the body of an email; here you'll be doing things such as finding a particular word in a paragraph. Complex queries are a combination of structured and full-text queries. Now let's move on to documents and indices. In Elasticsearch, the smallest unit of data is called a field. Each field has a defined data type and contains a specific piece of data, for example name: John. Documents are the base unit of storage in Elasticsearch. A document is made up of multiple fields; in the world of relational databases, a document would be equivalent to a row in a table. Data in a document is stored as fields, which are name-value pairs, and there are some reserved fields in a document. Here on the screen, I've got an example of data that is stored in a document. In this example, _id is the unique identifier for the document, _type is the type, and _index is the name of the index that this document is a part of. As an example, documents can represent an encyclopedia article or log entries from a web server.
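A rough sketch of such a document, with the reserved metadata fields alongside the user data (the values here are illustrative, not the exact ones shown on screen):

```
{
  "_index": "users",
  "_type": "_doc",
  "_id": "1",
  "_source": {
    "name": "John",
    "age": 29
  }
}
```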
Now, in this example we referred to something called an index, so let's have a look at what it is. An index is a collection of logically related documents that have similar characteristics. For example, you can have an index for users, an index for products, one for orders, and so on and so forth. It is the highest-level entity that you can query against in Elasticsearch, and it is similar to a database in a relational database. An index in Elasticsearch is actually what's called an inverted index, and an inverted index is the mechanism by which all search engines work. Basically, this index consists of a list of all the unique words that appear in any document, and for each word, a list of all the documents in which that word appears. Now, let's have a look at the following sentences, each containing a quote from the TV series Game of Thrones: "Winter is coming", "Ours is the fury", and "The choice is yours". Let's build our inverted index. First, we'll write the terms from all three documents in one column, then in the second column we'll write the frequency with which each term appears across all three sentences, and in the third column we'll write the documents in which that term appears. To take an example, "winter" appears only once across all three sentences, and it is in document one, whereas "is", the second term in document one, actually occurs three times and appears in all three documents, so we've got 1, 2, 3 here. Likewise, "the" appears twice, in document two and document three. Like this, we can construct our inverted index. The combination of a term and its frequency is called the dictionary, and the list of documents in which a particular term appears is called the postings. So this will be the dictionary, and this will be the postings.
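Written out, the inverted index for these three sentences looks roughly like this; the term and frequency columns form the dictionary, and the document list forms the postings:

```
Term      Frequency   Documents (postings)
winter    1           1
is        3           1, 2, 3
coming    1           1
ours      1           2
the       2           2, 3
fury      1           2
choice    1           3
yours     1           3
```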
Using the inverted index, if you want to do a quick search for any term, let's say fury, Elasticsearch will look at its inverted index and find that fury appears once and is in document two. Therefore, an index term is the smallest unit of search in Elasticsearch, and this is the trick that Elasticsearch uses to perform fast searches. I've also included a link to an Elasticsearch blog post that can help you further understand how search works in Elasticsearch. With this, we have come to the end of this lecture. I will see you in the next one. Bye.
8. Elastic stack - a birds eye view: Before we start the deployment, I would like to give you a brief overview of each of the Elastic Stack products that we will deploy in this lab. Elasticsearch is an open-source, distributed search and analytics engine built on Apache Lucene. Logstash is an open-source, server-side data processing pipeline that ingests data from a number of sources, transforms it, and then sends it to Elasticsearch. Kibana is an open-source data visualization dashboard for Elasticsearch. In simple words, let's say you have application log data from your application servers. In case you do not need to do any processing on that data, you can send it to Elasticsearch directly to be stored and indexed. However, if you do need to do some custom processing, for example you might need to add a customer ID to a log event so that you can tie a particular log event to a particular customer, you will first have to send that data to Logstash to do the processing or transformation, and then send it to Elasticsearch to be stored. Once the data is stored and indexed in Elasticsearch, you can use Kibana to create some visualizations on top of that data. Now let's talk about Beats. Beats are essentially purpose-built agents that acquire data and then feed it to Elasticsearch. Beats are built on the libbeat framework, which makes it easy to create customized beats for any type of data you'd like to send to Elasticsearch. There are standard beats available, which I've listed on screen, and if one of these beats does not do the job, you can use the libbeat framework to create your own customized beat. So let's talk about some of the standard beats. First, there's Filebeat. Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on the target server, Filebeat monitors the log files or locations that you've specified, collects log events, and forwards them to either Elasticsearch or Logstash for indexing. Auditbeat is a lightweight shipper that you can install on your servers to audit the activities of users and processes on your systems. For example, you can use Auditbeat to collect and centralize audit events from the Linux audit framework. Metricbeat is a lightweight shipper that you can install on your servers to periodically collect metrics from the operating system and from services running on the server. Metricbeat takes the metrics and statistics it collects and ships the output to either Elasticsearch or Logstash. Heartbeat is a lightweight daemon that you install on a remote server to periodically check the status of your services and determine whether they are available. The difference between Metricbeat and Heartbeat is that Metricbeat only tells you if your servers are up or down, whereas Heartbeat tells you whether your services are available. For example, let's say you've got Apache installed on a server: Metricbeat will tell you if the server is up or down, whereas Heartbeat will actually tell you whether your website is up or down. Packetbeat is a real-time packet analyzer that you can use with Elasticsearch to provide an application monitoring and performance analytics system. Packetbeat provides visibility into the communication happening between the servers in your network. Journalbeat is a lightweight shipper for forwarding and centralizing log data from systemd journals. It is also installed as an agent on your servers; it monitors the journal locations that you specify, collects log events, and then forwards these events to either Elasticsearch or Logstash. Winlogbeat ships Windows event logs to either Elasticsearch or Logstash, and it is installed as a Windows service. Functionbeat is an Elastic shipper that you deploy as a function in your serverless environment to collect data from cloud services and ship it to Elasticsearch. With this, we've come to the end of this lecture. Thank you for watching. I will see you in the next one. Bye.
9. How to install elasticsearch on a Debian Linux: Hey guys, in this video we will install and configure Elasticsearch and Kibana on a single-node, Debian-based Linux system. The steps for this lecture are: first, we will install the Elasticsearch public signing key; then we will install the apt-transport-https package; after that, we will save the Elastic repository definition onto our Linux system; then we will do a system update and install Elasticsearch, Logstash, and Kibana. Once the three applications are installed, we will configure Elasticsearch and Kibana; I will dedicate a separate lecture to Logstash. And finally, we will test connectivity to our services. So let's get started. Here on the screen, I'm inside my GCP account on the VM Instances page; we will use this page to create our single-node Linux system. We'll click on Create Instance, then I will click on New VM instance from template, choose the ELK template, and hit Continue. From this screen, I can keep the name as elk-1, but I will change the region to us-west1. Then I will leave everything else as default and go to Networking. Under Networking, I will make sure it has the correct tag, kibana, and I will also make sure that it is under the elasticsearch VPC and the monitoring subnet. Once I've validated these things, I will click on Create to create my Linux instance. Once this machine is up, I'll go to my Visual Studio Code. On the left-hand side, I've got the notes to install the ELK Stack on a Linux machine, and on the right-hand side, I've got my terminal. I will first go to my VM Instances page and copy the external IP address of my Linux instance. Then, inside the VS Code terminal, I will type in ssh, my lab user followed by @, paste the IP address of my Linux instance, and hit Enter. Here I will type in yes, and as you can see, I was able to successfully connect to my Linux instance. I'll first type in clear, and now we can start installing our applications. First, I will do a sudo apt-get update, so sudo apt-get update, and hit Enter. Once the system is updated, I'll clear the screen. Then I need to install wget if it is not already installed on the system, so I will do that now. After these prerequisites are done, I can install the public signing key for Elasticsearch, so I will copy all this from my notes, paste it on the terminal, and hit Enter. Once I've got the OK message, I can install the apt-transport-https package, so I will copy this command, sudo apt-get install apt-transport-https, answer yes, and hit Enter. Once this is done, I will clear my screen again, and now we have to save the repository definition, so I'll copy all this from my notes, paste it on the terminal, and hit Enter. Now once the repository definition has been saved, we can install Elasticsearch, Logstash, and Kibana. The command is sudo apt-get update, to update the system first, and then we will install Elasticsearch, Logstash, and Kibana, so I'll copy all this, paste it here, and hit Enter.
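For reference, the commands copied from my notes look roughly like this; this sketch assumes the 7.x apt repository, so adjust the version to the release you are installing:

```
# import the Elasticsearch public signing key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

# install the apt-transport-https package
sudo apt-get install apt-transport-https

# save the repository definition
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | \
  sudo tee /etc/apt/sources.list.d/elastic-7.x.list

# update and install Elasticsearch, Logstash, and Kibana
sudo apt-get update && sudo apt-get install elasticsearch logstash kibana
```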
Now once all three applications have installed, it is time to configure them. I'll first clear the screen, then I will do sudo su, and then I will cd into /etc/elasticsearch to configure my Elasticsearch service. Inside the Elasticsearch configuration directory, if I do an ls, there is a file called elasticsearch.yml, which is the main configuration file for Elasticsearch, so I will type in sudo vi elasticsearch.yml to configure this file. In this file, the first thing that we need to change is cluster.name. I'll first go into insert mode, then uncomment cluster.name. This setting is used to set a descriptive name for your cluster, so I'm going to call my cluster demo-elk. The next setting I want to change is node.name. This setting sets a descriptive name for this node in the cluster, so I'm going to call it node-1. Once I've done that, I can leave the defaults for path.data and path.logs, and I will need to go to the network section. Under network, the setting I want to change is network.host. By default, Elasticsearch is only accessible on localhost; we need to change that, so we'll change this value to 0.0.0.0. Next, we'll uncomment http.port, but we will leave it as is; in a production environment, if you want to change the default port on which Elasticsearch is accessible, you would come here and change this value. We'll leave discovery as is because this is a single-node cluster, and we'll go right down towards the bottom, where we'll put in a setting called discovery.type. So I'll type in discovery.type, and the value for this setting is single-node, because this is a single-node server. I'll double-check this in my notes: discovery.type, single-node. So now we can save this file; I'll do :wq and hit Enter.
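After these edits, the changed lines in /etc/elasticsearch/elasticsearch.yml look like this (a single-node demo configuration matching the values chosen above):

```
cluster.name: demo-elk
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
```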
To start the Elasticsearch service, I will type in systemctl start elasticsearch and hit Enter. Once the service has started, we can check its status by typing systemctl status elasticsearch, and the status shows active (running). I'll press Ctrl+C, and then let's test connectivity to our ELK cluster: I'll type in curl -X GET, then http, then localhost, and then the port that Elasticsearch runs on, 9200. Then, to check the health of the cluster, I'll need to type in _cluster, then health, and I want the output to be in a human-readable format, so I will add pretty and hit Enter. The status came back as green and there is a single node in this cluster, which is what we wanted, so the Elasticsearch service is installed and configured. I'll clear everything, and next it's time to configure Kibana. I will cd into /etc/kibana and hit Enter. Here, if I do an ls, you will see that there is a similar kibana.yml file that we need to configure, so I will type in sudo vi kibana.yml and hit Enter. Inside the kibana.yml file, I'll first go into insert mode, then go to server.port and uncomment it, but leave the setting as is. This is the port that Kibana runs on; in a production setting, you might want to change it from 5601. Next, we'll go to server.host, uncomment that, and change it from localhost to 0.0.0.0. This will allow Kibana to be accessible from the outside world. Once I've changed that, the next setting we want to change is server.publicBaseUrl. I will uncomment this, and here I need to type in http:// and then the private IP address of my Kibana instance. So let's go to our VM Instances page, and from here I'll copy the internal IP address. I'll type in http://, paste the internal IP address of my Kibana instance, and then specify the port, 5601. Next, I need to specify a descriptive name for my Kibana instance, so I will uncomment server.name, and here I will type in demo-kibana. Next, I need to specify the URL where the Elasticsearch host is accessible. Here, under elasticsearch.hosts, I will leave the value as default because Kibana is running on the same server where Elasticsearch is running, so we can leave it as is. Now, we can save this file.
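The corresponding /etc/kibana/kibana.yml changes end up looking like this; the public base URL uses your instance's internal IP, so substitute your own:

```
server.port: 5601
server.host: "0.0.0.0"
server.publicBaseUrl: "http://<internal-ip>:5601"
server.name: "demo-kibana"
#elasticsearch.hosts: ["http://localhost:9200"]   # default, left unchanged
```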
Now it's time to start Kibana, so systemctl start kibana, and I'll hit Enter. It looks like I made a typo: systemctl start kibana, and I'll hit Enter. Once the service comes back, I will do a systemctl status kibana, and the service is active and running. Let's go back to our VM instances. From here, I will copy the external IP address of my instance, open up a new browser tab, paste it there, and then specify the port where Kibana is running, so 5601, and I will hit Enter. If Kibana is installed and configured successfully, you'll first see "loading Elastic". Once Kibana has loaded properly, you'll see "Welcome to Elastic", and if you click on Explore on my own, you'll get to the Kibana UI. Now, there's one last thing that I wanted to show you before I let you go: if you want your Elasticsearch and Kibana applications to start automatically when you restart your system, what you'll have to do is type in systemctl enable elasticsearch and systemctl enable kibana.
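The two commands, for reference:

```
# start both services automatically on boot
sudo systemctl enable elasticsearch
sudo systemctl enable kibana
```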
With this, we have come to the end of this lecture. Thank you for watching. I will see you in the next one. Bye.
10. How to install elasticsearch on a RPM Linux: Hey guys, in this lecture we will install and configure Elasticsearch, Logstash, and Kibana on an RPM-based Linux system. In order to do that, we will first have to import the Elasticsearch GPG key, and then we will have to configure our Elasticsearch, Logstash, and Kibana repositories on the RPM system. After that, we will install our applications, then we will configure Elasticsearch and Kibana, and once we are done with the configuration, we can start the Elasticsearch and Kibana services and test connectivity. So let's get started. Here on screen, I'm inside my GCP account, and under VM Instances you can already see the Debian-based Linux system on which we installed our single-node Elasticsearch system in the previous lectures. Here I will click on Create Instance, then I will click on New VM instance from template, choose ELK, and hit Continue. We'll leave the name as elk-2, but we'll change the region to us-west1. Then I will go down, and under Boot disk I will click on Change. Here, under Operating system, we will click on the drop-down and choose Red Hat Enterprise Linux. We can change the version of Linux that we want for this machine; I'll leave it as default and hit Select. Once I've done that, I will go down to Networking, and under Networking I'll make sure I've got the correct network tag, kibana, and that I'm inside the elasticsearch VPC and under the monitoring subnet, and then I'll click on Create. Once my elk-2 instance is up and running, I will copy the external IP address and then go to my Visual Studio Code. In Visual Studio Code, on the left-hand side I've got the notes to install Elasticsearch on an RPM system, and on the right-hand side I've got a terminal window. In the terminal, I type in ssh, then my lab user followed by @, paste the external IP address, hit Enter, and click on yes. Once I've connected to this instance, I'll type clear, and now we're ready to start installing our services. First, we have to import the Elasticsearch GPG key on this system, so I will copy all this, paste it on my system, and hit Enter. It says it cannot open the package index, permission denied, so what I'll do is run the same command using sudo and hit Enter. Once the key is successfully imported, we'll start setting up the RPM repositories. First, I will have to create three files, elasticsearch.repo, kibana.repo, and logstash.repo, inside my /etc/yum.repos.d directory. I will copy the first touch command and hit Enter; I'll have to type in sudo first, and then I'll move into sudo su. Next I will copy the second touch command, and then the third touch command. So now, if I go into my /etc/yum.repos.d directory and do an ls, I've got my three files: elasticsearch.repo, logstash.repo, and kibana.repo. Next, we will have to configure these files. First, we'll do a vi on elasticsearch.repo, I'll go into insert mode, and then I'll have to configure these settings inside my repo file, so I'll paste them and do some formatting. This repository definition is nothing but: first, the name of this repository; then the URL from where to get the packages; the GPG check and the GPG key; whether the repository is enabled or not; and the type. Now, I'll save this file. Next, we'll do a vi on kibana.repo, go into insert mode, and copy all the settings for my kibana.repo file; I'll paste them here, do some formatting again, and now I can save this file as well. And finally, I'll have to edit my logstash.repo file: I go into insert mode, copy the settings for logstash.repo, and save the file. Now, you don't have to worry about these configuration settings; I will attach my notes in the description of this video so that you have access to them as well.
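As a sketch of what those three files contain (this assumes the 7.x yum repositories; the kibana.repo and logstash.repo files use the same baseurl and GPG key, just under their own repo names):

```
# /etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
```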
Once we've configured our three repos, we can install everything. First, we have to install Elasticsearch, and the command is sudo yum install with --enablerepo referencing the Elasticsearch repo that we've just configured, followed by the package name, elasticsearch. I'll copy it, paste it here, and hit Enter. If the installer asks you whether to download the Elasticsearch package, just say yes and move on. Once Elasticsearch is installed, we can install Kibana and Logstash: type in sudo yum install kibana to install Kibana and hit Enter, and I'll hit yes here. Once Kibana is installed, you can type in sudo yum install logstash to install Logstash, and I'll hit yes.
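The install commands, roughly as run above; the --enablerepo flag is needed because the Elasticsearch repo was saved with enabled=0:

```
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
sudo yum install --enablerepo=elasticsearch elasticsearch
sudo yum install kibana
sudo yum install logstash
```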
Now once Logstash is also installed, it's time to configure our services. This will be similar to what we've done for our Debian-based system; if you've already done that, you can probably configure these on your own, and if you haven't, please follow along with this lecture. First, we'll configure Elasticsearch, so cd /etc/elasticsearch. Here, if I do an ls, I will see that there is a file called elasticsearch.yml, and this is the configuration file for Elasticsearch, so let's edit that. I'll type in vi elasticsearch.yml and hit Enter. In this file, I'll first go into insert mode, and the first setting that I need to change is cluster.name; I'm going to change that to demo-elk. Next I want to change the node.name setting, so we'll change this to elk-1. After this, I can leave the path.data and path.logs settings as they are. I will have to go under network, and inside network I'll go to network.host, uncomment it, and change this value to 0.0.0.0. What happens is, by default, Elasticsearch is only accessible from localhost; by changing it to 0.0.0.0, we are exposing it to anyone in the world. Next, we'll uncomment http.port but leave the port as the default, 9200. If you are running a production cluster, you might want to change this to some other port, but we'll leave it as is. Finally, I'll go right to the bottom of this file, and here I will type in a setting for discovery.type. This setting's value will be single-node, because we are configuring a single-node Elasticsearch cluster: discovery.type, and the value is single-node. Now I can save my elasticsearch.yml, so I'll hit Enter, and I can start my Elasticsearch service: systemctl start elasticsearch. Once the service is up, I can do status to check the status of the service; it is active and running. Now, to check connectivity to our cluster, we'll type in curl -X GET, then http, then localhost, and then the port that Elasticsearch is running on, 9200. Then we'll connect to the cluster API and the health resource on that API, and we want the output in human-readable format, so we'll type in pretty and hit Enter. As you can see, the status of our cluster is green and the number of nodes is one.
Now, we can start configuring our Kibana. I'll go into /etc/kibana, clear everything, and do an ls. Inside this kibana folder, there is a file called kibana.yml, which is the configuration file for Kibana, so I'll do vi kibana.yml. I'll first go into insert mode. The first setting is server.port, but I'll leave this value as is, because this is a demo system; on a production system, you might want to change this value to something else. Then I'll uncomment server.host and change it from localhost to 0.0.0.0; again, this makes it accessible to the outside world. Now, server.publicBaseUrl is the URL on which Kibana is accessible. To get the value for this setting, we'll have to go to our VM instances; then, under elk-2, I'll copy the internal IP address, and I'll type in http://, paste the internal IP address, and provide the port, which is 5601. Next, we'll change the server name: I'll go to server.name, uncomment it, and give it the name demo-kibana. Then I'll go to elasticsearch.hosts, and I can leave the default value because this Kibana instance is running on the same server as the Elasticsearch service. Once I've done that, I can save my configuration file, and now I can start Kibana. Once Kibana has started, I can do systemctl status kibana, and the service is active and running. Then I will go to my VM instances, copy the public IP address of my Kibana instance, paste it into a browser tab, and type in 5601 to connect to the Kibana service on the server, and we'll be able to get to the Kibana UI. Once you get to this screen, you click on Explore on my own, and that should take you to the Kibana UI. Now, before I let you go, I want to show you one last thing. If you want your Elasticsearch and Kibana services to come back up automatically after the system reboots, you have to type in systemctl enable elasticsearch, to enable the Elasticsearch service, and systemctl enable kibana, to enable the Kibana service. With this, we have come to the end of this lecture. Thank you for watching. I will see you in the next one. Bye.
11. How to Install ELK on Windows server: Hey guys, in this lecture we will install Elasticsearch on a Windows server. To start off, we will download some software: first we will download Elasticsearch and Kibana, and after that we will install a tool called NSSM, the Non-Sucking Service Manager. This tool helps you install an application as a service. The reason we need this tool is that Kibana does not come with a packaged utility that can help us install Kibana as a service. So we will use NSSM for Kibana; for Elasticsearch, we have a utility called elasticsearch-service.bat inside the Elasticsearch zip file, which will be used to install Elasticsearch as a service on Windows Server. After we've downloaded NSSM, we will download and install 7-Zip. Once the downloads have finished, we will configure the Elasticsearch and Kibana applications, then we will run them as services, and once they've successfully started, we will test the connectivity to our Elasticsearch cluster and our Kibana instance. So let's get started. Here, I've got my Windows server, where I'll be installing Elasticsearch and Kibana. First, I will open up a browser tab, and in the browser I will type in elasticsearch windows download and hit Enter. From here, I will go to the URL that says elastic.co, downloads, Elasticsearch, so I'll click on this link. Once I'm on this page, I will make sure I'm on the GA release tab, and from step one we will download Elasticsearch for Windows; I'll click on this download button. While this is downloading, I'll open up another tab and type in kibana windows download, and here as well I will go to the downloads URL for Kibana. From this URL, I will go to step one, download and unzip Kibana; I will make sure the platform chosen is Windows, and I will click on the download button for Windows, and this should start the download for Kibana as well. While this is downloading, I will open up another tab, type in download nssm, and hit Enter, and from the results I'll choose the link for nssm.cc and downloads. From this webpage, I will go under the section Latest release and click on the utility, nssm 2.24. Now I will open up another tab, type in download 7-zip, and hit Enter, and from the 7-zip.org download page I will click on the link to download the utility for 64-bit Windows, so I'll click on this download link here. Once all the files have downloaded, I will first click on the 7-Zip installer to install 7-Zip on the system. From the installer dialog box, I will click on Run, choose the default destination folder, and click on Install, and once 7-Zip has installed, I can click on Close. Now I will open up my Downloads folder. Once inside the Downloads folder, you can right-click on the Elasticsearch zip and use 7-Zip to extract the files, so I'll click on Extract Files, and from here you can choose the path where you want to extract these files. What I've done for this lecture is that I've already extracted these files
folder and as you can see, I've got an elastic
search folder here, and I've also got
a Kibana folder. So let's start
configuring these. I'll open up the
Elastic Search folder, and from here I'll choose
Elastic Search. Again. I will go to conflict. Inside the conflict folder, we need to edit the
Elasticsearch short YAML file. So I'll right-click on this file and I will use VS Code
to edit this file. You can use the editor
of your choice. So feel free to open
and edit this file. Now once I've got the Elasticsearch
short YAML file open, what I'll do is the
first setting I need to change is
clustered or name. And I will change this
to demo dash ELK. Then I need to change
the node name. So I will change this
to ELK dash one. After that, I do not need
to specify a path to my data directory or logs directory because this
is a demo instance. So I will keep, it says default. Now I will go to
the Network section and uncomment network dot host. And I will change
this value to 0. After that, I will
uncomment http dot pork, but I will keep the
port as default. And finally, I will go towards
the bottom of this file. And here I will type
in discovery dot type, and I will give the
value single dashboard because this is a
single Norton solution. Now I can save this file
and once I've done that, I can start my
Elasticsearch Service. So to do that, I will go into my search box and I will
type in PowerShell, and here I will choose the Windows PowerShell
desktop app. Now went inside my
PowerShell console. I will type in cd if I
need to go to Documents, and then I need to go to Elasticsearch folder and
then Elasticsearch again, and then been inside
bin if I do an ls. Now here you will see that
there is a file called Elastic Search service dot Ph.D. Now we need
to run this file. I'll firstly it
out of my screen. And from here I will type in dot slash Elastic Search,
dash, service dot. And here I need to
type in installers. So dot slash Elasticsearch
service dot VAT space install, and I will click Enter. Now, this should start the installation of an
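Summarizing the commands typed above (the folder names under Documents will include a version number in practice, so adjust the path to match your extraction):

```
cd $HOME\Documents\elasticsearch\elasticsearch\bin   # adjust to your extracted folder
.\elasticsearch-service.bat install                  # install Elasticsearch as a Windows service
.\elasticsearch-service.bat                          # run with no arguments to print usage
                                                     # (install | remove | start | stop | manage)
```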
This should start the installation of Elasticsearch as a service. Once this has completed, what you can do is go to services.msc from your search box. Here, you can find the Elasticsearch service, and now you can start it. But first, we'll change the startup type of the service from Manual to Automatic so that the service will start up automatically after the server reboots: go into Properties, go to Startup type, change that to Automatic, and click on Apply. Once you've done that, you can click on Start to start the service and click on OK, and you can see that the service is running. Now, let me maximize my PowerShell. You can also use the same utility to manage this service; let me show you the usage. If I put in .\elasticsearch-service.bat and hit Enter, you can see the usage: you can install a service, remove a service, start, stop, and manage a service. Let's try stopping this service using this utility, so I can type in stop and hit Enter. I got the message that the Elasticsearch service has been stopped, and I can validate that by going into services.msc and clicking on the Refresh button; you can see that the service is stopped. Now let's start the service and hit Enter, and we have the message that the service has started. Let's validate that again: I'll click on Refresh, and you can see that the service is running. To test connectivity to our Elasticsearch instance, what we have to do is open up a Chrome window. From Chrome, let's open up another tab, and here let's type http://localhost and then port 9200 and hit Enter. If the service is up, you'll see some information coming back from your cluster. What you can also do is look at the cluster health. To do that, you will type a slash after the URL, then _cluster, another slash, then health, then a question mark and pretty, to look at the cluster health. Now you can see the status of the cluster is green and the number of nodes is one. So this is how you can install Elasticsearch on a Windows machine.
Next, we will configure our Kibana instance. To configure Kibana, I'll open up File Explorer, and from here I will go to Documents, open up my Kibana folder, then the Kibana folder again, and then config. Inside config, I need to edit the kibana.yml file, so I'll right-click on it and click on Open to open this file in an editor. Inside this file, I'll uncomment server.port but leave it as the default. Then I'll uncomment server.host and change the value from localhost to 0.0.0.0. Next, I will change server.publicBaseUrl, so I'll uncomment that, and to get its value, you need to go to Network settings; from here, I'll click on Network again to get the IP address of this machine, which is 10.0.2.15. So the public base URL will be http://10.0.2.15, and then the port, 5601. This is the URL on which Kibana will be accessible. I'll go down a bit. Next, I will change server.name, and I will change the descriptive name of my Kibana instance from the default to demo-kibana. Next, I will uncomment elasticsearch.hosts, but I can leave this as the default because this Kibana instance is installed locally on the same server as Elasticsearch. Next, by default, Kibana writes its logs to stdout. We want to change that, and we want to configure Kibana to write its logs to a log file somewhere. To do that, we need to go down and find the line that says logging.dest; it is on line 97. I'll uncomment this, and before I change its value, I'll open up my File Explorer and go one level up. Here I will create a new folder called logs and open it, then I will right-click on logs in the address bar and copy the address as text, and now I will replace stdout with this text. Here, I will add another slash and call my log file kibana.log. With this, Kibana will write its logs into this file. We can also uncomment logging.silent and logging.quiet, but we'll keep these values as false. If you want verbose logging, you can uncomment logging.verbose and change its value from false to true; this will give you granular-level details of Kibana's logs. Now, we can save this file.
Now, before we use NSSM to start Kibana as a service, I want to show you how to start Kibana manually. I will go to my PowerShell and type in cd; from Documents, I will go into Kibana, then Kibana again, and this time into bin. Here, if I do an ls, you will see that there is a file called kibana.bat. We need to run this file. First, I will clear the screen, and then I will type in .\kibana.bat and hit Enter. After this, let us move to our browser window and type in http://10.0.2.15 and our Kibana port, 5601, and I will hit Enter, and this should take me to my Kibana page. Depending on how powerful your Windows server is, it might take some time for Kibana to load, so just give it some time. Now, let's see how to install Kibana as a service. First, I'll open up PowerShell and stop this instance: press Ctrl+C and hit Enter, and when asked whether to terminate the batch job I can type in yes; I will terminate this. Once I'm out of this, I will go out of my Kibana folder, open up File Explorer to Downloads, and extract NSSM with 7-Zip: Extract files, choose Documents, click OK, and then OK again. Once NSSM is inside my Documents, I can go back to PowerShell, cd into nssm, and do an ls here. There's another folder called nssm, so we'll cd into that and do an ls from here. I need to go into win64 because my machine is a 64-bit machine; if you're on a 32-bit Windows machine, you need to go into the win32 directory. I will go into win64, so cd win64. If I do an ls here, there'll be an EXE file called nssm.exe. So I will do .\nssm.exe, and to show you the usage, I will hit Enter here. The usage is nssm, then one of the options, and then some arguments. To install a service, we need to use nssm install. So here we will type in .\nssm.exe install, and now we need to give the service name, so Kibana, and hit Enter. Once you've hit Enter, it opens up a dialog box, and inside that dialog box, on the Application tab, we need to specify the path to the Kibana batch file. I'll click on the browse button here, then go to Documents, Kibana, Kibana, and then bin, and from here I will select the Kibana batch file and click on Open. Now, on the Details tab, I can type in some details, so I'll type in Kibana, and the description will be Kibana as well. You can specify the startup type, which can be Automatic, Automatic (Delayed Start), Manual, or Disabled; we'll leave this as Automatic. You can also specify the log-on information, the dependencies of this service, the process priority, and the shutdown behavior, but we'll leave everything as default and click on Install service. After it has given me a confirmation that the Kibana service is installed, I can click on OK, and that will close the dialog box. To check the status of the Kibana service, I can type in .\nssm.exe status Kibana and hit Enter; it says the service is stopped. So first, we have to start the service: .\nssm.exe start Kibana, and I hit Enter.
completed successfully. So I'll go to my
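The NSSM commands used in this section, collected in one place. This is a sketch; adjust the paths to wherever you extracted NSSM and Kibana.

```powershell
cd $HOME\Documents\nssm\nssm\win64   # use win32 on a 32-bit machine
.\nssm.exe install kibana            # opens the NSSM dialog; point the Application path at kibana.bat
.\nssm.exe status kibana             # reports SERVICE_STOPPED right after installation
.\nssm.exe start kibana
```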
Now I'll go to my services.msc window, refresh it, and go to my Kibana service, and I can see that the Kibana service is running. So I will open up another browser tab, hit Refresh, and see if the page loads. It might take some time again, so I will pause the video here. It took a few minutes for my Kibana instance to come back up, but it is up now; I can click on Explore on my own and start working in my Kibana UI. Now, before I let you go, let's validate that our
Kibana instance is logging on to the log
file that we specified. So I'm going into Documents, kibana, kibana, and then into logs. I can see that my kibana.log file is there and it has some data in it. Let's open it, and here you can see that Kibana is writing its log data into this log file. So with this, we have come
to the end of this lecture. I've shown you how to install Elasticsearch and Kibana
on a Windows Server. Thank you for watching. I
will see you in the next one. Bye.
12. Securing your cluster using X-pack security: Hey guys, In this lecture
we will configure X-Pack security on our Elasticsearch cluster. First, we will stop the Kibana and Elasticsearch services. Then we will enable xpack.security inside the Elasticsearch configuration file. Then we will start the Elasticsearch service. After that, we will set up default credentials for the Elasticsearch built-in users. Then we will configure Kibana to use password authentication to connect to our Elasticsearch cluster. And finally, we will start our Kibana service and test connectivity. Let us get started. Here on the screen I've got my notes to configure X-Pack security on our Elasticsearch cluster, and on the right-hand side my SSH connection inside my ELK VM. This is the single-node ELK cluster that we deployed using the Debian packages. So here, first, I have to stop Kibana and Elasticsearch. I will type in systemctl stop kibana and systemctl stop elasticsearch. Once I've stopped both services, I will have to enable xpack.security inside elasticsearch.yml. So we'll go to /etc/elasticsearch, and then here we will type in vi elasticsearch.yml. In this file we'll go right down to the bottom, and I'll go into insert mode. Towards the bottom, I will add the setting xpack.security.enabled: true. I'll copy it and paste it here, and now I can save this file.
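As a recap, the steps just described are roughly the following. This is a sketch assuming the Debian package layout used in this lab, with the configuration under /etc/elasticsearch.

```bash
# Stop both services before changing the configuration
systemctl stop kibana
systemctl stop elasticsearch

# Append to /etc/elasticsearch/elasticsearch.yml:
#   xpack.security.enabled: true

# Bring Elasticsearch back up and check it
systemctl start elasticsearch
systemctl status elasticsearch
```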
Next, I will have to start my Elasticsearch cluster, so I'll type in systemctl start elasticsearch. Once the Elasticsearch service has started, let's have a look at the status by typing systemctl status elasticsearch; it is active and running. Now, in our previous video we used the curl command to test connectivity
to this cluster. Now, if I do a Control
R and type in Curl, and if we run this command
again, let's see what happens. We get a security exception that we are missing the
authentication credentials. What we'll have to do now
is we will have to generate the authentication credentials
for our built-in users. So I'll clear out the screen. And to do that, we will have to go to usr, share Elastic Search
and then bin. And if I do an LS here, you'll see there's a lot of
utilities inside this folder. The one we're after is the Elastic Search setup
passwords utility. So I'll type in dot slash Elasticsearch dash
setup-passwords. Here I have two modes. I can either go into interactive mode and set every password myself, so let me show you that first: I'll type in interactive and hit Enter, and it says it is initiating the setup of passwords for the reserved users. It lists those users and says you will be prompted to enter passwords as the process progresses, and asks you to confirm that you would like to continue. I'll type in n here to cancel, because what I would actually like is for Elasticsearch to automatically generate the passwords for me. So we'll do that: I will type in auto and hit Enter. Now it asks again, but this time it says the passwords will be randomly generated and printed to the console. Using auto, the passwords are auto-generated, whereas using interactive they are manually entered by the user. So we'll type in y to automatically generate my passwords. I've got the passwords here, so I'll copy all of these and paste them inside my notes. Once I've done that, let me clear out of this: I'll do cd to get back to my home directory and clear the screen.
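In other words, the two modes of the utility look like this. It is a sketch: run only one of them, and copy the generated passwords somewhere safe.

```bash
cd /usr/share/elasticsearch/bin
./elasticsearch-setup-passwords interactive   # you type each password yourself
./elasticsearch-setup-passwords auto          # random passwords are generated and printed
```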
Now, let's run our curl command again. But this time, what I want to do is use the -u option to specify the username and password right after curl. I will type in
minus u and I will type in elastic as the username. Elastic is a superuser on
the Elasticsearch cluster. And then I will specify the password for my
elastic superuser. So I'll do control C. I'll paste it here and then
I'll put a space. And now if I hit Enter, you'll see that I'm able to
get a cluster information. Again, that means our
cluster is correctly set up with expects security and
password authentication. Now next, Ikebana instance, if we try and start it now, will give us an error because it cannot connect to Elasticsearch. So we have to supply the
Elastic Search credentials for my Kibana instance to connect to the
Elasticsearch cluster. To do that, we'll
do cd ADC, Kibana, and then under Kibana will
edit the Kibana dot YAML file. And inside this file, I'll go into insert and I'll go into the section
where it says Elasticsearch taught username
and the built-in username used for this integration is
Kibana underscore system. So this is username. So all we need to do is uncomment
the password and remove the default value and
copy the password for Kibana system user
and paste it here. And once you've done that, you can save this file. And let's start Kibana service. I'm CTO star Kibana. Now let us do a status and the service is
active and running. Now when we load the
Kibana web page, it should ask us
to authenticate. So I'll close this tab. I'll copy the public IP
address of my machine again, and I'll paste it and
I'll type in 5601. And let's see what happens
now at this time it is asking us for the
username and password. And here under username, I'll type in elastic and
under password I'll paste in the password for elastic
superuser a login. And now you can see that
I'm inside Kibana using my built-in user
elastic to look at all the user profiles on
this Kibana instance. You can go into management
and then you can go into stack management
and understand management. You can go into users. And from here you'll
be able to see all the built-in users
that were created when we configure it expects
security on this system. With this, we have
come to the end of this video. Thank
you for watching. I will see you in the next one. Bye.
13. Adding data to our elk cluster using Kibana: Hey guys, In the
last few lectures, we've installed our
Elasticsearch cluster. Now it's time to get some
data into our cluster. First, we will upload
non time-series data, and then we will upload
time series data. And then I will show you the
difference between both. So let's get our hands
dirty here on my screen. I've got my
Elasticsearch instance. So I will copy its
external IP address, open up another browser tab, paste it, and there have been 5601 and connect to
my Kibana instance. Now once I'm on the homepage
of my Kibana instance, there is an option called Upload a file, so I'll click on it to upload some data. Using this option, you can upload files up to 100 MB in size, and they can be in the following formats: a delimited text file like CSV or TSV, a newline-delimited JSON file, or a log file with a common timestamp format. For this lecture, we will upload delimited text files, which
could either be CSV or TSV. Now to get some dataset
for this tutorial, I'll open up another
browser tab and I'll go to a website
called kaggle.com. And on Kaggle.com I
will go to datasets. So I'll hit enter. Now
under this website, you will have to
register or sign up to this website to
download the datasets. And once you do from the
left navigation menu, if you are on datasets
in search for the datasets using this
search status Exit button. Now, you can also filter for
the datasets that you want. For example, because I can only upload up to 100 MB files, I will type in 100 MB here
and type in, in type CSV. And the initial file
size could be from 0 and b and military apply. Now from the results, I'll click on the dataset
called Netflix shows. And here I can see the size of this dataset which
is around 90 KB. If I click on compact, I can see some sample
data inside the dataset. And now I'll click on download
to download this file. This file downloads
as a zip file. So I'll show in Finder
and from a Finder window, it's in my downloads. So I'll double-click on
it to extract this file. And now you can see I've got the file netflix dot csv on my machine now will
go back to Kibana. I can either drag
and drop this file, but I can upload this file. So I'll go to my Downloads here. I'll search for this file. So netflix dot CSV, and I'll click Open. Now what Kibana
does is it analyzes the first thousand
lines of this file and gathered some
data about this file. For example, it has
found out that it is a delimited file and the
delimiter is a comma, and it also has a header row. Now we can click on overwrite setting to override some
settings on this file, or I can look at the analysis explanation to find out how Kibana came to this conclusion. I'll click on that and you can see some explanation of this analysis; I'll close this now. You can also see some file stats. For example, there are 13 distinct rating values in 999 documents. Once I've gone through all this, I will click on Import to import this data. The next page asks me
method for this demo, we will just use the simple
method and the index name. I will have to specify the name of the new index that
will be created. So I'll call it Netflix index. Now, again, index is the way how you store data
in Elasticsearch. The next option is to create an index pattern. Compared to an index, an index pattern is how you search and analyze that data in Kibana: an index is for storing data in Elasticsearch, and an index pattern is for analyzing that data in Kibana. I want an index pattern to be created, because I do want to visualize this data in Kibana. So I will click on Import. Then you can see it first processes the file, then creates the index, creates the ingest pipeline,
and uploads the data, and then creates an index
pattern for this data. Once all this is done, you'll get an import
complete message. You can either view this data in Discover or do some index management; for example, you can specify when this data will be deleted from the index, and so on, using the Index Management button. We will talk about Filebeat later on. But for now, we do want to view this index in our Discover tab. To get to the Discover page, one way is by clicking on the three lines at the top left, then under Analytics, then Discover. On the Discover page, on the left-hand side, the first piece of information is the index you are in. If you had multiple indexes, you can click on
this drop-down and select the index
you want to be in, then you have the
available fields. And if you have more than
a few fields, for example, some indexes might have a hundred fields or more, you can also search for field names in this box. If I wanted to, let's say, search for rating, I'd get all the fields which contain the word rating. I'll remove that. Now, on the right-hand side, the first piece of
information is the number of hits or the number of
documents in this index. So we have 1000 documents and then each individual document, if I expand that, you can see all the fields and their values that are
in this document. Now, against each field
you will see four buttons. The first button is
filtered for value. So if I wanted to, in
my index filter for all the documents that have
a rating description of 80, I can click on this
button and it'll show up my filter on
the top left here. And it'll filter the results
on my Discover tab to show all the results that match my filter. So you can see that, out of 1000, I've got 15 hits where the rating description was 80. To remove the filter, I'll just click on its x. So that was filtering for a particular value; you can also filter a value out. For example, say I want all the movies which were not released in 2004: I filter out the release year 2004. So I'll click on this minus, and this will give me all movies where the release year is not 2004. Again, I'll remove this and expand one of the documents. Now, next is toggle
columns in a table. So on this table there
are no columns as of now, if you want to introduce
columns to make your table look neater
for you to search on. For example, if I want the
first column to be title, I'll comb against title. I'll click on Toggle
column in table. And now you can see all my
documents are sorted by title. Next, I want my
column to be Elisa. I will click on Toggle column in table next to release year. And next I want my next column
to be user rating score. And I will click on
Toggle column in table to include user
rating score as well. Not to remove any of
the toggled columns, I'll simply go to that field
and I'll remove that column. Now I can sort by
this column as well. For example, if I
click on this arrow, it says sort user
rating score ascending. So let me minimize this. And if I click on that, it will sort the
user rating score in an ascending manner. And if I want to move
it to descending, I'll click on this again and it'll now sort this
by descending. Now next you can use
this search bar or search your dataset using
Kibana query language. So first let us see
all movies that were released in the year 2000. From my fields, I will look at which field gives me the release year. The field is actually called release year, so I'll type release year, and now I can apply some operators. For example, a colon means the release year is equal to some value; less than or equal to is basically the release year less than or equal to some value; greater than or equal to is what it suggests; and there are also less than and greater than. A colon followed by a star specifies that my dataset contains a release year in any form, which filters out all the documents that have null or empty values for this particular field. Let's say 2004, and now I'll click on Update, and this will show me all the titles that
were released in 2004. To double-check that I'll
add released here by clicking on this plus
sign on my table as well. So plus, and now you can
see it a little more easily. Now let's say I want to add one more criterion to my search. What I can do is put a space after my initial search and then use either an AND or an OR. AND means both conditions have to be true. So I'll type in AND, and in my result set this time I only want those movies which have a user rating score of 80 and above. So I'll type in user and select the field user rating score, and I want the value to be 80 and above, so a greater-than-or-equal sign, and this time I'll type in 80 and click on Update. Now, out of those initial hits, we only have these four. So this is how you can use Kibana Query Language to search your dataset.
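As a sketch, the kind of Kibana Query Language expressions built in this lecture look like the following. The field names are written with underscores here purely for readability; use the exact field names that appear in your own field list.

```
release_year : 2004
release_year >= 1984
release_year : *                                  # only documents where the field exists
release_year : 2004 and user_rating_score >= 80
```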
Now, if you want to save this query, for example if it is a query you run repeatedly, you can do so by clicking on this floppy disk icon. I'll click on that, then click on Save current query, and I will set my demo query
have a timestamp field, I'm not going to specify
include time filter, but if there were any
filters in my dataset, if I wanted to include them, I can pick that as yes, it I didn't put in any filter, so I'll remove that
and I'll click on Save and my query was saved. Now next time if I want to
do a reuse my saved query. So let us say if I
remove all this, click on Update to get
to my original set, and click on this
drop-down next to the floppy icon and
click on my query. You can see I can quickly rerun the same query
against my dataset. Now next, next, download some time series data
going back to Kaggle, I'll go to datasets
are under datasets. I'll click on Filters again. File size would be 0 to 100 MB. And on tags I will
type in time series, and then I'll click on, It's now out of the reserves. I'll click on the first result, omicron daily cases by country. And here if I go down I
can see the status that is roughly four MB and there is a single file COVID
variance dot CSV. And I can click on compact
look at some sample data. So let's download this data. Now once this data
file is downloaded, I'll again open finder,
extract this file. Now I've got the
COVID variance file. So I'll go back to
my Discover page, go to homepage of Kibana. I can click on upload a file. Here. I'll show you another method
to do the same thing. So I'll go here and
then under management, I'll go to integrations. Here I can see
there are a lot of default integrations
built into Kibana. So first I will type
in an integration for if I click on upload a file to get to my upload
a file integration. We'll talk about integrations
in an upcoming lecture. Again, I'll select the
file, so COVID variance, I'll click on Open, and this will analyze my data. Now, after analyzing
the first 11000 lines, It's found out that the data
is delimited by a comma. It has a header row and
it also has a timestamp. And the format for the
time field is ISO 8601. Now I'll click on
Import and I'll use the simple method
again to specify an index and I will
give a name to this index of COVID
underscore index. Now I can click on Import. Now once you've got the message
saying import complete, let's view this
indexing discover. Now here, if I click
on this drop-down, you can see that I
can switch between index is using my
index drop-down. I'll go back to COVID index. I can see all the available
fields on the left side. What's new this time is the time filter. Basically, my data contains time field values between May 11th, 2020, and 2022, and I can use that to narrow down on a specific date range. Let's say I want to only look at data between the 1st of December 2020 and the 1st of January 2021. What I can do is click on the button next to Started, and here I have to specify a start date. I'll click on Absolute for now and click on December, and I want this to be the 1st of December 2020, starting at 00:00. I'll click on that, and next I'll click on Ended. Here I will specify 2021; I can leave this as January and click on the 1st, which was a Friday. Now I can specify some other time, but the data only goes up to 23:30, so I'll click on that and
I'll click on Update. It'll only show me the subset of data that matches this
time frame. For your reference, this is the difference between time-series data and non-time-series data: if you look at your Netflix index again, there was no time filter for you to filter the data on, because that data did not have a timestamp. Now, going back to
the COVID index, you can also specify
relative time frames. For example, you can
specify data from, let's say, a month ago, if I have one and you can specify one of the
lateral fields, for example, 1 second, 1 are one-minute, etcetera, etcetera. So click on that. And what ended? I go Relative again. That is say from a month
ago, ten days ago, from a month ago
to ten days ago, and click on update. And this will give me all data from a month ago
to ten days ago. Now, instead of ten days ago, I can also specify now and set the end date
and time to now, and it will give me our
data for last one month. So this is the
difference between time series and non
time-series data. You do not have a
time filter value in non time-series data. With this, we have come to
the end of this lecture. Thank you for watching. I will see you in
the next one. Bye.
14. Creating visualizations with Kibana: Hey guys, Now that
we have some data in our cluster and we can explore that data using
the Discover tab. It is time for us to
visualize this data. Data visualizations allow
your business users to make sense of large amounts
of data very easily. And this is where you can
really see the power of Kibana. There are two ways to create
visualizations in Kibana. First, you can create dashboard
specific visualizations, or you can use the visualization library where you can add visualizations which you need on more
than one elements such as dashboards
or canvas work pads. Now, there are a
few options for you to choose when you create
your visualizations. First, you have lens, which is a simple
drag-and-drop editor. Then we have TSVB, which is used for advanced analysis of time-series data. After that, you can also use Aggregation based, which is used for aggregation-based chart data. You can also choose Maps if your data has geolocation coordinates, and you can create custom visualizations using the Vega syntax. This is out of scope
for this course. We will not talk
about this option. Now let's get started. In this demo, we will
create a visualization that shows the average movie
rating score by release year. First it to the left
navigation menu and under analytics will go
to visualize library. From here, we will
click on create new visualization to create
our first visualization. And then we will choose lens. Now on the lens editor, first from the left menu, you can choose the index
you want to work on. For example, if I wanted
to work on COVID index, I will choose COVID
index and it'll show me all the fields available
inside COVID index. But for this demo, I will
choose Netflix index in the middle is where
your graph will show. From here, you can choose
the type of graph you want. So for example, if you want
a metric type visualization, a bar, a line, a donut, a pie, and so on. For this demo, we will choose Bar vertical. Now, the right-hand
side is where you will specify which fields
go onto which access. First we'll have to define a horizontal axis
for our drafts, I will click on Add or
drag-and-drop a field. And from here I will
choose the drop-down for selector field and
choose release here. Now here I can increase or decrease the granularity
of this field, but we'll leave it as default. We can change the display name, so I will capitalize the
first alphabet of each word. You can also specify
the value format. I will leave it as default
and I will click on Close. Now next we can choose
the vertical axis. I will click on Add or
drag-and-drop a field again. And then from selector field, I will choose user rating score. And then from selector function, I will choose in average
to show the average score. Now next, you can again
change the display name so I will capitalize the
first alphabet again, average of user rating score. Next, you can change
the value format. So for example, if you
want the format to be percentages instead of
numbers, you can do that. And this will then show you the percentage of user rating. I will leave it as
default for now. Now, you can also change
the series color. For example, if I click on
Ceres color and choose blue, the graph will change to blue. I can also change which side of the axis my data will show if I click on
left, which is default. If I wanted to switch it, I can change it to right. And this will start showing
the data on the right axis. We'll leave it its default, and I'll click on Close. Now here, beneath your graph, you can see some
suggestions from Kibana. For example, if you would've
selected line chart, this is how your
graph will show. If you would've selected metric, this is how your
visualization will show up. We'll leave it as what
we're designed now on top, you can further filter this data by using
Kibana query language. For example, let's say
we only want to show data for movies that were
released after 1984. What I can then
do is I can click here and then I can type
in the release year, and then I can choose is
greater than or equal to. And then I can type in
1984 and click on Update. And now our graph
will only show data for movies that were
released after 1984. Now next, let's remove
the cake will filter, and then it can save
this visualization. First, we have to
provide it a title, so I will call it average
movie rating by release here. Now we can provide
some description: this visualization shows the average movie rating by release year on Netflix. Next, we can add
it to our dashboard, create a new dashboard
for this visualization. But for this demo we'll
just choose none. And you can see that the add to library checkbox has been
selected by default. Now after that, you can also use tags to group
visualizations together. Now, we'll click on tags
and click on Create tag. And I will create a
tag called Netflix. You can change the
color of the tag, so I can change it to blue
and I'll click on attack. Now you can see the
Netflix tag has popped up here and then we can click
on Save and Add To Library. Now once you've done that, if you go back to
visualize library, you can see your visualization has been added to the library. Now before I let you go, I just want to show
you where tags are stored from left
navigation menu, if you go to write down
under management and then stack management
here under Kibana, you need to select tags. And this is where you can see all the tags that you have
created inside Kibana. With this, we've come to
the end of this lecture. Thank you for watching. I
will see you in the next one. Bye.
15. Telling a story using Kibana Dashboards: Hey guys, are dashboard
is made up of multiple visualizations
that tell a common story to
your business users. Before you can create a
dashboard, you need two things. First, you need a
clear understanding of your data, and second, you need clear requirements of what you want to achieve
from your dashboard. As a first step, I
have listed on screen all the fields
that are available in COVID underscore index, which is the index we
will use for this demo and what is stored
in each field. For example, the location
field contains the name of the country for which the variant information
is provided. Date provides the date
of the data entry. Variant is the name of variant corresponding to that
particular data entry. Num underscore
sequences contained the number of
sequences process and percentage sequences
contains a percentage of sequences from the
total number of sequences. And finally num sequences. Total is the total number of sequences for that
country variant end date. Now, as part of this demo, we will create three
visualizations in our dashboard. First, we'll create a number of unique COVID variants worldwide visualization. Second, we will create a
visualization that depicts the total number of sequences
for top ten countries. And finally, we'll
create a visualization that shows the variant spread by top countries. So let's get started. To create a dashboard, from my
Kibana homepage. I will click on the left
navigation menu and then click on dashboard from
the dashboard page, I will click on create
new dashboard. From here, I'll first change the date range from three months ago to three years ago, so that we are searching on all the data in this index, and I will
that we need to create is number of unique COVID
variance worldwide. So first we'll choose metric. Then under metric, I will click on Add or
drag-and-drop a field. From Select a field, I will choose variant, and then I will click on Close. From this visualization, we can get the number of unique variants
on save and return. After that, I will click on
create visualization again. And for my second visualization, I need to depict total number of sequences for top ten countries. So I will go back to my
Kibana Lens page. Here I will change the graph type to Bar vertical, and then for the horizontal axis I will choose the field location and pick the top ten values. Then I will click on Close. Now for the vertical axis, I will click on Add or drag-and-drop a field again, choose the num underscore sequences field, and then choose Sum under the Select function. Then I can click on Close. This gives me the total number of sequences
of COVID for this particular country
in this date range. So I'll click on
seven return again after that for our
third visualization, I will click on
create visualization. And this time I need to create a variant
spread by top country. So I will change the graph
type two bar vertical again. And for horizontal axis, I will choose location and change it to top ten countries. Now next, I will
choose vertical axis, the field number
underscore sequences. And this time I will
choose the function maximum and click on close. After that, I will break this data down by
number of variants. So I'll click on add or drop
a field and a breakdown, and then I will choose variant. And from here I can specify the count of values
that I wanted to show. By default it is three, so it will only
depict top three. But because we have plenty
for total variance, I can also type in 20 volt here. And then for each country, it will break the data down
by all variants affect click on Close here and I hover over the data for
the United States, you can see the details of how many cases for each
particular variant of COVID, It's coming up within a
specified date range, which is last three years. Now, I will click
on save and return. And in this dashboard we can see the three different
visualizations. Now what I can do is I can save this dashboard.
I can give it a name. So COVID dashboard
and under tags, I will click on create a tag, and then I will give
it the tag name of COVID and click on create tags. I can click on Save. Now if you want to store
time with the dashboard, you can do so, but
I will leave it as default and click on Save. Now once the dashboard is saved, you can go back to dashboard. This dashboard again,
you just click on it and to look at
different dates, all you have to do
is, for example, if I change from three years ago to three weeks ago
and click on update, you will see the results
change on its own. Or we didn't had any data
for last three weeks. So that's why it
is showing notice. But if I was to say, let's say three months ago and
click on Update, I will start seeing
the data again. Now this is how you can create a Kibana dashboard with this welcome to the
end of this lecture. Thank you for watching. I will see you in
the next one byte.
16. Real time presentations using Kibana Canvas: Hey guys, Kibana Canvas is a data visualization and presentation tool that allows you to pull live data from Elasticsearch and then enrich it with colors, images, and text to create powerful presentations for your business users. Canvas uses workpads to hold the visualizations, images, text, and so on that make up your presentation. So let us get our hands dirty. From the Kibana homepage, to get to Canvas you need to click on the left navigation menu, and then under Analytics, click on Canvas. Now, to create our first workpad, I need to click on Create workpad. From this screen, you can add your first element by clicking on the Add element button. For the first element, we will use Text. Now here, you can use
this element to add some text to your presentation by using the markdown language. So I will change that
text per say welcome to the Netflix
index presentation. I will remove this
text and I will say, here is the overview of data
in Netflix underscore index. Then beneath this,
you'll see that it is referring to some information
in the demo data. For example, it is giving you the row length
and then it is giving you all the column names by using each columns attribute. So it has all the column names, but this is reflecting the
demo data for this element. So I'll first click on apply. And after I've done that, I'll click on Data and change the data source from demo
data to Elastic Search. And then I will make
sure that I'm on Netflix index and
I'll click on Save. And as soon as I hit Save, you can see that
now this textbox reflects the data from
my Netflix index. Now for the second element, Let's add the visualization that we added into our
visualization library. Click on Add Element and then I'll click on Add from Kibana. And then I will select
the visualization that we saved inside
visualization library. And next we'll move
it on the side. Now you can change
the time filter from by clicking on the time. For example, if I wanted to
change from last 15 minutes to let us say last 30
minutes, it can do that. And this panel will reflect
the change in time. Now as the third element, I will create a chart. So I'll click on Add element, I'll click on Chart, and then I'll click on metric, and then I'll move this metric beneath my
visualization element. Now for this metric, let's change the data source
again to be Netflix index. And I'll click on
Save for a minute. This will give us an error, but let's move on to display. Now, for measure, we need
a unique value of title. Now, the metric
format is numbered, but we need to change the label. So I'll replace countries
with unique movie titles. Now, this is the number of unique movie titles
in our Netflix index. Next, we can also change
the metric texts. So let's say we want the
matrix to be in pink. Why not let us do that? And we can also change the size and font and
color of our label. Let's say I want my
font to be Helvetica. So I'll click on that and
I'll change the size a bit, so I'll change it to 24. And let's say I want
the color to be green, so I'll select that. Actually this one seems better. Now I can adjust the size
of my visualization. Know next, let's add an image. So I'll click on Edit limit. I'll click on Image, and then I'll click
on image again. Now this gives us
a default image. So I'll minimize this for a bit. And I can import an image
by using the Import button. It can provide a link
to an image URL. For now, I will
click on import and then I will click on Select
or drag and drop an image. Now, I will choose an image
from my downloads folder. And let's say I want to show my face on this
presentation here. Now to view this work
pad on full screen, you just have to click on
this full-screen button and you will see all of the
information in fullscreen. I will exit out of
full-screen for now. This workpad is automatically saved: if I go to Canvas, I can see my workpad here, and I can click on the workpad again and I'll get all the data elements back. To share this workpad, I can click on Share, and I can create a PDF report, so I can select full-page layout and
click on Generate PDF. And now it says
your basic license does not support PDF reporting. So if you had a
production license, you would be able to
download the PDF report. Now let's click on Share again. Now, if you click on
Share on a website, it will give you
instructions of how you can share this canvas
on the website. Next, we can also
download this as JSON. So I'll click on
Download as JSON, and it will download the JSON for this workpad. To change the name of this workpad, I can go to Workpad settings and change the name to Netflix workpad. Now I can change some details like the resolution, any variables in this workpad, global CSS settings, and so on. But for now, this is how you can create a Kibana Canvas workpad. With this, we've come to the end of this lecture. Thank you for watching. I will see you in
the next one. Bye.
17. Overview of elasticsearch authentication realms: Hey guys, In this lecture, I will give you a
quick overview of authentication realms available
to us in Elastic Search, authentication realms
are used to authenticate users and applications
against elastic search, there are two main types
of authentication realms. The first type is internal authentication realms. These are internal to Elasticsearch, and there is no integration with an outside system. This type of realm is fully managed by Elasticsearch, and there can be a maximum of one configured realm per internal realm type. Now, let's have a look at an example of each internal realm type. The first one is the native authentication realm. The native authentication realm is where users are stored in a dedicated Elasticsearch index. This realm supports an authentication token in the form of a username and password and is available by default when no realms are explicitly configured. So remember, when we were installing Elasticsearch, we used a script to reset the passwords for our elastic and kibana underscore system users; those users are configured under native authentication. This realm is available on the free license. The file authentication realm is also an internal authentication realm available on the free license. This realm is where users are defined in files stored on each node in the Elasticsearch cluster. It supports an authentication token in the form of a username and password, and it is always available. Now after this, let's have a quick look at the
external realms. External authentication realms require interaction with parties and components external to Elasticsearch, such as Microsoft Active Directory or another enterprise-grade application. If you're running a production-grade cluster, it is highly likely that you will have to configure an external realm. In an Elasticsearch cluster, you can have as many external realms as you'd like, each with its own unique name and configuration. The Elastic Stack security feature provides the following external realm types: LDAP, Active Directory, PKI authentication, SAML, Kerberos, and OIDC. The LDAP authentication realm uses an external LDAP server to authenticate its users. The users are stored in that external directory, and we integrate with it to authenticate them. This realm supports an authentication token in the form of a username and password and requires explicit configuration in order to be used. Next, we have the Active Directory
authentication realm. This realm uses an external Active Directory server to authenticate the users. Your users are stored in, for example, Microsoft AD, and are authenticated using usernames and passwords. The PKI realm authenticates users using a public key infrastructure, that is, a public and private key pair. This realm works in conjunction with SSL and TLS and identifies the user through the distinguished name, or DN, of the client's X.509 certificate. So if you use this realm, you will need certificates installed for your users. The SAML realm facilitates authentication using the SAML 2.0 Web SSO protocol. This realm is designed to support authentication through Kibana and is not intended for use with the REST API. Now, the Kerberos realm authenticates the user using
Kerberos authentication; users are authenticated on the basis of Kerberos tickets. The OIDC realm facilitates authentication using OpenID Connect and enables Elasticsearch to act as an OpenID Connect relying party
authentication to connect to our
Elasticsearch cluster. For your production clusters, it would be very likely
that you will need to configure one of
these external realms. What I will do is I will provide some links to various URLs on how you can configure each of these realms on your
production clusters. With this, we have come to the
end of this short lecture. Thank you for watching. I
will see you in the next one. Bye.
18. Understanding Kibana Role based access controls: Hey guys, so far in this demo, we have been using
the elastic superuser to authenticate
against our cluster. This is fine for our demo lab, but in a production environment, you might have
hundreds of people who need to authenticate
and you will not want to share the password for your superuser to each user. Now, enter roles. Roles are a collection of
privileges that allow you to perform actions and
Kibana and Elastic Search. Now these roles give you
privilege to perform some action against your
Kibana elasticsearch index. For example, some users might only want to be able
to create dashboards. The others you
might want them to manage your indexes further, there might be a group
that needs to act as an administrator of
your Kibana instance, what you will need to
do is you will need to create roles for each of
these different user types. Each role having different
levels of access, and then you will assign those roles to each of those users. Users are not directly granted privileges, but are instead assigned one or more roles that describe the desired level of access. So, as I said, one role might give users only access to view the dashboards. Now, when you assign users multiple roles, the user receives a union of the roles' privileges. Let's say you've got two indices, index A and index B, and you have a role A that grants access to index A and a role B that grants access to index B. Now let's say you have a user, and this user gets assigned both these roles. Because he has these two roles, the union of their privileges will allow him to access data from both index A and index B. Now, Elasticsearch comes
with some built-in roles. For example, the role beats underscore system grants the access necessary for the Beats system users (Metricbeat, Heartbeat, Filebeat, and so on) to send system-level data to Elasticsearch. The kibana underscore system role grants the access necessary for the Kibana system user to read from and write to the Kibana indices. And if you remember, when we were configuring Elasticsearch and Kibana, we configured the password for this user inside our kibana.yml file. Now, Elasticsearch also
comes in with a superuser elastic that grants access to
all areas in your cluster. So it's full system access. Now as a security measure, you would not want to login with this user after you
provision the system. And as part of
your provisioning, you will provision either
administrator and super-users that can be linked to a person that is going
to perform that role. And once you've done that, you would want to store this super users passwords in a safe place and
never use it again. Let's head back to our Kibana
instance to see how we can create roles and users
not to create a custom role. First we will click
on Create role, then we will need to specify
a role name for Rome name, I'm going to type in
administrative because this is an administrator
role that we are configuring under
cluster privileges, I will click on All. And then under Index privileges, I can either give
them access to one of these indexes or because this
is my administrator user, I will type in Start here to give them access to
all the indices. And then under privileges, I will click on All. Next. Under Kibana, I will click on
Add Kibana privilege first, we have to give them access to all spaces and then they can either provide this role customized access
to each feature. For example, under Analytics, I can give this role access
to Discover, Dashboard, and so on. Or, since this is an admin role, we can give it all privileges and click on Create global privilege. Then let's click on Create role. Next, let's create a role which only has access to certain indices. So I'll click on Create role, and I will call it COVID admin. We will keep the rest of the settings the same as before; under indices, I will choose COVID index and give this role access to all privileges. Then, the same for Kibana: I will give it access to all spaces and all features. So I'll click on Create global privilege and click on Create role.
Now next, let's create three users. The first will be an admin user. So I'll click on Create user. I will give it the username admin, the full name would be admin one, and the email address admin at test.com. Then let's specify a password. After that, we need to assign some roles to this user. For now, we will assign the administrator role and
we will also assign the monitoring user
role because this role is needed to access the
stack monitoring feature. And then I'll click
on Create user. Now let's log out of this superuser account, and then we'll log in as our newly created administrator to do further provisioning. First, we'll type in the username and then the password for this user. Let me dismiss this warning. Next, let's first go to Discover and check that we've got access to both indices, Netflix and COVID. I can see some data
to cover index. Now here I need to
expand my time search. So 1415 weeks ago, and I'll click on update, and I can see that I've got
access to this index as well. Now next, let's
go to dashboards. I can see the COVID
dashboard that we created. Now next, let's go to management and staff
management again. And here, users. Now we'll create a second user. Click on Create User. And this is a user which
has administrative access. It only to the COVID index. So we'll call it COVID
underscore admin, and the name would
be COVID-19 and email address COVID test.com. Then I will provide
it some password now for the privilege, I will give it the
privilege of COVID admin, and click on Create
User learned. Let me log out of this user. And here I will type in
COVID underscore admin, and then I will type in the
password for this user. Now once you are logged in, Let's go to our Discover tab, so home and then discover. And here, if you
choose Netflix index, you will see no
matching indices and no results match your
search criteria. But if I choose the
COVID index and change that date
from 15 minutes ago, we change it to 15 weeks
ago and click on update. See that I can see some data. Now this user can
also create users. So if you go to Stack Management and Users, I can create a user. But I'll show you one other thing: because we did not give the monitoring underscore user role to this user account, Stack Monitoring does not show up under Management for this user. This is an important thing to note: even though you can provide all privileges under Kibana, you have to specifically assign the monitoring underscore user role to the users you want to give the privilege to access the Stack Monitoring section under Management. Now next, we'll create a
third user that only has access to Dashboards and Discover. To do that, first, let's go to Roles and create a role,
COVID underscore reporting. Now we do not want to give this user any cluster privileges, so we'll leave that as blank. We only want this user to have access on COVID
underscore index. And the only
privilege we want for this user is to be able
to read from this index. Now next, we need to add
some Kibana privileges, so we'll give it
access to all spaces. And next we'll keep it as customize and then
go to analytics. And then we'll give
them access read-only on Discover and
read-only on Dashboard. And we'd also give
them access to Canvas and visualize library. And then we'll click
Create global privilege. And then we'll click Create
role to create this role. Now we go to users and
we'll click on Create User. And this time we'll call this user COVID underscore reporting. The name would be
reporting user, and the email would be reporting at test.com. We'll give it a password, and then under privileges we'll give it the COVID underscore reporting role, and then we'll click on Create user. After that, let's log out and check the access for this user. Under username, I'll type in COVID underscore reporting, and I'll specify the password. Now, once you're inside,
on your home screen. That is because your user only has access to
analytics features. Now if you click on our
left navigation menu, and under analytics, we go to discover, we have some data for COVID
underscore index. However, if we switch to
Netflix index, we will get an error saying no indices match the pattern netflix index. That is because our user does not have access to the Netflix index. Under Dashboard, if I just click on the COVID dashboard, I'll be able to see all the data for my COVID dashboard. Now next, let's move on to Canvas and open up the Netflix workpad. Since this user does not have access to any data in the Netflix index, I'll get errors. Now let's log out of this user and log back in
left navigation menu and go right down to
stack management. And then let's go to users
now from this screen, if you want to delete
the user, for example, if I wanted to delete my COVID
underscore reporting user, I can just select
that user and click on delete one user
and click on Delete. But I'm not going
to do that now, if I want to change the
password for this user, I'll click on it, go down and
click on Change Password. And here I can update the
password for this user. Soul cancel out of this as well if I wanted to
deactivate this user, so keep this user on the system, but prevent users
from accessing it. I can click on deactivate user. And if I want to change
privileges for this user, I can come in here and
assign it a different role. This is how you can use
users and roles to provide role-based access controls for your Elasticsearch cluster.
the end of this lecture. Thank you for watching. I will see you in the next one. Bye.
19. Categorising your dashboards using Kibana spaces: Hey guys, in an enterprise there will be multiple teams that need access to Kibana. For example, the marketing team might need access to marketing dashboards, and the operations team will need access to operations dashboards. This can make things cluttered really quickly. To help you with this situation, Kibana has a feature
called spaces. It allows you to organize
your saved objects, such as dashboards, into
meaningful categories. From our example, you will
create a space for marketing and they will create
all their dashboards inside that space. Likewise, you will
create a space for operations and
they will clear their dashboards inside
IT space marketing will not be able to see what's inside operations or even note that an operations space exists. Likewise, operations will not
know about marketing space. Now for a user to be
able to see a space, you need to specifically assign excess with that
space for that user. For example, everyone in marketing team will get
access to marketing space. Everyone in operations team will get access to operations space. And if someone needs access to both marketing
and operations, you will assign them
access to both spaces. Now Kibana creates
a default space for you and all our objects that we've created so
far in this course have been created inside
deck default space. And lastly, to manage spaces, you need Kibana underscore
admin role are equivalent. Now let's get cracking
on my Kibana webpage. I've logged in using
my admin account, admin one to configure spaces, I need to click on the left
navigation menu and then go right down to stack
management under management. So I'll click on it. And
spaces configuration can be found under
Kibana and spaces. Now here you can see I've just got my default
space for now. We'll create a new space here. So I'll click on create space, and I will name this
COVID underscore space. Now next, I can
create an avatar. By default it is
set to initials, but you can also upload
an image for your avatar, will leave it as
default initiates, but I'll change the
background color to green. Now, under features, we can specify which features are available in this space. I'll remove everything except Analytics, because I just want to access the analytics features using this space. And then I will click on Create space.
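Kibana also exposes a Spaces HTTP API if you prefer to script this. A minimal sketch of creating a similar space follows; the id, name, and credentials are examples, the kbn-xsrf header is required for Kibana APIs, and disabledFeatures (left empty here) would take the ids of features to hide.

```bash
curl -u elastic:'<password>' -X POST "http://localhost:5601/api/spaces/space" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"id":"covid_space","name":"COVID space","disabledFeatures":[]}'
```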
Now, once I've created the space, we need some objects in it. All our objects are by default inside the default space. For now, what we'll do is copy the COVID index and the COVID dashboard to COVID underscore space. To do that, I need to go to Saved Objects and find the COVID underscore dashboard. From the three buttons to the far left of that line, I'll click and then click on Copy to space. Once there, I can
new objects with random IDs are checked
for existing objects. I leave it as default
because I know there's no objects in my COVID
underscore space. And then I need to click
on COVID underscore space, under select spaces, and then I'll click on Copy to one space. Once the copying is done, it will show how
many objects for copied and you can
click on Finish. Likewise, I need to copy
COVID underscore index shall click on that and then click on Copy to space as well. Then I'll click on COVID, underscore space it, and then I'll click on
corporate or one space. Now, if there are any errors, you will see those here. For now, I'll click on Finish. Once the objects have been
moved into this space, began assign this
space to a role. So let's score to
security and then rolls and find out COVID
underscore reported all. Once inside we need
to go right down to Kibana and then click
on this pencil icon. And from spaces where
it says all spaces, we need to remove that and then choose COVID
underscore space. Once you've done
that, you can click on Update space privileges. So this rule can only give
access to COVID space now. And then you can
click on update rule. Once you've done that, let's logout of this admin user and log back in using
the reporting user. From this screen, I will type
in the username COVID and the score reporting and provide my password and click on Login. Now here, right off the bat, you can notice that I'm no
longer in the default space. And now I'll click on the
left navigation menu. And first we'll go
to our Discover tab. Now you can see by default
I'm inside COVID index, but affect click
on the drop-down. I don't even see
the Netflix index. That is because it's not
assigned to this space. Now to view data, I need to change the
time search through 15 weeks ago and I'll
click on Update. And you can see I've
got access to data. Now next, let's see if our
dashboard is still working. So I'll click on Dashboard
and click on COVID dashboard. And you can see that I've got data inside my
dashboard as well. If somebody about this error, now next, if I go to Canvas, you can see that
I don't even see the Netflix Wordpad anymore. So Spaces is a good way to keep your objects organized
for various categories. For example, marketing can have access to only their
dashboards, et cetera. Now obviously, if you create a new object inside this space, it will automatically
gets assigned to this space and not
to default space. So if I click on the
So if I click on the left navigation menu, click on Visualize Library, then click on Create new visualization and choose Lens, let's say I want the total number of countries in my database. I'll pick the Metric visualization and click on Location, and I get a unique count of Location, which is 64. But I can't save this, because my reporting user does not have any write permissions in this space. If I log out and log back in as the admin user, and select COVID_space here, you can see that even though this user has far more privileges, I only get to see what's allocated to my space: I don't get all the Search and Observability features, etc., I'm only getting Analytics and Stack Management. From here, I'll click on Visualize Library, click on Create new visualization and Lens, change the visualization type to Metric, and change the date to 15 weeks ago. I'll drag Location in again and see a unique count of Location of 64, and now I've got the Save button back. So I'll click on Save, call it unique countries, and add it to an existing dashboard, which is the COVID dashboard, then click on Save and go to dashboard. Here I've added the fourth item, unique countries, to my dashboard, so I'll click on Save to keep the changes to this dashboard. And now, if I go back to my reporting user, covid_reporting, click on the COVID dashboard, and change the date to 15 weeks ago, I'm able to see the new visualization as well. This is how you can use Spaces to categorize your objects inside Kibana. With this, we have come to the end of this lecture. Thank you for watching. I will see you in the next one. Bye.
20. Introduction to Logstash: Hey guys, let's talk about Logstash in this section. Logstash allows you to aggregate your data from different sources, transform it, and then send it over to Elasticsearch or any other application of your choice. For example, if you want to analyse the data, you can send it to Elasticsearch or MongoDB; however, if you just want to store the data, you can send it to something like Amazon S3, or store it in a file on your local file system. There are a lot of destinations depending on your use case, and I've attached a link to an article where you can find more information about the destinations available in Logstash. Before we get into Logstash's lower-level details, it is always a good idea to understand some use cases. The first use case for Logstash is logs and metrics: sometimes there is no standard way to understand your logging data, and in these scenarios Logstash can help you define the logic to transform and make sense of your log data. Next, you can consume data from an HTTP web service, or generate events by polling an HTTP endpoint; for example, you can poll Twitter and pull in data from Twitter using Logstash. You can also use Logstash to transform and understand the relational or non-relational data that is stored in your databases. And finally, Logstash is a common event collection backbone for ingesting data shipped from mobile devices, intelligent homes, connected vehicles, healthcare sensors, and many other industry-specific IoT applications.
Now, a Logstash pipeline has three stages: an input, a filter, and then an output. For example, say you've got some data in a data store; it could be an IoT data store, an application, or a database, anything, and you send that data to Logstash. Inside the Logstash pipeline, you first have to define an input. This is you telling Logstash how you're going to get this data: for example, you could get it from a file, you could get it on a TCP or UDP port, or it could be syslog data. You define all of that in the input. Next, once you've got that data, you have to process it, and that processing logic is configured inside the filter. Filters are how you transform your data, and you can also use conditionals inside a filter to apply transformations only when an event matches certain criteria. Some commonly used filters are grok, which has about 120 built-in patterns that allow you to parse unstructured data (we'll look at grok in more detail in an upcoming lecture); mutate, which performs general transformations on event fields, such as renaming or removing a field, replacing a field, and modifying fields in your events; drop, which discards an event completely; and clone, which duplicates an event. Finally, the output: once you've got the data through your input and done some transformation, you need to define where this data goes next. The output is how you pass an event on to one or more destinations. This is most commonly Elasticsearch if you're using Logstash to integrate with Elasticsearch, but it can also be something like a file output, or you can send it to something like a Kafka queue. It's up to you; basically, the next destination of the data, once it leaves the Logstash pipeline, is configured under output.
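To make those three stages concrete, here is a minimal pipeline sketch. It is only an illustration: the TCP port, the grok pattern, and the Elasticsearch address are placeholder assumptions, not values used later in the course.

    input {
      tcp { port => 5000 }                # read events from a TCP port (placeholder source)
    }

    filter {
      grok {                              # parse unstructured text into named fields
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
    }

    output {
      elasticsearch { hosts => ["http://localhost:9200"] }
      stdout { codec => rubydebug }       # also print events to the console for debugging
    }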
Now, before we actually start configuring Logstash pipelines, there is one more thing you need to understand, and that is codecs. Codecs are stream filters that can operate as part of an input or an output; a codec lets you easily separate the transport of your message from the serialization process. Take the example of the multiline codec. Think of an application log entry that is spread over multiple lines: by default, Logstash will try to turn each line into a separate Logstash event, and you might not be able to understand that log entry if it is spread over multiple events. You want all of it to go into a single event, and in that case you use the multiline codec. Similarly, if you want to encode or decode your data as JSON, you use the JSON codec. So codecs are filters that help you separate the transport of your message from the serialization process in Logstash.
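To make the multiline example concrete, here is a hedged sketch of a file input that folds continuation lines (for example, stack traces) into one event; the log path and the pattern are assumptions for illustration only.

    input {
      file {
        path => "/var/log/myapp/app.log"      # hypothetical application log
        codec => multiline {
          pattern => "^\s"                    # lines starting with whitespace...
          what    => "previous"               # ...are appended to the previous event
        }
      }
    }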
With this, we have come to the end of this lecture. Thank you for watching. I will see you in the next one. Bye.
21. Ingesting and transforming CSV data using Logstash: Hey guys, in this lecture we will transform CSV data using Logstash. We will create a pipeline configuration file with three phases: an input phase, where a file input will help us ingest a CSV file; a filter phase, where we will use the CSV filter to transform our CSV data; and an output phase, where we will use two outputs. The standard output will be used during the testing phase, and an Elasticsearch output will be used to finally store the data in our Elasticsearch cluster. Once the pipeline configuration file is ready, we will test it by running Logstash. Let us get started. Here on the screen I've got my single-node Elasticsearch cluster. First I will create two SSH connections, then copy the external IP address and check that I can connect to my Kibana instance; I'm able to connect to my Kibana instance on port 5601. Now we'll go back to our SSH session, click the Settings icon at the top, and click Upload File. From my Downloads I'll upload the competition rankings CSV file and click Open. Once the file has finished uploading, I can type ls in my home directory to make sure the file is there. Before we can start transforming this data, we first have to understand it. So I'll cat the competition rankings CSV file and hit Enter: there are about 5,000 records in this file. Now I'll clear the screen and run head on the same file to look at the first few records. The first line is the header, and it names each column: for example, the first column is rank, the second column is tier, the third column is username, and each column is separated by a semicolon. So: 5,000 records, and each column separated by a semicolon. Now we can start writing our pipeline configuration file. To do that, I will cd into /etc/logstash and do an ls. Inside logstash there is a directory called conf.d. When you run Logstash as a service, it picks up all the configuration files from inside conf.d, so we will write our configuration files inside this directory. There is also a configuration file called logstash-sample.conf, which is a sample configuration file that Elastic provides; let's have a quick peek inside it. This file has an input that accepts data on a port and then, without any transformation, sends that data on to Elasticsearch. So let's go inside conf.d and write our own pipeline configuration file. I'll clear everything and do an ls; this directory is empty. I'll do sudo touch and call our configuration file csv-data.conf, then open it with sudo vi csv-data.conf and go into insert mode.
In our pipeline, the first thing we have to define is the input phase. So I'll type input and then curly brackets, and move the closing bracket down. Here we need to specify a file input, because we are ingesting data from a file, so I'll add curly brackets again and do some formatting. Under the file input we need to specify some settings, so I'll open another tab, search for the Logstash file input, and go to the file input plugin documentation. On this page you can read about how the plugin works: for example, the different modes (there's a tail mode that tails the file, and a read mode) and how this input tracks the current position in the file it is watching. What we're interested in are the input configuration options. There you'll see all the settings that can be configured on this input, and also which settings are required. At the moment the only required setting is the path setting, and it is an array. The path setting specifies the path of the file you're going to read. Let me open the SSH session; here I'll do an ls and then a pwd, so this is my path and this is the filename that we need to give this input. Back in my pipeline configuration I'll type path, then the arrow, and specify the path: I'll type double quotes, paste the directory I copied, add a slash, and then paste the name of my file after it. After this, let's look at a few more settings. The next setting is start_position; it is a string and it can be one of beginning or end. We need to find out the default value of this setting, so I'll click on the link, and it says the default value is end. That means it will start reading the file from the bottom; if you want it to start reading the file from the top, which is what we want, you have to configure it as beginning. Let's go back; I'll type start_position and the value will be beginning. After that, I want to show you another setting, so I'll scroll back up. There is a setting called sincedb_path. The sincedb is a database file that keeps track of how much of the file Logstash has already read: if your file has 100 records and Logstash has read 50 of them, that position is stored in the sincedb file. sincedb_path is the location of that sincedb file, and the default is under path.data/plugins/inputs/file. In our case, because we've just got a single file and we want to keep re-reading that same file while we test various options, I will configure sincedb_path as /dev/null. That means the sincedb is effectively written nowhere, which lets us keep re-reading the file. So I'll type sincedb_path and set the value to /dev/null. Once I've done that, I'm done with my file input.
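Put together, the input phase we just built looks roughly like the sketch below; the path is whatever your own pwd and filename were, so treat it as a placeholder.

    input {
      file {
        path => ["/home/your-user/competition_rankings.csv"]   # placeholder: your home directory and CSV file name
        start_position => "beginning"                          # read the file from the top instead of tailing it
        sincedb_path => "/dev/null"                            # don't persist read progress, so re-runs re-read the whole file
      }
    }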
Next, I have to specify the filter that will transform our data, so I'll type filter and curly brackets. Under filter, what I need to specify is a CSV filter: csv and curly brackets again. Under csv we'll have to specify some settings, so let's look at the documentation: I'll search for the Logstash CSV filter, open the documentation link, and go to the configuration options. First, we can either auto-detect column names or specify them ourselves; I want to specify the column names manually, so I need a setting called columns, and it is an array. I'll move back to my filter, type columns, and then square brackets, because this is an array. To get the column names I'll go back to the second SSH session, run head on the CSV file again, and copy the column names from the header line, then move back and paste them in. Now I need to do some formatting: first wrap each of these column names in double quotes, and then change each semicolon into a comma. I'll do that now. Once that is done, the next thing I want to specify is the character that separates my columns. Back in the documentation, the setting we need is called separator, and it's a string. So I'll go back, hit Enter, and type separator => and, inside quotes, a semicolon, because our data is separated by semicolons. Once I've done that, I want to show you two more settings. First, we can skip empty columns and skip empty rows; for demonstration I'll set skip_empty_columns. We can also skip the header row: if you look at our data, the first row is a header, and with the skip_header setting we can tell Logstash to skip that first row. There is a caveat, though: you have to make sure that your column names exactly match the first row of your data; if they don't, Logstash will not skip the header. So whatever is on that header line, the column names you configure have to match it exactly for skip_header to work. Let's configure that: skip_empty_columns as true, and skip_header as true as well. You can also add comments in your pipeline configuration file using the hash symbol, so I'll add one noting that skip_header skips the first line in the source file and that the column names have to exactly match.
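For reference, the filter phase now looks something like this; only the first three column names were called out in the lecture, so the list below is a partial placeholder that you would extend with every column from your header row.

    filter {
      csv {
        separator => ";"                            # our columns are semicolon-separated
        columns => ["rank", "tier", "username"]     # placeholder: list every column, exactly as in the header line
        skip_empty_columns => true
        # skip_header only works when the columns above exactly match the header row
        skip_header => true
      }
    }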
So now we are ready to specify the output. To start off, we'll use a standard output: I'll type output, curly braces, move the closing one down, and inside it I'll type stdout. Inside that I'll set a codec, and the codec will be rubydebug. Now we are ready to run this pipeline, so I'll save the file. To run a pipeline, you go to /usr/share/logstash/bin, where there is a utility called logstash; the -f flag to this utility takes the complete path to your pipeline configuration file, but because we're in the same directory as the configuration file, we can just give a relative path: csv-data.conf, and hit Enter. If your pipeline is configured properly, you will get output like this, with your data split into various fields: for example, the join date goes into its own field, along with the number of problems, medals, and so on. There are also some system-generated fields: a message field, which contains each individual row, a version field, and a timestamp. The @timestamp field is the time at which Logstash ingested this data, and we will use this field to search and explore our data inside Kibana. With this working, we can now specify the Elasticsearch output. I'll press Ctrl+C, which stops the pipeline, and edit my pipeline configuration file again; I can either remove the stdout output or just add another output alongside it. I'll go into insert mode and add another output: elasticsearch and curly brackets, with the closing bracket moved down. Under the Elasticsearch output, the first setting I need to provide is the list of hosts I want to send this data to. I'll type hosts => , and this setting takes an array, so inside double quotes I'll put http://localhost:9200, because Logstash is running locally on the same machine as my Elasticsearch cluster. Next, I can also specify the index on Elasticsearch that this data will be written to, so I'll type index and give it the name csv_index. One thing to note: if I don't create the index on Elasticsearch first, this will automatically create the index on my Elasticsearch cluster for me.
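At this point the output phase of the file looks roughly like this; keeping stdout alongside the Elasticsearch output is optional and just makes testing easier.

    output {
      stdout { codec => rubydebug }              # optional: keep printing events while testing
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "csv_index"                     # created automatically if it does not already exist
      }
    }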
So let's try that out: I'll save this file and run my pipeline again. While this is running, if you look at my Kibana and go into Stack Management and then Index Management, there is no index there yet. Let's wait for the pipeline to finish and see what it has written. It seems there is some difficulty connecting to Elasticsearch, because I've got a message saying Elasticsearch is unreachable, so I'll do a Ctrl+C to shut down this pipeline. The Elasticsearch service itself is running, so let's look at our configuration file again. The reason it failed was a typo, so I'll go into insert mode, fix the typo to localhost:9200, save the file, and run the pipeline again. Once the pipeline has finished processing our data, let's go to Index Management on our Kibana page and reload the indices. Here you can see csv_index: it has been created automatically, and it has 5,000 documents, so I'll dismiss this. Now let's try to explore this data in our Discover tab. I'll click on the three lines at the top left and go to Discover, but here you cannot see any data, because it says that to visualize and explore data in Kibana, you must create an index pattern. So let's look at what an index pattern is. If you remember from previous lectures, an index is the highest-level entity that you can query against in Elasticsearch. Our Logstash pipeline stores our data inside an index, and using the Elasticsearch REST APIs we can query the data in that index. An index pattern, on the other hand, identifies one or more Elasticsearch indices that you want to explore with Kibana; to explore this data in Kibana, we need to create an index pattern. What Kibana does is look for index names that match the specified pattern, where an asterisk is a wildcard matching zero or more characters. For example, if our pattern is csv-*, it will match index names such as csv-1, csv-2, or csv-data. Let's create an index pattern in our Kibana. I have to go into Stack Management, and inside Stack Management go under Kibana into Index Patterns, and create an index pattern. Our index, if I go back to my pipeline, press Ctrl+C, and open the configuration file, is csv_index; so I'll create an index pattern starting with csv, and on the right side you can see that this pattern has matched one source index, which is csv_index.
Now, if we want to use the global time filter, we have to specify a time field, as I've shown you previously. Let me just exit out of this. Whenever a Logstash pipeline runs, it automatically creates a timestamp field called @timestamp, which contains the time at which the file was ingested. Since our data doesn't have any native timestamp field, we will use this field as our time filter field. So in Index Patterns I'll select @timestamp and then create my index pattern. Once the index pattern is created, you can see all the fields inside it, and the little clock indicates that @timestamp is the time field that will be used for filtering. Now let's go back to our Discover tab and see if we can explore this data. As you can see, after creating an index pattern for this index, we're now able to search the data. So remember: an index lets you query the data against Elasticsearch, whereas an index pattern lets you explore that data using Kibana. Now that we're done with this, let me show you how to delete an index pattern and an index. We'll go into Stack Management: first I'll go to Index Management, select my index, click on Manage index and then Delete index; then I'll go to Index Patterns, click on the csv pattern, and delete that too. Next I want to show you how to create an index in Elasticsearch yourself. For that, we'll click on these icons again and go to Management and then Dev Tools. We can use Dev Tools to run API calls against our Elasticsearch cluster, and this time I'm going to use the PUT method to create an index called csv_index, with the number of shards set to one, so I can configure some settings, and I'll also type in all the fields that I want inside this index. I'll click the play arrow, and this will create the index.
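The Dev Tools request is along these lines; the mappings below only show the three columns named earlier as an illustration, so extend them with your own full field list and types.

    PUT /csv_index
    {
      "settings": {
        "number_of_shards": 1
      },
      "mappings": {
        "properties": {
          "rank":     { "type": "integer" },
          "tier":     { "type": "keyword" },
          "username": { "type": "keyword" }
        }
      }
    }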
Now we can go back to our pipeline, and first we have to make sure that the index name in the index setting is exactly the same as the index we've just created. Otherwise, if it's different, Logstash is going to create another index and you will not get the data stored in your index. So after making sure this name matches, I'll escape out of this file and run our pipeline. Once the pipeline has finished processing, let's go back to Kibana, into Stack Management and Index Management first: you can see I've got an index with some storage size, but the document count column says 0. So we'll go into Index Patterns and create an index pattern; again it'll be csv, which matches our index, I'll use the @timestamp field, and create the index pattern. Once this index pattern is created, if I check under Index Management again, I can now see the document count of 5,000. And again, we can explore our data using the Discover tab; I'll minimize this message, and there are 5,000 hits. Now, let's check whether our header row was skipped. I'll go to my second SSH connection and pick a value to search for in the username field. In Discover I'll dismiss this message and query for the username field being equal to the literal header value "username", then hit Update: no results match the criteria, which means the header row was not ingested as data. Now let's pick a valid username from the data instead: I'll remove that, type in the real username, and hit Update, and you can see that the valid values, all the correct data, have been stored in my Elasticsearch cluster, while the header rows were skipped. The key to that was making sure that the columns you define exactly match the columns in your CSV file.
Before I let you go, I want to show you one last thing: how to send data from Logstash to an Elasticsearch instance that has password authentication set up. For this demo, I've created a separate machine, ELK one, with X-Pack security set up for password authentication. If I copy the external IP address of this instance and go to its Kibana web page, it asks me for a username and password. There is a built-in superuser account called elastic, and I'm going to use that to log in to Kibana. Once inside Kibana, I'll click on the three lines at the top left and go to Stack Management, and under Security and Users I can find all the usernames configured on this system; these are all built-in users. What we'll do is use the elastic user to integrate our Logstash with the Elasticsearch instance on ELK one. So I'll click on SSH on ELK two to open an SSH connection. Once the connection is open, I'll type sudo vi /etc/logstash/conf.d/csv-data.conf to edit our configuration file, and inside this file I'll configure the output to go to Elasticsearch on ELK one. First, I'll change the host value from localhost to the internal IP address of ELK one; I'll edit all this, go into insert mode, and type it in. Now I need to provide the username and password I'm going to use to authenticate. So I'll type user and, inside double quotes, elastic, and then type password and, in double quotes, paste the password from my notes. Once I've copied the password in, we can save this file, and now we're ready to run our pipeline.
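The only change from the earlier output is the destination host plus the two credential settings, so the output block ends up looking roughly like this; the IP and password are placeholders, and in practice the password is better kept in the Logstash keystore than in plain text.

    output {
      elasticsearch {
        hosts    => ["http://<internal-ip-of-elk-one>:9200"]   # placeholder internal address of the secured node
        index    => "csv_index"
        user     => "elastic"
        password => "<elastic-password>"                       # placeholder
      }
    }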
So I'll type sudo /usr/share/logstash/bin/logstash -f, copy the path to the configuration file, paste it here, and hit Enter. Once the pipeline has finished processing, we can go to our Kibana instance, and under Stack Management go to Index Management and see that an index called csv_index with 5,000 documents was created. Again, I'll have to create an index pattern: click Create index pattern, type in csv, which matches my index, use the @timestamp field, and click on Create index pattern. To explore my data, I'll go to Analytics and Discover, and here I'll have to change the index pattern from logs-* to csv*; by doing that, you should be able to explore your data. With this, we've come to the end of this lecture. Thank you for watching. Bye.
22. Installing Apache webserver: Hey guys, in this video I will show you how to install the Apache web server on a GCP VM instance. First we will install the software, then we will deploy a new index.html file, after that we will start the Apache web server, and finally we will test access to our sample website. So let's get started. Here on the screen I've got the VM instances console of my GCP platform. I will choose New VM instance from template, and then choose the web server template that we created earlier. Next I will click on Continue. After that, I will change the region to us-west1, then go down and click on Networking. Under Networking, we'll make sure the network tag is webservers, and then we need to change the network interface so it sits in the web servers network, so I'll change that from monitoring to webservers. After that, I'll go down and click on Create. This will create our VM instance, which we will use as a web server.
Now, once the VM is up and running, we can move to Visual Studio Code. Here on the Visual Studio Code console I've got the notes to install a web server on the left-hand side, and on the right-hand side I've got my SSH connection into this web server. First, we will update our system: sudo apt-get update -y. Then I'll clear the screen and type sudo apt-get install apache2 -y and hit Enter. Next, I'll clear out of this and copy the command that creates a new index.html file with some sample code inside it for our website, paste it here, and hit Enter. Once this is done, I can start my Apache service: sudo systemctl start apache2 and hit Enter. Now let's check the status with sudo systemctl status apache2: it's active and running.
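For reference, the steps above boil down to a handful of shell commands; the landing page below is just a stand-in for whatever sample HTML your own notes use.

    sudo apt-get update -y
    sudo apt-get install apache2 -y
    # write a placeholder landing page (replace with your own sample HTML)
    echo "<h1>Sample web server</h1>" | sudo tee /var/www/html/index.html
    sudo systemctl start apache2
    sudo systemctl status apache2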
Now let us head back to our GCP console. I'll copy the external IP of this machine, paste it into the browser, and hit Enter. As you can see, we've got our sample web page up. This is how you can install a web server on a GCP VM instance. Thank you for watching. I will see you in the next one. Bye.
23. Install and configure Metricbeat: Hey guys, we have our Elasticsearch cluster up and ready to work, and now it is time to start using it. In this demo, we will collect system auditing data such as CPU and memory utilization and log data such as system logs, and create some cool dashboards like this one to gain insights into our IT systems. Elasticsearch uses lightweight shippers called Beats that are installed on the target machines and send operational data to the Elasticsearch cluster, and we can use this operational data to create reports and dashboards. One of the most common use cases is feeding system metric data into Elasticsearch and using it to monitor things such as CPU or memory utilization, disk, and so on. System metric data is shipped to Elasticsearch using a beat called Metricbeat, a lightweight shipper that you install on your server; it periodically collects metrics from the operating system and from the services running on it, and ships those metrics and statistics to the output of your choice, for example Logstash or Elasticsearch. To give you an example, say you want to monitor a Linux machine for system metrics such as CPU and memory utilization. You install Metricbeat on the server and enable its system module, which collects all this information and ships it to Elasticsearch. This module lets you monitor a lot of things, which I've listed on the screen; each of these items individually is called a metricset. You can choose to enable either all of them or only the ones you're interested in; the ones you're not interested in need to be commented out in your configuration file. So, in summary, a Metricbeat module defines the basic logic for collecting data from a specific service, such as the Apache HTTP web server: the module specifies details about the service, including how to connect to it, how often to collect metrics, and which metrics to collect. Each module has one or more metricsets, and a metricset is the part of the module that fetches and structures the data. There are a lot of modules you can enable in Metricbeat; in the description of this video I've put a URL to the Elastic documentation where you can browse all the available modules and pick the ones you need based on your system requirements. Next, let's look at how the Metricbeat service works. Metricbeat works in four steps: first, the service is installed and configured on the target machine; second, it collects monitoring information based on the modules that are enabled and the metricsets that are configured; third, this data is stored and indexed inside Elasticsearch; and finally, we can create reports and dashboards on this data using the Kibana UI, or use the default dashboards that ship with Metricbeat. Here's how we'll configure Metricbeat for this demo. On the Apache web server we'll enable the apache module, which provides status for the Apache web server, the system module, which provides system metrics, and the beat-xpack module, which will help us monitor the beats themselves. On the ELK server we'll enable the elasticsearch-xpack module, which will help us monitor Elasticsearch itself, the kibana-xpack module, which will help us monitor Kibana, and the beat-xpack module, which will help us monitor the other beats installed on that server. One thing to note is that we'll be using the same cluster and the same Metricbeat instance to monitor themselves; this is not what you would see in production. A production setup would look more like this, with a separate production cluster and a separate monitoring cluster, because in case of an outage on the monitoring cluster you still have the production services monitored through the production cluster, and if the production cluster dies, you can troubleshoot why it failed using the monitoring cluster. Now let's move on to our lab system and install Metricbeat. Once Metricbeat is installed, the first thing we need to do is enable monitoring collection on the Elasticsearch cluster. So we'll open the Elasticsearch YAML file with sudo nano /etc/elasticsearch/elasticsearch.yml, go right down to the bottom of this file, add a comment for a new section, and name it xpack monitoring. Under it I'll add these two settings: xpack.monitoring.collection.enabled set to true, and xpack.monitoring.elasticsearch.collection.enabled set to true. These two settings enable the monitoring of our cluster; I'll copy them and paste them here.
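In elasticsearch.yml the addition is just these two lines (the standard 7.x self-monitoring collection settings):

    # xpack monitoring
    xpack.monitoring.collection.enabled: true
    xpack.monitoring.elasticsearch.collection.enabled: true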
Now let's save the file. After that, I'll restart my Elasticsearch service with sudo systemctl restart elasticsearch. Once it has restarted, we can install and configure Metricbeat, so I'll type sudo nano /etc/metricbeat/metricbeat.yml. The first setting I'll configure is live reload. What this setting does is watch the module configuration files under the given path, and if there are any changes to those files it reloads them automatically without needing to restart the Metricbeat service. I'll change reload.enabled to true and uncomment reload.period. Next, under the general settings, name lets us specify a custom name for the shipper; by default Metricbeat uses the hostname of the machine the shipper is running on. I'll type in elk-one, and under tags I'll mark that this is the ELK machine, so I'll get rid of the defaults and specify a tag called elk. Under dashboards, we can either have the shipper set up the default Kibana dashboards for Metricbeat by uncommenting this and setting it to true, or load them with the dashboards setup command. Here I'll set it to true, and when we do the web server I'll show you how to do it through the command line. Under Kibana we need to uncomment host, but we can leave it as localhost because the shipper is running on the same machine as Kibana; the same goes for the Elasticsearch output, which can stay as localhost. For the protocol, let's uncomment it and change it from https to http, and under username we specify the elastic user and under password the elastic user's password. Next we configure logging. We'll go to the logging section, uncomment logging.level but leave it at the default, and uncomment logging.selectors as well, leaving the default selectors. After that we add the file logging settings: the first setting sends the logging to a file, then we specify the path for the log file, the name of the log file, how many files to keep, and finally the permissions; I'll just copy all of this and paste it here. Finally, we need to set up monitoring for our Metricbeat instance itself, so I'll uncomment monitoring.enabled and set it to true, then go to the bottom where it says monitoring.elasticsearch and uncomment it. What this setting does is pick up whatever we've specified as the Elasticsearch output and use that as the monitoring output as well. We can save this file and start the Metricbeat service.
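A trimmed sketch of the metricbeat.yml settings touched above is shown below; the hostnames and password are placeholders, and everything else in the file stays at its defaults.

    metricbeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: true
      reload.period: 10s

    name: "elk-one"
    tags: ["elk"]

    setup.dashboards.enabled: true

    setup.kibana:
      host: "localhost:5601"

    output.elasticsearch:
      hosts: ["localhost:9200"]
      protocol: "http"
      username: "elastic"
      password: "<elastic-password>"      # placeholder

    logging.level: info
    logging.selectors: ["*"]
    logging.to_files: true
    logging.files:
      path: /var/log/metricbeat
      name: metricbeat
      keepfiles: 7
      permissions: 0640

    monitoring.enabled: true
    monitoring.elasticsearch:             # left empty: reuse the output.elasticsearch settings above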
So: sudo systemctl start metricbeat, and we can also type sudo systemctl enable metricbeat so it starts at boot. After this, we need to configure and enable some modules. If I go to /etc/metricbeat and do an ls, you'll see there's a directory called modules.d. Inside modules.d, an ls shows that all the modules are present in this directory, but all of them are disabled except the system module. For the ELK server, let me clear the screen; what we need to do is enable the elasticsearch-xpack module, the kibana-xpack module, and the beat-xpack module, and disable the system module. So let's do that: sudo metricbeat modules enable elasticsearch-xpack, after that we can enable kibana-xpack, and we also need to enable beat-xpack, the module that will be used to monitor the other shippers installed on this server. And, if we so choose, we can disable the system module.
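The module management commands used here are the standard Metricbeat CLI ones:

    sudo metricbeat modules list
    sudo metricbeat modules enable elasticsearch-xpack kibana-xpack beat-xpack
    sudo metricbeat modules disable system
    sudo metricbeat modules list    # confirm what is enabled now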
Now let's configure elasticsearch-xpack first: sudo nano elasticsearch-xpack.yml. Here we need to specify the username and password of our remote monitoring user for Elasticsearch. I'll uncomment both of these, go to my files, open the credentials file under my Ansible folder where I've stored all the credentials, copy the username first and then the password, and put them where they need to go, then save the file. Now let's modify kibana-xpack.yml and do the same. I just noticed that I disabled beat-xpack instead of disabling system, so I'll enable beat-xpack again and disable the system module. Now let me configure beat-xpack: here we need to specify the beats system user, so I'll copy the user info for the beats_system user, uncomment the lines, paste the credentials, and save the file. Metricbeat should now be configured on my ELK server, so I can go to the GCP console, copy the public IP address of my ELK server, and open it in the browser on port 5601. From the Kibana homepage I'll click on the menu at the top left and go into Stack Monitoring; if I've configured Metricbeat correctly, I should start seeing some data about my stack. Here I need to click on the demo ELK cluster, and as you can see, I've got monitoring information for my Elasticsearch, Kibana, and Beats. Let's click on Beats and then on elk-one: you can see I've got my event rates. I can go back to demo ELK and click on Overview under Elasticsearch to look at the health of my Elasticsearch cluster: search rate, search latency, indexing latency, and indexing rate. I can click on Nodes and look at my nodes: I've got one node, it is in green status, and I can see its CPU load and free disk space. I can also look at my indices; I've just got one index, for Metricbeat. Then I can go to Kibana and click on Overview to look at Kibana: client requests and response times, memory usage, the number of instances, and maximum response time; and if I click on Instances, I can see the Kibana instances I've got as well. That is the setup for our ELK server. Next, we have to configure our web servers. From the GCP console I'll copy the public IP address of web server one and open a new terminal using that public IP address. On the left-hand side I've got the installation and configuration instructions for a web server, and on the right-hand side my SSH connection into the web server. The first step is to install the public signing key for Elasticsearch, so let's do that. After that we install the apt-transport-https package, and then we save the repository definition. Now we can do an update and install Metricbeat: sudo apt-get update, then sudo apt-get install metricbeat. Once Metricbeat is installed, let's clear the screen and start configuring it. The configuration is similar to what we did for the ELK server. We configure live reload first; the name will be webserver; I will not set up the default Kibana dashboards from the config file this time, because I want to show you how to do it through the command line. For the Kibana host I need to put in the internal IP address of the ELK instance, so I'll go to the GCP console, copy it, and paste it here, and do the same for the Elasticsearch output. Then we enable logging, enable monitoring, and finally uncomment monitoring.elasticsearch and save the file. Now let's start and enable the Metricbeat service with systemctl, and check its status to confirm it is running. Next we'll start enabling modules. The first module we'll enable on this machine is the apache module; after that we enable the beat-xpack module. Let's look at the list of enabled modules on this machine, which you can do with sudo metricbeat modules list: there are three modules enabled, apache and beat-xpack, which we've just enabled, and system, which is enabled by default. Let's look at the module configurations. I'll clear the screen first, then sudo nano /etc/metricbeat/modules.d/apache.yml. Here we'll uncomment the status metricset. We don't need to put in a username and password, because our default web server does not have any authentication. What I would recommend is changing the value under hosts from 127.0.0.1 to either the IP address or the domain name of the web server; this helps us identify this web server in our dashboards. So I'll remove all this, copy the internal IP address of my web server, go back to the terminal, and paste it here. Now I can save this file.
username and password so that we can authenticate
against the beats. This would be the
beach system user. This would be the password
for V8 system user. Now this bit is used to monitor other buttes
on the system. But since we do not have anything installed
on the system yet, this will not do anything. So let's just get
out of this file. I have promised earlier. Now I'll show you how to configure dashboards
through command line. To do that, you need to type in sudo metric bid set-up,
dash, dash, dashboard. Now once the
dashboards are loaded, you will see a message that
says loaded dashboards. Now let's head back to
our Kibana instance. Here. Let's first go to clusters and demo ELK
understand monitoring. Let's just go down to beets. Now, you can see that
we've got two bits, ELK one and the other one
should be an observer. So this data is from the internal monitoring
above the beach itself. Now, to get the information
about the suburb, you need to go to matrix. Here. You'll see that we've
started to get metrics, details about our web servers. So we'll click on it and
click on host matrix. You can see that we've
started together all the details about
our web servers. But after this, let's move on
to dashboards under Kibana. Look at a sample
dashboard produced by metric bait for
an Apache server. Here under dashboard search box, I will type in Apache. Then I'll open up
the dashboard named metric with Apache overview ECS. Now this is a sample dashboard. We can get stuff like the host
names in our environment, uptime of our servers, total access bytes,
busy and idle workers, different types of connections, CPU lobe, network node, etc. Now I'll let you guys configure a web
server to with this, we have come to the
end of this video. Thank you for watching. I will see you in the next one.
24. Install and configure Filebeat: Hey guys, let's install and configure Filebeat in this lecture. Filebeat is a lightweight shipper for forwarding and centralizing log data. It is installed as an agent on your servers; it monitors the log files or locations that you specify, collects log events from those files or locations, and forwards them to either Elasticsearch or Logstash. It has two components: the first is called the harvester and the second is an input. A harvester is responsible for reading the contents of a single file, line by line, while an input is responsible for managing the harvesters and finding all the sources to read log data from. Now let's look at how Filebeat works. Once you start the Filebeat service, it starts one or more inputs that look in the locations you've specified for log data. For example, input one might watch /var/log and any log file inside /var/log. For each log file it discovers in that location, it starts a new harvester. The harvester reads the log file line by line and hands that data to libbeat, which aggregates the events and sends the aggregated data to the output you specify, for example Logstash or Elasticsearch. In this demo, we'll first configure Filebeat on our ELK server, in six steps: first we'll validate that the logs are set up correctly on the ELK server; then we'll install and configure the Filebeat service; then we'll configure Metricbeat to monitor our Filebeat service, so we have some monitoring in place as well; then we'll start and enable the Filebeat service; and finally we'll enable and configure the ELK modules for Filebeat, so Elasticsearch, Kibana, and system. Secondly, we'll configure Filebeat for our web servers. The steps are similar; the only difference is that the modules we'll configure on the Filebeat service for the web servers will be apache and system.
Let us get started. Here on my lab I've got the steps to install and configure Filebeat on the ELK server on the left-hand side, and the SSH connection into my ELK server on the right-hand side; I'll attach these steps to the description of this video so you have access to this document as well. On the ELK server, the first step is to validate that logging is configured for our Elasticsearch instance, so I'll type sudo nano /etc/elasticsearch/elasticsearch.yml. In this file I'll go to the paths section and verify that path.logs is set to /var/log/elasticsearch and is not commented out; if it is commented out for you, you'll have to uncomment it and make sure path.logs is set. Once we've verified that, we can exit out of this. Next, let's go to our Kibana configuration: sudo nano /etc/kibana/kibana.yml and go down to logging. By default Kibana logs to standard output, so we'll have to change this. I'll make some space, copy the settings from my notes, and paste them in. What these settings do is first set the logging destination to a kibana log file under /var/log, then enable log rotation and tell Kibana how many log files we want to keep. After we've done that, we can save this file. Now, a problem with this Kibana configuration is that Kibana cannot create the kibana file under /var/log by itself. So we'll have to touch /var/log/kibana ourselves, with sudo, to create the new log file, and then change the ownership of this log file so it is owned by the kibana user and group: sudo chown kibana:kibana /var/log/kibana. Now this file is owned by the user kibana and the group kibana, and we can restart the Kibana service: sudo systemctl restart kibana. Let's check the status of Kibana, and then see whether the kibana file we've just created in /var/log has started receiving log data from the Kibana instance: tail -f /var/log/kibana. As you can see, we've started getting log data into our Kibana log file, so let's exit out of this and clear the screen. The next piece of the ELK stack installed on this server is Metricbeat, so let's double-check that Metricbeat is configured with logging: sudo nano /etc/metricbeat/metricbeat.yml, go to the logging section, configure logging in this file by copying the settings across, save this file as well, and restart the Metricbeat service: sudo systemctl restart metricbeat, then check that it is running. Now we've got logging configured on our ELK node, and we can install the Filebeat service itself: sudo apt-get install filebeat. Once it is installed, we can open filebeat.yml and configure the Filebeat service: sudo nano /etc/filebeat/filebeat.yml. In this file we'll first go down and configure live reload: under filebeat.config.modules, change reload.enabled from false to true, uncomment reload.period, and leave it at the default of ten seconds. Then we go down to the general section and give it a tag; since this is the ELK node, I'll just give it the tag elk. Next, let's go down to dashboards and enable the Filebeat dashboards: I'll uncomment setup.dashboards.enabled and change the value to true. After that, under the Kibana section, I'll uncomment the Kibana host URL and leave it as localhost:5601, because this Filebeat instance is installed locally on the same machine where Kibana is installed. The output is Elasticsearch, and I'll leave the hosts setting as localhost:9200, change the protocol to http, set the password to the elastic user's password, and uncomment both username and password. After this, I'll go to the logging section and configure logging for Filebeat: uncomment logging.level and leave it at the default of debug, uncomment logging.selectors and leave it as the default star, then copy the file logging settings and paste them here. Finally, I can set up monitoring of Filebeat through Metricbeat: I'll go down in this file and uncomment http.enabled and http.port. What this does is expose Filebeat's own stats over HTTP so that Metricbeat can monitor Filebeat. I'll paste this here, and now I can save this file.
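A condensed sketch of the filebeat.yml changes from this section is below; the password is a placeholder, and port 5067 is simply the value chosen in this course for Filebeat's HTTP stats endpoint.

    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: true
      reload.period: 10s

    tags: ["elk"]

    setup.dashboards.enabled: true

    setup.kibana:
      host: "localhost:5601"

    output.elasticsearch:
      hosts: ["localhost:9200"]
      protocol: "http"
      username: "elastic"
      password: "<elastic-password>"    # placeholder

    logging.level: debug
    logging.selectors: ["*"]

    # expose Filebeat's own stats over HTTP so Metricbeat's beat-xpack module can scrape them
    http.enabled: true
    http.port: 5067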
Since we've enabled HTTP monitoring of Filebeat on port 5067, let's check whether the beat-xpack module is enabled in Metricbeat. To do that, run sudo metricbeat modules list, which lists all the modules and shows which ones are enabled. You can see that we don't have the beat-xpack module enabled, so let's enable it first, because this module is what monitors the other beats installed on the server: sudo metricbeat modules enable beat-xpack. Now that this module is enabled, we have to configure it to watch port 5067. Let me clear everything first, then type sudo nano /etc/metricbeat/modules.d/beat-xpack.yml. In this file we have to change the URL under hosts from localhost:5066 to localhost:5067, because that is the port where Filebeat is exporting its data. Under username and password, we have to use the beats_system credentials, so I'll change the username to beats_system and the password to the beats_system user's password, and save the file. Finally, verify that the Metricbeat service is running with sudo systemctl status metricbeat. Now we can start the Filebeat service: sudo systemctl start filebeat, then check the status of the Filebeat service; the service is running. Next, we have to enable the modules on this host. Let me clear everything first, then type sudo filebeat modules enable elasticsearch to enable the Elasticsearch module, then sudo filebeat modules enable kibana to enable the Kibana module, and lastly sudo filebeat modules enable system to enable the system module.
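Those three enables can also be done in one go, and it is worth listing the modules afterwards to confirm:

    sudo filebeat modules enable elasticsearch kibana system
    sudo filebeat modules list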
Now we'll have to configure all three of these modules. Let's do sudo nano /etc/filebeat/modules.d/elasticsearch.yml. In this module you can see that it has some paths; by default it picks the paths based on your distribution, so because we haven't changed them to custom paths, we don't need to edit anything here. If you have changed the paths in your installation, you'll have to come in here and make those configuration changes. I'll get out of this file. It's the same for the kibana and system modules: they take all the default paths based on the distribution, and only if you've changed the paths do you need to come in and make changes. Now let's go to our Kibana instance: I'll copy the public IP address, open it on port 5601, and type in the elastic user's name and password. First, if Filebeat is being monitored correctly, we should see it come up under Stack Monitoring, so go to the menu and then Stack Monitoring. Let's click on the demo ELK cluster and go down: you can see that we've got two beats, and there is Filebeat under the name elk-one, so let's click on it. This is our Filebeat being monitored correctly, and you can see that we've got some event rates, throughput, and so on. After this, if you look at the Logs section for the cluster, you can see that we've started gathering logs: we've got info, warning, and other log entries from our Elasticsearch. This means our Filebeat service is able to capture logs from the ELK cluster. Now let's move on to the web server: I'll exit out of here, copy the IP address of web server one, and SSH in with that IP address. Next, I'll close the sections of my notes for the ELK installation, open the ones for the web server, and view them in preview mode. On our web server, we first have to validate that logging is configured for Metricbeat, so let's do sudo nano /etc/metricbeat/metricbeat.yml, go down to the logging section, uncomment logging.level debug and the star selector, leaving both as the defaults, copy the file logging section across, and save this file, then restart Metricbeat. Once the Metricbeat service comes back up (check its status first), we can install Filebeat: let's clear the screen, and here I can type sudo apt-get install filebeat. After this, we can configure our Filebeat service: clear the screen and type sudo nano /etc/filebeat/filebeat.yml. The steps are the same, so we'll go right down and configure live reload first. Next, let's add a tag; I'll keep the tag as webtier, because this is a web server. Then set setup.dashboards.enabled to true. After this, let's uncomment the Kibana host; for this URL we'll use the internal IP address of our Kibana server, so I'll copy it and change localhost to the internal IP address of ELK one. Then I'll go down to the Elasticsearch output section and do the same thing for the hosts URL, change the protocol to http, and enter the password for the elastic superuser. I'll go down to the logging section and make sure logging is configured, and finally go right down and enable Metricbeat monitoring of this service. Save this file, and check whether the beat-xpack module is enabled for Metricbeat: sudo metricbeat modules enable beat-xpack, then configure this module with sudo nano /etc/metricbeat/modules.d/beat-xpack.yml. In this file I change the host port to 5067, because that is where Filebeat is configured to export its stats, and I use the beats_system username and password, then save the file. Now let's check the status of Metricbeat: the Metricbeat service is working. Next, start Filebeat with sudo systemctl start filebeat, then check the status of the Filebeat service to verify that it has actually started. Once we've verified that the service has started, we can clear everything and enable the modules: sudo filebeat modules enable apache, and then enable the system module as well. Once this is done, we can go back to our Kibana instance and refresh Stack Monitoring: if we go down, we'll see that we've got four beats now. We'll click on Beats, and as you can see, the Filebeat for our web server has come up as well. We can check the status of this beat by clicking on the web server, and it gives us event rates, throughput, failure rates, and so on. Now we can have a look at one of the sample dashboards that Filebeat provides for an Apache web server. To do that, let's go to the menu and then Dashboard, search for Apache, and click on the dashboard named [Filebeat Apache] Access and error logs ECS. This is a sample dashboard that Filebeat creates using the access and error logs from an Apache service; you can use it to look at the unique IP addresses that accessed your web server, the response codes over time, a browser breakdown, and so on. This is how you can install and configure Filebeat on an ELK server and a web server. What I want you to do now is go and install and configure Filebeat on the second web server. What you should expect is that once you do that, you should be able to get information like this for that web server too, and if you go to Stack Monitoring you'll have six beats: one Metricbeat and one Filebeat for your second web server as well. With this, we have come to the end of this video. Thank you for watching. I will see you in the next one. Bye.
25. Install and configure Heartbeat: Hey guys, let's install and configure Heartbeat in this video. Heartbeat allows you to periodically check the status of your services and determine whether they're available or not. For example, think that you have a web server. If you log into the console of that web server and do a service check on the HTTP service, the service comes back as running, but when you try to load the webpage, the page doesn't load and you get an error, which means your users are unable to use your website for whatever they were trying to use it for. You can use Heartbeat to monitor for that sort of scenario. What Heartbeat will do is check for a status code back from that web server, and if it isn't getting that status code back, it will show the web service as unresponsive or down.
There are three different types of monitors that you can configure inside Heartbeat. First, you can configure an ICMP ping monitor. It simply pings a particular IP address or hostname; if it gets a successful ping it shows the target as up, and if it's unsuccessful it shows that monitor as down. Next, you can configure a TCP monitor. This monitor allows you to connect to a particular endpoint over TCP, and you can also modify it to check that the endpoint is up by sending a custom payload and validating the response from that endpoint. Third, you can configure an HTTP monitor to connect via HTTP to your endpoints. This monitor can optionally be configured to check for specific responses or status codes from your endpoints.

Next, there are five steps to install Heartbeat. First, you have to install the heartbeat-elastic package. After that, we'll configure the Heartbeat service. Third, we'll configure Metricbeat to monitor our Heartbeat instance. Fourth, we'll configure the monitors, which are the HTTP, ICMP, and TCP monitors. Finally, we'll start and enable the Heartbeat service. So let's move on to the lab.

Here on the lab system I've got the instructions to install and configure Heartbeat on the ELK server on the left-hand side, and I've got my SSH connection into my ELK server on the right-hand side. One thing to note is that we'll be installing Heartbeat only on the ELK server in this demo. To install Heartbeat, let's type in sudo apt-get update, then sudo apt-get install heartbeat-elastic, and hit Enter. Once Heartbeat is installed, let's start configuring the Heartbeat service. We need to type in sudo nano /etc/heartbeat/heartbeat.yml. In this YAML file, let's first configure live reloading: change reload.enabled to true, and we'll leave the default reload period of five seconds. After this, let's comment out all the inline monitors, because we'll be using the monitors path to configure our monitors; the path is /etc/heartbeat/monitors.d, and any YAML file inside that path will be picked up. Once you've commented out all the inline monitors from this heartbeat.yml file, let's go to tags and add a tag; here we'll specify the tag as elk. After that, let's move on to Kibana, and under the Kibana section we'll uncomment the host entry but leave the default localhost:5601. Now let's move on to the Elasticsearch output: we leave the URL at the default localhost:9200, change the protocol to https, and provide the authentication details. After that, let's move on to the logging section: we'll uncomment logging.level and logging.selectors but keep the default values for both, and then I'll copy the configuration from my instructions that sends logging to a file. Once that's in place, we can configure HTTP monitoring for our Heartbeat instance. I'll copy these two values; notice we're using port 5069 for monitoring our Heartbeat instance, and we'll add this port into our beat-xpack configuration file next. So let's save this file for now. Now let's go to our /etc/metricbeat/modules.d/beat-xpack.yml file and configure monitoring on port 5069: here I'll add a comma and then the URL http://localhost:5069, and I can save this file.
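Taken together, the heartbeat.yml changes look roughly like this sketch; the password placeholder and the exact tag spelling are assumptions, everything else mirrors the steps we just walked through:

```yaml
# /etc/heartbeat/heartbeat.yml -- settings touched in this demo (sketch)
heartbeat.config.monitors:
  path: ${path.config}/monitors.d/*.yml
  reload.enabled: true
  reload.period: 5s
# inline heartbeat.monitors entries commented out; monitors.d is used instead
tags: ["elk"]
setup.kibana:
  host: "localhost:5601"
output.elasticsearch:
  hosts: ["localhost:9200"]
  protocol: "https"
  username: "elastic"
  password: "<elastic-password>"
logging.level: debug
logging.to_files: true
http.enabled: true     # lets Metricbeat's beat-xpack module scrape Heartbeat
http.port: 5069
```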
Now, first check that Metricbeat is running: sudo systemctl status metricbeat; it's running. Let's also double-check that beat-xpack is enabled: sudo metricbeat modules list, and you can see that beat-xpack is enabled, so that's all good. Now let's clear everything and start our Heartbeat instance: sudo systemctl start heartbeat-elastic. Let's first check the status of the service and make sure that it's running. And now it's time to configure our monitors. To do that, let's first go to /etc/heartbeat/monitors.d. Here you'll see that there are some sample monitor files available; we'll use these and customize them to create our own monitor files. First I'll type in sudo cp sample.icmp.yml.disabled icmp.yml, and then I'll do a nano on icmp.yml to configure our ICMP monitor. For our ICMP monitor we first have to change the ID and name: for ID I'll use icmp-status, and for name I'll use ICMP Status Check. Let's uncomment enabled to enable the monitor. We'll leave the schedule at the default of every five seconds, so the hosts that we specify under the hosts array are pinged every five seconds. We have to specify a list of hosts here. What I'll do is first specify the internal IP address of our web server one, 192.168.2.9 in my lab, then a comma and a space, and I'll copy the IP address of my web server two here as well. I'll also add a publicly accessible external IP address, 8.8.8.8, which is pretty much always up. Next, we'll leave the mode at its default; you can disable IPv6 here, but let's leave it as default. You can also specify a timeout and a wait value. We'll go to the tags section next and add a tag called web_tier, and now we can save this file.
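The finished monitor file ends up looking something like this; it is a sketch, the IPs are this lab's and the 8.8.8.8 entry is my reading of the always-up public address mentioned in the recording:

```yaml
# /etc/heartbeat/monitors.d/icmp.yml (sketch)
- type: icmp
  id: icmp-status
  name: ICMP Status Check
  enabled: true
  schedule: '@every 5s'
  hosts:
    - "<web-server-1 internal IP>"
    - "<web-server-2 internal IP>"
    - "8.8.8.8"          # public, always-up address for comparison (assumed)
  tags: ["web_tier"]
```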
Once this is done, let's move on and configure our TCP monitor. To do that, let's just copy the sample TCP monitor file: sudo cp sample.tcp.yml.disabled tcp.yml, and now let's modify this tcp.yml. Here, first we have to specify the ID again; let's make it ssh-status, because we'll use this monitor to check the status of the SSH connection, and under name type in SSH Status Check. After this, enable the monitor and leave the schedule at the default five seconds. Under hosts, I'll specify the internal IP address of my web server one, and specify the port: the SSH port is 22. I can either specify the port as part of the host entry itself, or specify a list of ports under the ports option if the host entry does not contain a port number; those are the two places you can specify ports. We leave all the other values as default. One thing to note is that you can use check.send and check.receive to validate custom payloads. We'll move on to the tags section, add the web_tier tag, and I'll save this file now.
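As a sketch, the TCP monitor looks roughly like this:

```yaml
# /etc/heartbeat/monitors.d/tcp.yml (sketch)
- type: tcp
  id: ssh-status
  name: SSH Status Check
  enabled: true
  schedule: '@every 5s'
  hosts: ["<web-server-1 internal IP>"]
  ports: [22]
  # check.send / check.receive can be used to validate a custom payload
  tags: ["web_tier"]
```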
Next, let's copy the HTTP monitor: sudo cp sample.http.yml.disabled http.yml, and modify it with sudo nano http.yml. Here, for ID I'll specify web-status, the name can be Web Service Check, and then enable the monitor. Under hosts I'll add my Elasticsearch and my Kibana URLs. After that, I have to specify authentication, because we've configured X-Pack security on our system, so I'll put in the username elastic and the password for the elastic user. We can use the other settings to specify a custom status-code check or a custom payload on this request as well, but we'll leave those as default and move on to the tags portion, specify the tag as web_tier, and uncomment it. And now we can save this file as well.
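Here is the shape of that HTTP monitor as a sketch; note that because Elasticsearch sits behind a self-signed certificate, a real setup may also need the monitor's ssl settings pointed at your CA, which the video doesn't cover:

```yaml
# /etc/heartbeat/monitors.d/http.yml (sketch)
- type: http
  id: web-status
  name: Web Service Check
  enabled: true
  schedule: '@every 5s'
  hosts: ["https://localhost:9200", "http://localhost:5601"]
  username: "elastic"
  password: "<elastic-password>"
  # check.request / check.response can assert specific methods, bodies or status codes
  tags: ["web_tier"]
```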
Now, before we move on, let's check the status of our Heartbeat service once again; it's running. So now let's move on to our Kibana web page. The first thing to check is that Heartbeat appears in Stack Monitoring, so let's go to Home, then Stack Monitoring. Here, let's click on the demo cluster and go down to Beats, and as you can see, we've got Heartbeat listed. We can click on Beats and then Heartbeat to check the status of our Heartbeat instance. Now, for the actual monitors, we need to go to our homepage and then Uptime under Observability. Here you can look at all the monitors that are up or down, and by the looks of it, all our monitors are up right now. You can use the up and down filters to filter on a specific status. On the top right you can specify the time range, so for example you can ask for data from 15 minutes, hours, or days ago, and you can specify either a relative amount of time or an absolute range. You can also filter by port: we've got three ports configured, 9200, 22, and 5601. For our checks you can filter by scheme, ICMP, HTTP, or TCP, to narrow down the endpoints, or you can look at tags: we've got the elk and web_tier tags, so let's click on web_tier. If you do that, you will only see the monitors specific to that particular tag. With this, we come to the end of this video. Thank you for watching. I will see you in the next one.
26. Install and configure Auditbeat: Hey guys, in this video let's install and configure Auditbeat. Auditbeat allows you to monitor user activity and processes on your Linux systems. It communicates directly with the Linux auditing framework and sends the data it has collected to Elasticsearch in real time. There are four steps to install and configure the Auditbeat service. First, we'll install Auditbeat. Second, we'll configure Auditbeat and the three modules available inside the Auditbeat service; currently these modules are auditd, file integrity, and system. After that, we'll configure Metricbeat to monitor the Auditbeat service. Finally, we'll start and enable the Auditbeat service. One thing to note is that there is no live-reload option in Auditbeat. Before we move to the lab system, let's have a look at each of these modules individually.
First, let's look at the auditd module. This module receives audit events from the Linux audit framework, which is part of the Linux kernel. What it does is establish a subscription to the kernel to receive audit events as they occur. One thing to note is that you might need to stop other services on your system that also talk to the audit framework, such as the auditd daemon itself, for this module to work. There are two different types of rules that you can configure inside the auditd module. The first is a file system rule. This rule allows the auditing of access to a particular file or directory; think of it as putting a watch on that file, so that you are notified when the file is accessed or modified. The syntax of this rule is -w, the path to the file you need to put the watch on, -p, the permissions to watch for, such as read, write, execute, or attribute change, and -k, a key name. The next type of rule you can configure is a system call rule. This rule allows the logging of system calls that any specified program makes. The syntax of this rule is -a, then an action, which could be always or never, and a list, which could be task, exit, user, or exclude, followed by -S and the system call, which is specified by its name. Finally, you can use -F with a field and value to further restrict the rule to match events based on specific things such as architecture, group, or process ID.
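To make that syntax concrete, here are two generic examples in that format; these are illustrative rules, not the exact ones the course adds later:

```
# File watch: alert on writes or attribute changes to /etc/passwd, tagged "identity"
-w /etc/passwd -p wa -k identity

# System call rule: always log 64-bit rename/unlink calls on exit, tagged "delete"
-a always,exit -F arch=b64 -S unlink,unlinkat,rename,renameat -k delete
```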
Next, let's have a look at the file integrity module. The file integrity module sends events when a file is changed, which means when a file is created, updated, or deleted on disk. It does that by creating a subscription with the OS to receive notifications of changes to specific files or directories. When you first start up this module, it performs an initial scan of all the paths that you've configured, and then it monitors those paths to detect any changes since you last ran the module. It uses locally persisted data in order to send events only for new or modified files.

After this, let's have a look at the system module. The system module collects various security-related information about a system. Its datasets send both periodic information, for example the state of currently running processes, and real-time changes, for example when a process starts or stops. Each dataset sends two different types of information: state and events. State information is sent periodically and on system start, and can be controlled using a configuration called state.period. Event information is sent as the event occurs, for example when you start or stop a process. For event information, some datasets use a polling mechanism to retrieve the data, and the frequency of those polls can be controlled by the period configuration parameter.
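In auditbeat.yml that split between event and state reporting shows up roughly like this, a sketch of the defaults just described:

```yaml
# Auditbeat system module (sketch of the defaults described above)
- module: system
  datasets:
    - package          # event-style dataset: package installed/removed/updated
  period: 2m           # polling frequency for event datasets

- module: system
  datasets:
    - host
    - login
    - process
    - socket
    - user
  state.period: 12h    # how often the full state is re-sent
```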
Now let's move on to the lab system. Here on my lab system I've got the instructions to install and configure Auditbeat on the ELK server on the left-hand side, and on the right-hand side I've got my SSH connection into the ELK server. First we have to install Auditbeat; to do that, let's type in sudo apt-get install auditbeat. One thing to remember is that we are able to do this because we've already done all the prerequisites on this system, adding the Elasticsearch repository, the signing key, and the apt-transport-https package, back when we did the installation of Elasticsearch. If you haven't done that, you will not be able to do this step, so you'd have to go back to the step where we installed Elasticsearch and do those prerequisites on this machine first. Now, after Auditbeat is installed, we can type in sudo nano /etc/auditbeat/auditbeat.yml to open up the Auditbeat configuration file. In this file, the first thing we'll do is configure the modules, so first we'll configure the auditd module. Let's go down and you'll see there's a section called identity changes. This section allows you to put in file-watch rules that watch files such as group, passwd, and gshadow. You can either uncomment these to enable those rules, or add a custom rule; let's add a custom rule to put a watch on our Elasticsearch and Kibana configuration files. I've got the rule here: -w, then the path to the file, which is the path to elasticsearch.yml, -p for permissions, with the permissions being wa, and then -k with the key changes. So let's just copy these two rules and paste them here, and let's also enable the default identity rules. We can also watch for unauthorized access attempts to our system, so let's use the default rules for that as well. Next, let's configure the file integrity module. Under file integrity you'll see that some paths are already configured, such as /bin, /usr/bin, /sbin, /usr/sbin, and /etc. Let's add our custom paths for Elasticsearch and Kibana, so copy and paste. Now, after this, what I want to show you next is that there might be files that you do not want to monitor under the paths you specify in the file integrity module. You can exclude those files by using the exclude_files setting, so I'll just copy this setting and paste it inside this file. What this setting does is look for a filename, or a pattern for that filename, and exclude those files from being watched. I've used an expression that excludes the reference files, so the *.reference.yml files are excluded from being monitored.
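Assembled, the module section on the ELK server looks roughly like the sketch below. The exact default rules and the exclude expression that ship with Auditbeat differ slightly from this, so treat the rule bodies and the regex as assumptions:

```yaml
# /etc/auditbeat/auditbeat.yml -- module section on the ELK server (sketch)
auditbeat.modules:
- module: auditd
  audit_rules: |
    # watch the Elasticsearch and Kibana configs for writes/attribute changes
    -w /etc/elasticsearch/elasticsearch.yml -p wa -k changes
    -w /etc/kibana/kibana.yml -p wa -k changes
- module: file_integrity
  paths:
    - /bin
    - /usr/bin
    - /sbin
    - /usr/sbin
    - /etc
    - /etc/elasticsearch
    - /etc/kibana
  exclude_files:
    - '\.reference\.yml$'   # skip the *.reference.yml files (regex is an assumption)
```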
Next, let's have a look at the system module. There are two types of information that you can get out of the system module. First is event information: this could be something like which package has been installed on your system, or removed, or updated. The frequency at which this information is collected can be modified using the period setting; currently the default is set to two minutes, so let's leave it at that. Second is state information: state information can be collected for things such as host, login, process, socket, and user by default, and it is collected every 12 hours by default, but this can be changed by modifying the value of state.period. We'll leave it at the default. Now, we leave the rest of the settings as default and move on to the general settings. First, let's configure a tag: under tags, I'll uncomment it and add a tag called elk. Next, under dashboards, I'll enable the default dashboards. After this, let's move on to the Kibana section; here let's uncomment the host URL. After this, let's move on to the Elasticsearch output, uncomment protocol and change the protocol value to https, and then add the elastic user's authentication details. Now, after this, let's configure logging: I'll uncomment logging.level but leave it at the default of debug, uncomment logging.selectors and again leave it at the default of all selectors, and then configure logging to go to a file by copying this section and pasting it here. After this, let's scroll down and configure the HTTP monitoring, so let's copy these settings, http.enabled and http.port. For Auditbeat we'll configure the monitoring port to be 5068 because, if you remember, 5067 is already being used by Filebeat in this demo. Now we can save this file. After this, let's first check that the beat-xpack module is enabled in Metricbeat, so sudo metricbeat modules list, and you can see that beat-xpack is enabled. After that, let me just clear everything. Now let's edit the beat-xpack module configuration file, so sudo nano /etc/metricbeat/modules.d/beat-xpack.yml. In this file, let's add the monitoring URL for Auditbeat to the hosts: after the monitoring entry for Filebeat I'll type in http://localhost:5068, and save this file.
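The hosts line in beat-xpack.yml therefore grows by one entry each time we add a beat; on this server it now looks roughly like this. The 5066 entry is the module's usual default for Metricbeat itself, so treat that, and the credential placeholders, as assumptions:

```yaml
# /etc/metricbeat/modules.d/beat-xpack.yml (sketch)
- module: beat
  xpack.enabled: true
  period: 10s
  hosts: ["http://localhost:5066", "http://localhost:5067", "http://localhost:5068"]
  username: "beats_system"
  password: "<beats_system-password>"
```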
Now after this, we can start Auditbeat: sudo systemctl start auditbeat. Let's check the status of the Auditbeat service: it has failed for some reason, so let's investigate. A good first step is to look at the configuration we've entered and make sure the syntax is correct, so sudo nano /etc/auditbeat/auditbeat.yml. It looks like the exclude_files setting is configured incorrectly, so let's just fix the spacing here so that it follows the YAML syntax, save the file, and try to restart our Auditbeat service: sudo systemctl start auditbeat. Let's do a status again and see if it has come up. It looks like that was the issue, and our Auditbeat service is now running. Let's move on to our Kibana instance to see some of the dashboards it has created by default for us. Here, let's first go to Stack Monitoring, demo ELK, and go down to Beats, and you can see that we've got the Auditbeat service here. We can click on Beats and then click on the Auditbeat entry to look at some information about Auditbeat. Next, we can go to Home and then Dashboard, and here we can open one of the Auditbeat dashboards; let's just open up the file integrity dashboard. This is a sample dashboard that we get out of Auditbeat: you can see that the initial scan count was 1043, the time the initial scan was done, the owners and groups for our files, the files most changed by event count, and some file event summaries.
After this, let us move on and quickly configure Auditbeat for our web server. Here on the lab system I've got the SSH connection into my web server, so I'll type in sudo apt-get install auditbeat. After this, let's configure it quickly: sudo nano /etc/auditbeat/auditbeat.yml. Now let's add the file watches. Note that on the web server we don't have Elasticsearch and Kibana YAML files to watch, so I'll copy the same rule and modify it for my metricbeat.yml and filebeat.yml files instead: I change the path from the Elasticsearch config to /etc/metricbeat/metricbeat.yml and keep the key as changes, then copy and paste it again and change the path to the Filebeat config, /etc/filebeat/filebeat.yml. Another important file that you might want to watch on this server would be your Apache web server files, so we can copy the same rule once more, change the path to /var/www/html/index.html, and change the key for that one as well. After this, let's move on to the file integrity module; here I want to add /var/www/html. We don't need to make any changes to the system module.
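On the web server the module section therefore ends up roughly like this; the key name on the index.html watch is garbled in the recording, so the one below is just a placeholder:

```yaml
# auditbeat.yml on the web server -- module section (sketch)
- module: auditd
  audit_rules: |
    -w /etc/metricbeat/metricbeat.yml -p wa -k changes
    -w /etc/filebeat/filebeat.yml -p wa -k changes
    -w /var/www/html/index.html -p wa -k web_changes   # key name assumed
- module: file_integrity
  paths:
    - /var/www/html
```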
Now let's move on to tags under the general settings. Here, uncomment the tags setting and add the tag web_tier, then enable the default dashboards. Now let's go down to the Kibana host and add the IP address of my ELK server, 192.168.1.34, and let's do the same for the Elasticsearch output: change the protocol to https and add in the password, then go further down and add the logging configuration. Finally, let's configure the X-Pack HTTP monitoring. Now let's save this file, and next add the monitoring information inside beat-xpack: sudo nano /etc/metricbeat/modules.d/beat-xpack.yml, and in this file add a comma, a space, and http://localhost:5068. Now we can save this file as well. Let's start Auditbeat, so sudo systemctl start auditbeat, and check the status of the Auditbeat service: sudo systemctl status auditbeat; it's running. Let's move back to our Kibana instance. Here, let's go to Stack Monitoring first and click on Beats, and you can see that we've now got two Auditbeats. You can also go to Dashboard and choose a different dashboard this time; let's choose the system overview dashboard. This dashboard gives you an overview of your environment, such as the number of hosts in your environment, login count, login actions, user changes, host distribution, process starts and stops, sockets opened and closed, package changes, etc. You can also see a list of your system events on the bottom right. What I want you to do after this is go and install Auditbeat yourself on web server two, and also try to explore the other dashboards available for Auditbeat to see what sort of information you can get out of this service. With this, we come to the end of this video. Thank you for watching. I will see you in the next one.
27. Install and configure Packetbeat : In this video, let's discuss how you can install and configure Packetbeat to monitor network traffic between your applications. Packetbeat is a real-time network packet analyzer that you can use with Elasticsearch to provide an application monitoring and performance analytics system. It works by capturing the network traffic between your application servers and decoding the application layer protocols, such as HTTP, MySQL, etc. You can deploy Packetbeat on the same servers your applications reside on, or on its own dedicated server. In a cloud environment, because the underlying network devices are not exposed to you, it's hard to deploy Packetbeat on its own server, so in this demo we will deploy Packetbeat on our web server and monitor our HTTP traffic with it. There are four steps to deploy Packetbeat. First, we'll install Packetbeat. Second, we'll configure Packetbeat, setting up the traffic sniffer and network flows for our Packetbeat instance. Third, we'll configure Metricbeat to monitor Packetbeat. And finally, we'll start and enable Packetbeat. Let's move on to our lab system.
Here on the left-hand side I've got the instructions to install and configure Packetbeat on the web server, and on the right-hand side I've got my SSH connection into web server one. First, let's install Packetbeat, so we can type in sudo apt-get install packetbeat and hit Enter. Once Packetbeat is installed, we can start configuring it. First, cd into /etc/packetbeat. Here, if I do an ls, you'll see that I've got my packetbeat.reference.yml file, which I can refer to while I'm configuring my Packetbeat configuration file, which is packetbeat.yml. Let's do sudo nano packetbeat.yml. In this file, the first thing I have to configure is the network interface from which I want to sniff data. On Linux machines you can keep the value of any, which means sniff the traffic from all connected interfaces, or you can specify a particular network interface to only capture traffic on that interface. We'll leave it as default. Next, network flows are enabled by default; to disable flows you can set enabled as false in the flows configuration, but we'll leave it enabled. Now it's time to configure the protocols for which we want to monitor network traffic; this is under packetbeat.protocols. Here we will monitor traffic for ICMP, DNS, and HTTP, and disable the rest of the protocols by commenting them out, so I'll do that now. One thing to note is that the ports field is used to configure the ports on which you want to monitor traffic for each protocol. Your web server might be running on a port that's different from one of these default ports, for example port 8001; to monitor the flow for your web server in that scenario, you would either come in here and add your specific port like this, or also delete the rest of them. Since we are running on port 80, I'll remove all the other ports.
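In packetbeat.yml those choices come down to a few lines; here's a sketch, where the flow timeout and period are the shipped defaults rather than something set in the video:

```yaml
# /etc/packetbeat/packetbeat.yml -- sniffer and protocol settings (sketch)
packetbeat.interfaces.device: any   # sniff on all connected interfaces
packetbeat.flows:
  timeout: 30s
  period: 10s
packetbeat.protocols:
- type: icmp
  enabled: true
- type: dns
  ports: [53]
- type: http
  ports: [80]      # our web server listens on 80; add 8001 etc. if yours differs
```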
Now I'll move on to the next section, which is General. Here, uncomment tags, and I'll keep the tag as it appears. After this, let's set up some dashboards: configure setup.dashboards.enabled as true, so let me first uncomment this and set it to true; this will load the default Packetbeat dashboards into our Kibana instance. After that, we'll have to uncomment the host setting under Kibana and correct the URL; this will be the internal IP address of our ELK instance, 192.168.1.34. After that, let's move on to the output. In the output, the first thing we have to do is correct the URL for the Elasticsearch output section, again 192.168.1.34, which is the internal IP of my ELK instance. Next, under protocol, let's configure the protocol to be https. After this, we have to set up authentication, the elastic user and its password. Next, let's set up logging: uncomment logging.level but leave it as the default debug, and logging.selectors, but leave it as the default star, which means all selectors. Then, to send the logs to a file, I need to copy this configuration and paste it here. What this configuration does is enable logging to be sent to a file, and then we specify the parameters for that file, for example the path, the number of files to keep, and the permissions on the file. After this, let's set up HTTP monitoring through Metricbeat: copy these two settings and paste them here. The first setting enables the HTTP endpoint, and the second sets the port it needs to serve the monitoring data on. We'll configure this port in our beat-xpack file in a minute, so let's save this file. After this, let's edit our beat-xpack file: sudo nano /etc/metricbeat/modules.d/beat-xpack.yml. In this file, in the hosts section, we have to add the URL for our Packetbeat monitoring: a comma, a space, and in quotes http://localhost:5070. I can save this.
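The remaining steps are just service commands; roughly:

```bash
sudo systemctl status metricbeat   # make sure Metricbeat is still healthy
sudo systemctl start packetbeat    # start Packetbeat
sudo systemctl status packetbeat   # confirm it is running
```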
After this, let's first check that Metricbeat is running, so sudo systemctl status metricbeat; it's running. Now let's start our Packetbeat instance, sudo systemctl start packetbeat, and do a status on Packetbeat: sudo systemctl status packetbeat; it's running. Now let's move to the Kibana UI to look at some of the dashboards that are created by Packetbeat. From the homepage of my Kibana UI, I'll first go to the menu and then to Stack Monitoring. Here I'll go to the Beats section, and I can see that there is a beat for Packetbeat, so I can click on Beats and then on the Packetbeat entry, and you can see that it has started sending data to our ELK instance. Now let's look at some of the dashboards that Packetbeat has created for us in our Kibana UI, under Dashboard. Here, let's type in packet and then click on the dashboard that says Packetbeat HTTP ECS. This dashboard gives you the HTTP transactions, for example the number of transactions, status codes and error codes, the top HTTP requests, HTTP request history, etc. Likewise, you can also look at the network flows dashboard, where you can see connections over time, hosts creating traffic, hosts receiving traffic, and network traffic between hosts. In the DNS overview you can get some DNS-related data, for example DNS query summaries, DNS requests over time, the question types, DNS client and server IPs, and the top DNS questions. Next, you can also look at TLS transactions, with transaction counts and data transferred over time. And there is the overview dashboard, where you can look at the clients that are trying to access your websites, the transactions and transaction types, response times over time, response time percentiles, and errors versus successful transactions. What I want you to do after this is deploy Packetbeat on web server two yourself. With this, we come to the end of this video. Thank you for watching, and I will see you in the next one.
28. How to deploy a multi node cluster: Hey guys, in this lecture we'll create a three-node Elasticsearch cluster with two of the nodes being master eligible, and we'll also separate out our Kibana application onto its own instance. The communication between the Elasticsearch nodes will be over SSL, and the communication between Kibana and our Elasticsearch cluster will be over HTTPS. Here are the steps. First, we will create three Elasticsearch nodes on our GCP trial account. Then we'll also create a Kibana VM on our GCP trial account. After that, we'll install Elasticsearch on all three Elasticsearch nodes and install Kibana on its own node. Then we'll generate an Elasticsearch CA and create all the certificates; this is because we'll be using self-signed certificates. If you do not want to do that, you can instead generate certificate signing requests and get your company's CA or a public CA to give you the certificates for your cluster. Once we've got the certificates, we'll distribute them to all the nodes. Then we'll configure Elasticsearch and Kibana, and finally we'll ensure the Elasticsearch nodes join the Elasticsearch cluster and the Kibana node can communicate with Elasticsearch over HTTPS. So let us get started.
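Because the certificate work is the heart of this lecture, here is the rough shape of the commands we'll run later from /usr/share/elasticsearch. It's a sketch: the output file names are elasticsearch-certutil's defaults, and the http step is an interactive wizard whose answers are walked through in the video.

```bash
cd /usr/share/elasticsearch

# 1. Create a self-signed certificate authority (default output: elastic-stack-ca.p12)
sudo ./bin/elasticsearch-certutil ca

# 2. Create the transport-layer certificates signed by that CA
#    (default output: elastic-certificates.p12)
sudo ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

# 3. Create the HTTP-layer certificates for Kibana and client traffic
#    (interactive wizard; produces a zip containing http.p12 and elasticsearch-ca.pem)
sudo ./bin/elasticsearch-certutil http
```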
As you can see, I'm inside my Compute Engine VM instances page. Here I'll click on Create Instance, then on New VM instance from template, choose the ELK template, and click Continue. I'll change the region to US West 1, keep the name as elk-1, go down to the networking section, expand it, and make sure this VM is in the Elasticsearch VPC and in the monitoring subnet, and after that I'll click on Create. Once this VM is created, I can create two more the same way. Once I've got the three Elasticsearch instances created, I'll click on the Create Instance button again to finally create my Kibana instance: I'll click on VM instance from template, again use my ELK template and click Continue, change the name to kibana, change the region again, go down to networking, make sure it's under the monitoring subnet, and click Create. Finally, I've got my three Elasticsearch instances and my Kibana instance ready, so next we'll test connectivity to all of them. What I'll do first is click on these three lines to change the column display; I don't want zones, because I know which zones they're in, and I don't want recommendations or "in use by", and I'll click OK. Now I'll resize my browser window to bring up my terminal, so I've got my GCP trial account on the left-hand side and my Mac terminal on the right-hand side, and I'll check connectivity from my Mac to the external IP addresses of my GCP instances. I'll type ssh, a space, the lab user and an @ sign, then copy the external IP, paste it, and hit Enter; accept the host key with yes, and I'm able to log in to the first instance. Then I'll open up a new tab with the basic profile and do the same for the second instance, add another tab for the third, and finally copy the external IP of my Kibana instance, open up a new tab, and SSH into that as well. Now I'm able to connect to all of them, so I'll minimize this and we'll start configuring our instances. First we have to install Elasticsearch on all three Elasticsearch instances, so please go ahead and follow the same steps I showed you in the single-node deployment; just don't configure the elasticsearch.yml yet, just install it and come back, and I'll do the same on these instances. After you've installed Elasticsearch on all three Elasticsearch nodes, it's time to install Kibana on the Kibana node. I've already installed the public signing key and saved the repository definition, so all we need to do on this instance is sudo apt-get update and then sudo apt-get install kibana, and hit Enter. As you can see, Kibana got successfully installed on this machine. After Kibana is installed, I'll move over to the Visual Studio Code terminal so I can reference my notes while I'm configuring the services. Here on screen I've got my Visual Studio Code window open, with my notes on the left-hand side and the terminal on the right-hand side, and up top I've used the plus icon to create four tabs, one into each of my individual nodes on the GCP account.
So first we'll start configuring the elk-1 machine. Here I'll do sudo vi /etc/elasticsearch/elasticsearch.yml, hit Enter, and go into insert mode. The first thing I need to do is set up a name for my cluster; this will have to be the same across all my Elasticsearch nodes, and I'll name it demo_elk. Next I'll give a name to this node; I will call this node elk-1. We can keep the paths to the data and log directories the same, that's fine. Next I'll go to the network settings, change the network.host value to 0.0.0.0, and uncomment http.port. Next, we have to specify the seed hosts for our cluster; these are the initial hosts which will form our cluster, so here I'll put in the private IP addresses of all my Elasticsearch nodes. If I go into my GCP console I can read off the private IPs of the three nodes (in my lab they sit in the 192.168.1.0/24 monitoring subnet), and that's what goes in here. After we've provided the seed hosts, we need to declare which nodes can be master eligible. We want elk-1 and elk-2 to be master eligible, so that's what goes in here: I'll type in elk-1 and elk-2.
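So far elasticsearch.yml on elk-1 looks roughly like this sketch; the IPs are placeholders for whatever your own GCP console shows:

```yaml
# /etc/elasticsearch/elasticsearch.yml on elk-1 (sketch)
cluster.name: demo_elk
node.name: elk-1
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["<elk-1-private-IP>", "<elk-2-private-IP>", "<elk-3-private-IP>"]
cluster.initial_master_nodes: ["elk-1", "elk-2"]
```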
That's all for this file for now; I can save it with :wq!. Once I've done that, I need to go to /usr/share/elasticsearch and then into bin. Inside bin, if we do an ls, you will see there is a utility called elasticsearch-certutil. We will use that to first create a CA and then generate the certificates; these will be self-signed certificates. So let's get started. First we have to generate a CA: sudo ./elasticsearch-certutil ca, and hit Enter. We can keep the default file name and set no password for the CA. The resulting elastic-stack-ca.p12 file lands in /usr/share/elasticsearch, so I'll do an ls there to make sure the file exists; there it is. Now that the CA is generated, we need to generate the SSL certificates to be used by the Elasticsearch nodes. We'll go back into bin, hit the up arrow to get back to the certutil command, remove ca, and this time type cert, because we want to generate certificates. To generate the certificates I have to provide a CA, so I'll type --ca and the name of our CA .p12 file; because the file is in the default location we do not have to provide a fully qualified path. I'll hit Enter; when it asks for a password for the CA, we didn't set one when generating it, so we can just hit Enter, and we'll accept the defaults for the certificate file name and password as well. This file should also be in /usr/share/elasticsearch, so I'll go into that directory and do an ls, and there is our Elasticsearch certificate file. This one is for SSL on the transport layer, that is, the communication between the Elasticsearch nodes.

For Kibana communication we need to create an HTTP certificate. To do that, cd into bin again; this time I'll type sudo ./elasticsearch-certutil http. I don't want to generate a CSR, so I'll answer no; that option is what you'd use with a third-party CA, for example your company's CA or a public CA, but since we're using self-signed certificates I'll type n and hit Enter. Yes, we want to use an existing CA, so I'll type yes. Next it wants the path to my CA .p12 file, so I'll provide the full path, /usr/share/elasticsearch/elastic-stack-ca.p12, and hit Enter. There's no password on this CA file. Next it asks how long I want the certificates to be valid; I want the default, so I'll keep it at five years. Next it asks whether you want to generate a certificate per node; in a production setup you might, so you'd answer yes there, but for now I'll answer no and hit Enter. Now it wants the hostnames for all the nodes, or we can provide wildcards. Let me show you something for a minute: if we go to another instance and do sudo vi /etc/hosts to open up the hosts file, you'll see that all the GCP instances have an internal DNS name that looks like this: the hostname you provided during setup, then the zone, then the project, and then .internal. So we'll generate a certificate using a wildcard, *.internal, to cover everything from hostname, zone, and project; I'll type that in, hit Enter, hit Enter again, and answer yes, that's correct. Now it wants us to provide the IP addresses of all my nodes, so I'll enter the three internal IPs of my Elasticsearch nodes, one per line. These will obviously be different for you, so go to your GCP console, look at the internal IPs, and provide those here; if they are not correct, your nodes will not join the cluster. I'll hit Enter on an empty line to finish, confirm that these are correct, say no to changing any options, don't provide any password, and accept all the remaining defaults. This should create a zip file in /usr/share/elasticsearch, so let's go one level up, clear, and do an ls: we've now got one certificate file for the Elasticsearch nodes and a zip file.

Now let's see what's inside the zip file. I'll type in unzip and the zip file name; it can't find unzip, but that's an easy fix: sudo apt-get install unzip. Now that unzip is installed, we'll run the same unzip command again with sudo, then clear and do an ls. We need to go inside the extracted folder, so I'll cd into the elasticsearch folder, and an ls shows it has the http.p12 file. Let's go one level up and do an ls again: there is also a kibana folder, so let's go in there, and inside there is an elasticsearch-ca.pem file. So an http.p12 file and a CA .pem file were created inside this zip. What we'll do now is move all the relevant files into my /tmp directory so I can download them and then distribute them across the nodes. I'll go one level up, clear, do an ls, and then sudo cp the elastic-certificates.p12 file to /tmp first. Next I'll sudo cp my CA file, elastic-stack-ca.p12, to /tmp, then sudo cp the http.p12 file from inside the elasticsearch folder to /tmp as well, and finally sudo cp the elasticsearch-ca.pem file from inside the kibana folder to /tmp. Once I've done that, I'll go inside the /tmp folder and do a chmod to open up permissions on these files so they can be downloaded: sudo chmod 777 elastic*, and then sudo chmod 777 on my http.p12 file as well, then ls -l to make sure that's done; it looks like it is. This is only to make the download easy in the lab; you would not leave certificates world-readable in production.

Now, before we download these, we'll move the relevant certificate files for this particular node into the config directory. Let's type in sudo su, then cd /etc/elasticsearch. Here we'll create a directory called certs. Once that's done, we'll copy the transport SSL certificate for Elasticsearch, which is the certificate in /usr/share/elasticsearch, into the certs folder: cp /usr/share/elasticsearch/elastic-certificates.p12 certs/ and hit Enter, then cd into the certs folder to make sure the certificate has been copied properly; that's done. Now let's go one level up and edit our elasticsearch.yml again, so vi elasticsearch.yml. Inside this file we have to go right down to the security section, and under security we'll copy some security settings from my notes: xpack.security.enabled set to true enables X-Pack security; xpack.security.transport.ssl.enabled set to true enables SSL on the transport layer; we use certificate as the SSL verification mode; and we make it mandatory for clients to be authenticated. Finally, we have to provide the keystore and truststore paths for our elastic-certificates.p12 file, which we just copied into the certs folder inside the Elasticsearch config directory. Because we moved it into the certs folder, we provide the full path, /etc/elasticsearch/certs/elastic-certificates.p12, for both the keystore and the truststore, and then we can copy all these settings and paste them into the file. Before we move on, we also have to enable HTTPS on the cluster; for that we first enable SSL on the HTTP layer and then provide the path to the http.p12 file, again as a full path under /etc/elasticsearch/certs, and we copy these two settings towards the bottom of the elasticsearch.yml file. Now I can save this file. Once we've done that, I'll copy this elasticsearch.yml to my /tmp folder as well and chmod it so it can be downloaded, and let's also copy the http.p12 certificate into the certs directory: cp /usr/share/elasticsearch/elasticsearch/http.p12 /etc/elasticsearch/certs, and move into the certs folder to make sure the file is there.

Now let's move back to our GCP console window to download all these files. On the GCP console I'll click on SSH next to my elk-1 instance; this opens SSH for this instance in a new browser tab. Once the SSH connection is up, click on the settings button on the top right, click on Download File, give it the full path, /tmp/elastic-certificates.p12, and click Download; that's the first certificate downloaded. Next we have to download our http.p12 file, so /tmp/http.p12, and download that one. Then click Download File again; this time we'll be downloading the .pem file, so to get the name I'll go into my /tmp directory, do an ls, and grab the name of the .pem file; I've downloaded that as well. Next we need to download our elasticsearch.yml file, so Download File again, /tmp/elasticsearch.yml, and click Download. Once that's done, I can close this SSH tab and open up a new one to my elk-2. Once the SSH to elk-2 is open, click on the settings button again, click Upload File, and go to the location where we downloaded the certificates. The first file I want is the Elasticsearch transport certificate, so I'll click Open, and this uploads the file into the home directory of the lab user. The next file I want here is the http.p12 file, so I'll click Upload again and upload it to the same location, and finally I'll upload my elasticsearch.yml. Once we've done that, we have to upload these three files to the third Elasticsearch instance as well, so I'll do that now. Once you've uploaded the files to elk-3, we go to the Kibana instance and do the same thing, but this time we only have to upload the elasticsearch-ca.pem file, so click on the settings button, click Upload File, pick the .pem file for the Elasticsearch CA, and click Open.

Now that you've uploaded all these files, they have all landed in the home directory of the lab user, or whichever user you're logged in as on your Linux machines, and we have to copy them into the correct paths. First, let's go to elk-2. Move into the lab user's home directory, and you can see we've got an elasticsearch.yml file and two certificate files. First sudo su to become root, then go inside the /etc/elasticsearch directory and create a new directory called certs. We'll copy our certificates into this new directory: the http.p12 into certs/, and then the Elasticsearch transport certificate, elastic-certificates.p12, into certs as well. Let's cd into the certs directory and do an ls to make sure they've been copied; that's fine. Now we need to replace the existing elasticsearch.yml with the new file we've just copied over, so cp the elasticsearch.yml from the lab user's home directory into /etc/elasticsearch, then go one level up and do an ls -l. Here we'll have to change a few settings inside this elasticsearch.yml, so sudo nano elasticsearch.yml, go down to node.name, and change it for this node. We want the node names to be elk- followed by the node number, one, two, or three, so let me just double-check what I've done on elk-1: on elk-1 I'll go to /etc/elasticsearch, do a nano on elasticsearch.yml, make sure the name there is elk-1, and save it. Now that that's corrected, let's save the file on elk-2 as well and move on to elk-3. We have to do the same thing there: copy the files from the lab user's home directory into /etc/elasticsearch, create the certs folder, copy the right certificates into it, then nano the elasticsearch.yml, change the node name, and save the file.

Now let's move on to our Kibana instance. On Kibana there was just one file, so if I list the lab user's home directory there's only the elasticsearch-ca.pem. We'll do a sudo su, cd to /etc/kibana, make a new directory called certs here, and copy the elasticsearch-ca.pem file from the home directory into certs. Now let's do a nano on kibana.yml. The first thing we have to do is uncomment server.port. Then we have to change server.host to 0.0.0.0. Next, where it says server.publicBaseUrl, we have to provide the URL that the Kibana service will run on. We will not use HTTPS for the Kibana GUI itself, so I'll type in http://, go to VM instances, copy the internal IP of my Kibana instance, paste it here, and configure Kibana to run on 5601. Once I've done that, we have to specify a server name for Kibana, so I'll uncomment that and type in demo-kibana. Next we need to specify the Elasticsearch instances for our queries: if you just want your queries to run against a single Elasticsearch instance, you specify the address of that one instance, but because we want them to run against all our instances, I'm going to supply all three Elasticsearch instances here, using https URLs. I'll go back to my notes, copy that string, remove the default entry, paste mine in, and correct the IP addresses. Once that's done, the next thing is to provide the path to the CA .pem file: go to elasticsearch.ssl.certificateAuthorities, uncomment it, remove the default value, and type in /etc/kibana/certs/elasticsearch-ca.pem. Once I've done that, I can save this file.

Now it's time to start our Elasticsearch cluster. I'll go to my elk-1 machine and type in sudo systemctl start elasticsearch, then do the same on the second machine, and then on the third: sudo systemctl start elasticsearch and hit Enter. Let's see if the first Elasticsearch instance came up fine; it seems like there was an issue. To troubleshoot, you can use journalctl -f -u and then the service name, so let's have a look: it failed to start the service. Another place you can look is /var/log/elasticsearch; if we do an ls there, there will be a .log file, so let's cat that. It says access denied on the certificate file, so let's go back to our /etc/elasticsearch folder, clear everything, and do an ls -l on the certs folder. Our Elasticsearch certificate has permissions rw for the owner and nothing for the group, so we might need to change this. We'll do a chmod, and for now I'm just going to use 777 on elastic-certificates.p12, and I'll do the same for my http.p12. For a production deployment you would want to set the certificate permissions properly, but for now let's just try to start the service with these permissions. It looks like it started fine this time, so let's do a systemctl status elasticsearch; that's done. We can go to the other instance, sudo su, cd /etc/elasticsearch, do an ls -l, look at the certs folder and the permissions on the certificates, and check the status on this machine: this one is running. Let's have a look at the status on the third machine as well. As you can see, if there's a permissions issue your service might not come up; if you run into any trouble starting the Elasticsearch service, go to /var/log/elasticsearch, do an ls, and look at the log file, and as I've shown, it will give you a hint of where the error is. I've also shown you how to change permissions on files to sort out such issues. As you can see, with a permission of 755 this certificate loaded correctly, so what we'll do is go to /etc/elasticsearch, go into certs, change the permissions to 755 on the Elasticsearch certificate as well, do the same thing for our http.p12 certificate, then restart the service and do a systemctl status; Elasticsearch is running. Let's clear out.

Next, we have to generate the credentials for our Elasticsearch cluster. We'll go to /usr/share/elasticsearch/bin; inside bin, if you do an ls, there is an elasticsearch-setup-passwords utility, so we'll run ./elasticsearch-setup-passwords auto to auto-generate all my passwords. Answer yes, and it generates the passwords for this cluster; I'll copy these out into a file. Before I do anything else, I need to go to my Kibana instance and edit my kibana.yml: I'll do a sudo su, nano kibana.yml, and in this file provide the credentials for my kibana_system user. This way Kibana will use password authentication to communicate with Elasticsearch. Copy this in, save the file, start Kibana, and do a status on the service; it's active. Now let's go back to our Elasticsearch instance, elk-1, and get the status of the cluster to make sure all three nodes have connected. I'll go to my notes, copy the curl command for cluster health, add the username and password for the elastic user to this command, -u elastic and then the password, change the http to https, and hit Enter. If you get an error when connecting through curl saying there is a self-signed certificate in the certificate chain, all you have to do is tell curl to ignore the self-signed certificate by adding -k right after curl, and hit Enter. Here you can see that our cluster has three nodes, it's called demo_elk, and the status of this cluster is green; that is a positive response. So next we'll copy the public IP address of our Kibana instance and see if we can get to the Kibana UI using that address on port 5601; the Kibana UI loads. Then we'll try to log in using the elastic user: copy the password, enter elastic and the password here, and click Explore on my own. As you can see, authentication was successful. This is how you can create a multi-node ELK cluster. With this, we have come to the end of this lecture. Thank you for watching. I will see you in the next one. Bye.
29. Overview of elasticsearch nodes: Hey guys, in this lecture let's have a look at an Elasticsearch cluster in a bit more detail. An Elasticsearch cluster is a group of one or more node instances that are connected together. So what is an Elasticsearch node? An Elasticsearch node is a single server that is part of a cluster, and what that means is that all these nodes have the same cluster.name setting inside their elasticsearch.yml configuration file; all nodes that share the same cluster.name can form a cluster. A node stores data and participates in the cluster's indexing and search capabilities. As a node joins or leaves a cluster, the cluster automatically reorganizes itself to evenly distribute the data across all the available nodes in that cluster, so in case of the loss of a node you do not lose the data; it is still available on the other nodes. If you are running a single instance of Elasticsearch, that type of cluster is fully functional, but if you lose your node, you lose all your data, so if you have high availability or fault tolerance requirements you are better off with a multi-node cluster. Nodes in an Elasticsearch cluster can have multiple roles; this is specified by the node.roles setting in your elasticsearch.yml file. Here on the screen I've given you a brief summary of all the node roles that are available in Elasticsearch, and now let's have a look at each one of them; there's a quick sketch of the setting below.
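For orientation, the setting itself is just a list in elasticsearch.yml; a couple of illustrative values (not taken from the slide) are:

```yaml
# elasticsearch.yml -- node.roles decides what a node is allowed to do (sketch)
node.roles: [ master ]                 # dedicated master-eligible node
# node.roles: [ master, voting_only ]  # may vote in elections but never becomes master
# node.roles: [ data, ingest ]         # a general-purpose data node
```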
at master-eligible nodes. A master node in an Elasticsearch cluster is responsible for lightweight cluster-wide actions such as creating or deleting an index, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes, so basically what data gets stored onto which node. It is important for cluster health to have a stable master node, because the master node performs all the cluster management activities, and if your master node is not stable, your cluster cannot figure out what's going on. Now, there are two types
of master-eligible nodes: you can have a dedicated master or a voting-only master. A dedicated master node, as the name suggests, is a dedicated node which acts as the master node for that cluster. This type of setup is beneficial if you have a very large cluster and your cluster management tasks are so heavy that you need to provide a dedicated master node; for smaller clusters, you probably do not need one. You specify that a node is a dedicated master by providing only the value master in its node.roles setting; this node will not act as any other node type. The second type of master-eligible node is a voting-only master node. This type of node can participate in a master election, but it can never be the master node itself. So it can cast its vote when a master is being elected, but it will never put itself forward to become the master node.
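As a quick sketch of how that looks in elasticsearch.yml (the file path assumes the default package install; pick whichever variant the node should be):

    sudo tee -a /etc/elasticsearch/elasticsearch.yml > /dev/null <<'EOF'
    # Dedicated master node: only the master role.
    node.roles: [ master ]
    # A voting-only master-eligible node would instead use:
    # node.roles: [ master, voting_only ]
    EOF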
Now next, let's have a look at data nodes. Data nodes hold the shards that contain the documents that you've indexed, so they contain the data that you've stored on your cluster. Data nodes handle data-related operations such as create, read, update, and delete, as well as search and aggregation operations. These operations are I/O, CPU, and memory intensive, so it is always a good idea to monitor these resources on your data nodes and, when they become overloaded, to add more data nodes to your cluster to even out the load. You can also have a content data node that accommodates user-created content; it can handle create, read, update, and delete operations, as well as searches and aggregations, on your cluster. Now, data nodes can be
configured in various tiers. So let's have a look
at each one of them. Now to give you an example, let's say your data's recently
arrived in your cluster. Let's say you've got
some CRM data that's gone into the cluster
and it's just come in. Obviously, all your sales associates, or anyone else who needs that CRM data, will be looking at this
data as soon as it comes in. So you expect a high frequency of search for this sort of data. You will store it
in the hot tier. So this is data that just came in and it's being
frequently searched. Next, let us say the data has become about three months old, so your client list has become a bit stale. You might still need to run some searches on it, but not as frequently as when the data had just come in; you will store this type of data in the warm tier. After that, you have the cold tier. This is for when the data has become older and is rarely searched; you will store this type of data in the cold tier. And finally, when the data only gets ad hoc, once-in-a-blue-moon searches, you might want to store this type of data in the frozen tier. One thing to note about the
frozen tier is that you can only store partially mounted
indices in the frozen tier. So basically what
happens is that the data in this tier is not stored
in the local cache. Instead, there is a shared cache among the cluster, and this type of data is stored in that shared cache, so obviously the searches become slower. Now, the main reason for having this type of architecture is that you can provide more resources, more CPU, more memory, and so on, to the hot nodes than to, for example, a warm node; a warm node in turn will have more resources than a cold node and fewer resources than a hot node. And likewise, a cold node will still have a local cache, but it will probably be a slower system than a warm node. So this way, you can assign cheaper resources to the tiers as your data ages. Now, to specify each of these data tiers: for example, to put a node on the hot tier, you specify the value data_hot under node.roles, and for your warm and cold tiers, you specify data_warm and data_cold.
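As a sketch, a warm-tier data node would be tagged like this; the other tier values follow the same pattern, and this assumes the default config file location:

    # Tier roles go in node.roles; other values are data_hot, data_cold,
    # data_frozen, plus data_content for regular user-created content.
    echo 'node.roles: [ data_warm ]' | sudo tee -a /etc/elasticsearch/elasticsearch.yml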
Now next, let's have a look at the ingest node. An ingest node can execute
pre-processing pipelines, which are made up of one
or more ingest processors. After that, we have
coordinating nodes. Now coordinating nodes behave
as smart load balancers. Let's say you have a large cluster and a large amount of data coming into your cluster. These nodes can act as load balancers and benefit the cluster by offloading the coordination portion of each request to specific nodes, so that your data nodes are not overloaded.
Now, next, you have remote-eligible nodes. What these nodes do is, for example, say you have cluster one and a remote cluster called cluster two. You can enable cross-cluster search between these two clusters, and you can define a node as a remote-eligible node, so that queries that hit this node can perform a search on the remote cluster. This way, you can combine the data in two clusters, but the searches on the remote cluster will be slower. Next, you have machine learning nodes. These nodes run jobs and handle machine learning API requests, and you can specify this node type by configuring a node.roles value of ml. And finally, you have transform nodes, which run transforms and handle transform API requests. You configure these with a node.roles value of transform.
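For example, a dedicated transform node would be declared like this; ingest and ml nodes follow the same pattern, and an empty node.roles list gives you a coordinating-only node (a sketch, assuming the default file location):

    echo 'node.roles: [ transform ]' | sudo tee -a /etc/elasticsearch/elasticsearch.yml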
So these are the types of nodes that you can configure in your
Elasticsearch cluster. With this, we have come to
the end of this lecture. In this lecture, I've tried
to give you an overview of different node types in
an Elasticsearch cluster. What I will do
is I will attach some links in the
description of this video. So you can study a bit more on the different types of nodes
in an Elasticsearch cluster. Thank you for watching and I will see you in the next one. Bye.
30. What's new and elasticsearch v8.x installation: Hey guys, Elastic, the company behind Elasticsearch, has recently announced
general availability of Elasticsearch
version eight dot x. This version has some
breaking changes and the installation steps
are also a bit different. Let's have a look at these now. Here on the screen,
I've got the What's new page opened up from the Elasticsearch documentation. First of all, version 8 introduces several breaking changes to the Elasticsearch REST APIs, but they have allowed for 7.x compatibility, which should give you enough time to prepare for an upgrade. Next, security features are enabled and configured
by default. When you install Elastic Search, it should already create some self-signed
certificates and include some security settings in your elasticsearch.yml file. After that is security on system indices: in version 8.0, even the elastic superuser does not have native access to system indices. To provide access to a system index, you need to set the allow_restricted_indices permission to true in a role granted to that user.
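As a rough sketch of what that looks like through the security API; the role name, index pattern, and privileges below are made-up placeholders, and allow_restricted_indices is the flag being discussed:

    curl -k -u elastic:'<elastic-password>' -X PUT 'https://localhost:9200/_security/role/demo_restricted_reader' \
      -H 'Content-Type: application/json' -d '
    {
      "indices": [
        {
          "names": [ ".security*" ],
          "privileges": [ "read" ],
          "allow_restricted_indices": true
        }
      ]
    }'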
Now, after that, there are also some improvements in search, storage, indexing, and
natural language processing. Next, let's have a look
at the release notes. Now, on the release notes page, you will see some known issues; basically, on certain Linux platforms the elastic user password and the Kibana enrollment token are not generated automatically, but it gives you a list of commands that you can run to generate these after
the installation. Then it lists all the breaking
changes in this version. Now if you are already
supporting a production cluster, I recommend that you go and look at each of these changes to make sure there are no surprises after the upgrade. With this, we can
move on to installing Elasticsearch eight
dot 0 onto our VMs. Now I'm inside my GCP
platform and VM instances, I've got an ELK machine where we will install Elasticsearch 8.0. I'll copy the external IP
address of this machine. And then on my
Visual Studio Code, I've got some steps
on the left-hand side and my terminal on
the right-hand side. So I'll type in ssh, a space, my lab user followed by an at sign, and then the external IP address. And I'll hit Enter
here, type in yes. And once inside the
machine I will type in sudo apt-get update
to update the system. Then I'll clear out of this
and type in sudo apt-get install wget, to install wget on this machine. Once wget is installed, we can start with the
installation of ELK Stack. So first, I will download and install
the public signing key. So I'll copy the
command and paste it inside my terminal
and hit Enter. Now what I'll do
is I will include all these commands with the
description of this video. So you guys can have access
to these commands as well. Next, we will install the apt-transport-https package. After that, I'll clear my screen, and next we will save the repository definition. One important thing to note is the part of the URL where it says eight dot x; for 7.x packages, this part would say seven dot x instead. There is nothing else major in the URL, just that small change from seven to eight. I'll paste it here, and this should add the Elasticsearch 8.x repository definition onto this machine. Now next, I'm ready to install my ELK Stack
on this machine. So I'll copy this command and I'll paste it here,
and I'll hit Enter.
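For reference, the commands I'm pasting here are essentially the ones from Elastic's apt install documentation; treat this as a sketch and check the current docs, since the key handling in particular has changed between releases.

    # Signing key, HTTPS transport for apt, the 8.x repository, then the packages.
    wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
      sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
    sudo apt-get install apt-transport-https
    echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | \
      sudo tee /etc/apt/sources.list.d/elastic-8.x.list
    sudo apt-get update && sudo apt-get install elasticsearch kibana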
Now, during the installation, you will see some information pop up under a security autoconfiguration heading. Under this heading, you will see the built-in superuser elastic password, on the line that says the generated password for the elastic built-in superuser. You need to save this password, because we will
use this later on. I'll go down in my notes. I will save it here. Now after everything
is installed, I will clear my screen. Now, from here, we will first configure our Elasticsearch: sudo vi /etc/elasticsearch/elasticsearch.yml. In this file, first we will change the cluster name; I will call it demo-elk. Then I will change the node name, which I will call elk-1. Next, I will go down to network.host and change the value from the default to 0.0.0.0. After that, I'll uncomment http.port, but I'll leave it at the default 9200. Since this is a single-node cluster, we do not need to do anything under discovery.
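A quick way to double-check those edits (demo values; your cluster and node names will differ):

    sudo grep -E '^(cluster\.name|node\.name|network\.host|http\.port)' /etc/elasticsearch/elasticsearch.yml
    # Expected, roughly:
    #   cluster.name: demo-elk
    #   node.name: elk-1
    #   network.host: 0.0.0.0
    #   http.port: 9200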
Up until http.port, the configuration was similar to what we had to do for 7.x. From version 8 onwards, you will start seeing a section marked BEGIN SECURITY AUTO CONFIGURATION in the elasticsearch.yml file. This is because, as I told you earlier, Elasticsearch now includes some security options by default. As you can see here, xpack security is enabled by default, and security enrollment is also enabled by default. It has created a keystore for HTTP communication under certs, and the file is http.p12. It has also created a mutual authentication keystore and truststore for the transport layer. Now, we leave all these settings at their defaults and save the file. Now, if I go into my Elasticsearch directory, so sudo su, and if
I do an LS here, you will see there is a
directory called certs. And if I go into certs and do a listing, you will see the CA certificate, which is http_ca.crt, the transport certificate under transport.p12, and the HTTPS certificate under http.p12. Now we can start the Elasticsearch service: sudo systemctl start elasticsearch. Now once the service
has started, if I try and do a curl, just like I used to
do for 7.x: curl -X GET on http://localhost:9200/_cluster/health?pretty. And if I hit Enter here, I'll get an error, because the
cluster is actually on HTTPS. So I will need to first
change from HTTP to HTTPS. But even when I run this,
I should get an error. It says SSL certificate problem, self-signed certificates
and certificate chain. To overcome this, I'll hit the up arrow and, just before -X GET, I will type in minus k and hit Enter. And this time I've got the missing authentication credentials error. Now, we have not yet set up the built-in user passwords on this machine, but by default, Elastic has set up the elastic user's password for us. So I will copy this password and hit the up arrow. And after the minus k option, which was to ignore the self-signed certificates, I will type in minus u, a space, elastic, then a colon, and then I will provide the password for the elastic user. And now if I hit Enter, I should get the cluster health. So as you can see, the status of the cluster is green. With the self-signed certificates you need to use the minus k option, and by default you have to use the elastic user to get the cluster health for the first time. Once you've done that,
let's clear the screen. Now, before we configure Kibana, we have to set up the password for the kibana_system user. So first I will go into /usr/share/elasticsearch/bin and do an ls. From this bin directory, I will use the elasticsearch-reset-password utility, which replaces the elasticsearch-setup-passwords utility we used for 7.x. So I will type in ./elasticsearch-reset-password, then minus u, to tell the utility which user's password to reset, then kibana_system, hit Enter, and type in y here. And I've got the password for my kibana_system user. I'll copy this and paste it in my notes as well.
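That step, as a sketch (the path and user are what this demo uses; the confirmation prompt and the generated password will vary):

    cd /usr/share/elasticsearch/bin
    sudo ./elasticsearch-reset-password -u kibana_system
    # Confirm with "y"; note down the password it prints.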
Now, once I've done that, I can configure Kibana: sudo vi /etc/kibana/kibana.yml. Again, I will uncomment server.port, uncomment server.host and change it from localhost to 0.0.0.0, and next uncomment server.publicBaseUrl. Here I will type in http, and now we need the internal IP address of our Kibana instance; I'll copy the internal IP, paste it here, and provide the port. Then we can configure server.name, and here I'll type in demo-kibana. Next, we'll go down to the System: Elasticsearch section, and under there I will provide the value for elasticsearch.username, which is kibana_system by default, and then we'll have to provide the password which we just generated; copy it and paste it here. And now, if you go down under the System: Elasticsearch (Optional) section, because we're using self-signed certificates for this demo, I will uncomment elasticsearch.ssl.verificationMode and change the value from full to none. This will ignore any warnings for self-signed certificates.
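Pulled together, the Kibana side looks roughly like this; the IP, name, and password are placeholders, and note that elasticsearch.hosts has to point at the cluster over https in 8.x, which is exactly the detail I trip over in a moment:

    sudo grep -E '^(server\.|elasticsearch\.)' /etc/kibana/kibana.yml
    # Expected, roughly:
    #   server.port: 5601
    #   server.host: "0.0.0.0"
    #   server.publicBaseUrl: "http://<kibana-internal-ip>:5601"
    #   server.name: "demo-kibana"
    #   elasticsearch.hosts: ["https://<elasticsearch-ip>:9200"]
    #   elasticsearch.username: "kibana_system"
    #   elasticsearch.password: "<kibana_system-password>"
    #   elasticsearch.ssl.verificationMode: none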
Now, another thing to note is that from version 8.x onwards, Kibana by default writes its logs to a file; in some previous versions, the Kibana logs were by default only available on standard output. So this is also a nice change. Now let's save this file and start Kibana: sudo systemctl start kibana. Let us get the status; it's active and running. Let's go back to our instance, copy the public IP address, paste it into a browser, and then provide the port 5601. Now I'm getting a Kibana
server is not ready error. So I'll go back
to my deployment. And here I'll open up
my YAML file again, and I will look for errors. Now, inside my kibana.yml file, I see that I forgot to fix the host value for Elasticsearch: for version 8, what you have to do is come in here and change it from HTTP to HTTPS. Now I'll save this file again, and let's restart Kibana. Now once the service
has restarted, let's check the status: active and running. We've got the message saying the HTTP server is running at 0.0.0.0:5601. Now let us go back to the
browser. Let me refresh it. Now we've got the
login page for Kibana. So I'll type in
elastic here and I will copy the password
for my elastic superuser, go back and put the
password in and hit Login. Click never, and we are in. This is how you can install Elasticsearch
version eight dot 0. The steps are a
bit different from Elasticsearch version 7.x, but they're not that different. Most of the settings are the same, and if you are careful, I'm pretty sure you'll be able to confidently set up your own clusters. With this, we've come to the
end of this lecture. I will see you in
the next one. Bye.
31. Installing elastic-agent and analysing apache logs: Hey guys, there are
two ways to send data from your servers
to elastic search. First, you can use beats, which are lightweight data
shippers that send data to an Elasticsearch
cluster. For each use case, there is a specific beat; for example, for metrics data you use Metricbeat, and for file data such as log files, you use Filebeat. Now, this makes it a bit cumbersome to manage, because depending on your use case, you might need to install multiple beats onto the same VM. Now the second method
is elastic agent. Elastic agent is a
single agent for logs, metrics, security and
threat prevention data. Because this is a single agent, it is easier and faster to deploy monitoring across
your environment. Elastic agent makes use of
integrations in Kibana, which provide an
easy way to connect Elastic Search to outside
systems and services. For example, if you need to collect logs from
Apache web server, you use the Apache
integration to instruct the elastic agent to collect that data from
an Apache server. These integrations are
managed through Kibana UI. Now, all communication between the elastic agents and Elastic Search happens
through the fleet server. Now this fleet server
is a component that is used to centrally
manage all elastic agents. This is how it works. If you create a
policy in Kibana, it is saved in Elasticsearch. The elastic agent will have an enrollment key, which it will use to authenticate against the fleet server. Once it does that, it will ask the fleet server for a policy, and the fleet server will get this policy from Elasticsearch and then provide it to the elastic agent. In other words, when a new agent policy is created in Kibana, it is saved onto Elasticsearch. To enroll in the policy, elastic agents send a request to the fleet server, using the enrollment key generated for authentication. The fleet server then receives this request, gets the agent policy from Elasticsearch, and then ships this policy to all agents enrolled in that policy. The elastic agent then uses
configuration information stored inside this policy to collect and send data to Elastic Search. Now, take an example. Let's say you got Apache
installed on one agent and MySQL installed on another agent. Here in Kibana, you will assign a policy for Apache. This agent will enroll in that policy with the fleet server, and the fleet server will then send this policy to this agent, which will tell it to collect Apache server logs. Once it's got the policy, it can send those logs and other data on to Elasticsearch. Now, the agent which has MySQL on it will not be enrolled in that policy; you will assign some other policy to this agent. This way, you can have different servers sending different data to your Elasticsearch cluster. Next, there are two methods
to deploy elastic agents. First, you can manage
them via Fleet. This allows agent policies and lifecycles to be centrally managed in Kibana. There's another option of self-managed elastic agents, which is an advanced use case where you have to manually manage each agent. For this lecture, we will use Fleet-managed agents. Now, here are the configuration steps. First, you configure elasticsearch.yml, then you install a fleet server. After that, you install
an elastic agent. Once you've installed the
agent onto the servers, then you configure
an agent policy. And finally, you validate
that the data has been sent correctly by looking at
dashboards and other metrics. So let's get started. Now here on the screen, I've got my single
node elastic server. So first I will have to
create a fleet server. So I will click on Create Instance, then I will click on New VM instance from template, choose ELK, and then Continue. I will change the name to fleet-server, change the region to US West One, and then go down under Networking. I will make sure that I'm inside the Elasticsearch VPC and the monitoring subnet, and then click on Create. Now, while this fleet server is being created, we will click on Create Instance again to create our web server. So I'll click on New VM instance from template, choose the web server template, and hit Continue. Here, I will change the region to US West One. You can keep the name as is, and I'll go down to Networking. Under Networking, I will change the network interface from the monitoring subnet to the web server subnet. Then I click on Create. Now, once we've got the three
virtual machines ready, we can start with
the installation. Now I'll move on to my
Visual Studio Code. Now inside my
Visual Studio Code, I've got my notes to install the elastic agent on the fleet server and on the web server on the left-hand side, and I've got my terminal into all three machines on the right-hand side. So terminal one is ELK one, terminal two is the web server, and terminal three is my fleet server. So let's get started. First, we have to go to the ELK server, so I'll move on to terminal one. On the ELK server, we have to add these settings into our elasticsearch.yml configuration file, so I'll type sudo vi /etc/elasticsearch/elasticsearch.yml and hit Enter. Now, inside this file, you will see that the first setting, xpack.security.enabled, is already set to true. I will go into insert mode, and here, just below xpack.security.enabled, I'll paste the setting xpack.security.authc.api_key.enabled with a value of true. So I'll copy this and paste it here. Once I've done that, I can save this file: Escape, and write to the file. Now I can restart my Elasticsearch: sudo systemctl restart elasticsearch. Once the service has restarted, let's check the status of the service, sudo systemctl status elasticsearch, and make sure the service is active and running.
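In shell terms, that change and restart amount to roughly the following; appending assumes the setting is not already present in the file:

    # Enable API keys, which Fleet needs, then restart Elasticsearch.
    echo 'xpack.security.authc.api_key.enabled: true' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
    sudo systemctl restart elasticsearch
    sudo systemctl status elasticsearch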
And now let's move on to our fleet server, which is on terminal three. On this machine, first we have to download the elastic agent, so I'll copy the curl command and paste it on the terminal to download the elastic agent. Once the elastic agent archive has downloaded, we need to extract the tar file: copy the tar -xzf command, paste it on the terminal, and hit Enter.
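The download and extract step looks roughly like this; substitute the agent version you are actually installing for the 8.0.0 placeholder:

    curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.0.0-linux-x86_64.tar.gz
    tar -xzf elastic-agent-8.0.0-linux-x86_64.tar.gz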
Now next, we'll move on to the Kibana web page. So I've got the Kibana web page up here. From the left navigation menu, I will go down under Management, click on Fleet, and then go to Settings. Here, under Outputs, I will click on Actions; we'll leave the name as default, and the type will be Elasticsearch, because we want to send our fleet server's output to Elasticsearch. For the host, I'll change HTTP to HTTPS, and then I will replace the localhost value with the private IP address of my ELK instance. So I'll copy it from here and replace it. Now next, we need to go back
to our Visual Studio Code, and then we need to
go to Elasticsearch. On the Elasticsearch instance, I need to do sudo su first, and then cd /etc/elasticsearch and then into certs. Now, if I do an ls in this directory, you will see that it has my CA certificate. So let's cat this certificate: cat http_ca.crt. Now, once you've got the output of the cat command
for CA certificate, what we need to do
is we need to update the CA certificate under the
advanced YAML configuration. So this way, we are telling
the system that we use this CA certificate
to establish trust between our fleet server and the Elasticsearch server. You can do it in two ways. You can either use the
CA trusted fingerprint or you can paste the CA
certificate. For this demo, we will paste the
CA certificate. Now, the way you
do it is you use a setting called ssl.certificate_authorities, and then you paste the CA certificate; let me just expand it. So this is an example on the screen. Under ssl.certificate_authorities, copy this certificate
for your machine, which would be different
to what it shows on mine. So let's just copy
all that and I'll replace this one and
I will paste it here. Now you might need to update
the formatting a bit. Now, let's just copy this
whole command and move to our advanced YAML
configuration on my Kibana page and paste
this here, like this. Now, once you've done that, we can save this: Save and apply settings, and then confirm the deployment. Now after that, we
need to go to Agents. On the Agents page, it will say that a fleet server is required before we can start enrolling agents, so we'll do that now. Under Add a Fleet Server, we will leave the agent policy as default. We've already downloaded the elastic agent onto our fleet server VM, so we can skip that step. Under choose a deployment mode, we will use Quick start
for this demo. What this does is it
allows the fleet server to generate self-signed certificates. For your production system, you will need to choose Production and provide your own certificate in your fleet server configuration. Now next, under step four, we will need to provide the URL of our fleet server host. So https, and now we need to go to VM instances, copy the internal IP address of our fleet server, paste it here, leave the port at the default 8220 for this demo, and click on Add host. Now, once you've added the host, we need to click on
Generate Service token. Now, after that, under step six, it gives us the command
to start our fleet server. I'll copy this command and move back to our
Visual Studio Code. And I'm on the
fleet server terminal. But before we can paste this command, we need to make some changes. Now, here I've got a sample command that will work. So I'll paste the command that we copied from the Kibana instance. From this command, we will take the fleet-server-es URL, and I will change the fleet-server-es value accordingly. Now, for the url value, we need to provide the private IP address of our fleet server. So I'll go to VM instances, copy the private IP address of the fleet server, go back, and then change the fleet server placeholder to the actual IP address of the fleet server. Next, we need to supply the service token that we just generated: I'll copy all of this and replace the service token value with the new one, along with the fleet server policy; for your system, these values could be different. After that, we need to add two more flags: fleet-server-es-insecure, which is needed because our Elasticsearch server is using self-signed certificates, and insecure, because our fleet server is also using self-signed certificates. Now, in your production deployment, because you will not be using self-signed certificates, you do not need these last two flags. Now, on the fleet server terminal, let me
just expand it a bit. If I do an ls, you
will see that there is a directory for
elastic agent. I will cd into this directory. Now, here I will type in sudo ./elastic-agent install with minus f, and then I will use the backslash escape character to continue the command across lines. First, we will supply the URL of our fleet server; I'll copy it from the notes, paste it here, and hit Enter. Next, I will supply the fleet-server-es URL, so the URL of the Elasticsearch server to connect to, and hit Enter. Next, I will supply the service token: copy that, paste it here, and hit Enter. After that, we'll supply the policy under which we will enroll this fleet server, which is the default policy, and hit Enter. Next, we will supply the fleet-server-es-insecure flag, for the self-signed certificates on Elasticsearch, and finally the insecure flag, for the self-signed certificates on the fleet server.
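Assembled, the fleet server install command looks roughly like this. It is a sketch: the IPs, token, and policy are placeholders for the values generated in your own deployment, and the two insecure flags are only there because this demo uses self-signed certificates.

    cd elastic-agent-8.0.0-linux-x86_64   # directory name depends on the version you downloaded
    sudo ./elastic-agent install -f \
      --url=https://<fleet-server-private-ip>:8220 \
      --fleet-server-es=https://<elasticsearch-private-ip>:9200 \
      --fleet-server-service-token=<service-token> \
      --fleet-server-policy=<fleet-server-policy-id> \
      --fleet-server-es-insecure \
      --insecure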
Now I'll hit Enter, and this should start the deployment of the elastic agent as a fleet server. The first log line tells you that it is generating self-signed certificates for the fleet server. Next, it tries to start the fleet server. Then you can see that SSL/TLS verification was disabled, because we're using insecure mode. And finally, you get the message that the elastic agent was successfully enrolled. So now what we can do is move back to the Kibana page, and you should be able to see that the fleet server was connected, and I'll hit Continue. Now, if your fleet server was successfully connected and is configured properly, you should see the
status as healthy. And when you click on it, you'll be able to
see some details. And next we need to
check if it is able to send logs to Elastic
Search, which it can. Now, if you run into any issues when you're
deploying this fleet server, what you can do is
if I go back to my Visual Studio Code and
clear everything, you can type in sudo systemctl status elastic-agent, look at the logs, and you should be able to see them here. Now, apart from this, if you've deployed the fleet server agent, but for some reason it's not connecting and you need to uninstall it, you do that by going into /usr/bin and then elastic-agent. And here, if I do a minus h (I need to add sudo), you should be able to see the various commands available with the elastic-agent script. There is an enroll command if you want to enroll agents into a fleet, there is the install command, which we've just used to install the fleet server agent, you can restart the daemon, run the daemon, check the status of the daemon, and you can uninstall this elastic agent. So there might be some scenarios where you need to uninstall and re-install the daemon, and this is how you can do it. Basically, just go to /usr/bin/elastic-agent and type in uninstall. I'll exit out of it now and clear the screen.
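So the two troubleshooting commands to remember are roughly these; the binary path is where the installer places the agent in this setup:

    sudo systemctl status elastic-agent      # service state and recent log lines
    sudo /usr/bin/elastic-agent uninstall    # remove a misbehaving agent so it can be re-installed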
Now, because the elastic agent is installed and healthy, what we can do is move on to installing the elastic agent on our web server. So to do that, let's go to our terminal two, on the web server. On the web server, we first have to download the elastic agent, so I'll copy all of this from our notes, paste the command on the terminal, and hit Enter. Now, once the file has downloaded, we can extract it with tar -xzf on the elastic-agent archive and hit Enter. Now, once you've done that, let's move back to our Kibana
page and then go to fleet. Now here, there is
an Add agent button. I'll click on Add agent and leave the default agent policy. Now, we've already downloaded the elastic agent. Next, it gives us a command, so let's copy this and go back to our terminal. I'll paste this command, but I will not run it yet; what I need to do is append minus minus insecure to this command, because we are using self-signed certificates. Again, for your production deployment, you will not need to add this flag.
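With the flag appended, the command Kibana hands you ends up looking roughly like this; the fleet server IP and enrollment token are placeholders generated for your own deployment:

    cd elastic-agent-8.0.0-linux-x86_64
    sudo ./elastic-agent install \
      --url=https://<fleet-server-private-ip>:8220 \
      --enrollment-token=<enrollment-token> \
      --insecure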
Now, if I hit Enter... actually, I forgot to go into the elastic-agent directory first, so let me cd into it and do an ls. And from here, we need to run this script. I'll hit the up arrow until I get the command back, run it, and type in yes. And I've got the message that the elastic agent has been successfully installed. So I'll go back to my Kibana, and if the agent was installed properly, I should be able to see it on this page. And now you can see, I can see the web server. So again, we'll click
on this web server. We'll make sure we've
got logs flowing in from the web server
as well, which we have. Now before I let you go, I want to show you
one more thing. Now, elastic agent makes
use of integrations. Now because our web server
is an Apache web server, we can enhance monitoring
for Apache server. What we'll do is we'll
go back to fleet. And then what we'll do is we
need to add an agent policy. So we'll go to Create agent policy, and we'll keep the policy name as web server policy. Now we'll add a description: this policy is used to collect
Apache metrics and logs. We'll leave everything
as default and I'll click on Create agent policy. Now this policy is created. What we need to do
is we need to add some integrations
onto this policy. I'll click on the
left navigation menu, and then I'll go down
to integrations. So under management
integrations, here we can see the various integrations
that are available. For now, we are
interested in Apache, Apache HTTP server, so
I'll click on that. Now this page gives
you an overview of the Apache HTTP
Server Integration. What we'll do is we'll click
on Add Apache HTTP server. Now, on the configure integration page, we can give it a name; we'll leave it as the default, apache-1. Next, we'll keep the collect logs from Apache instances setting on. I do not want to turn on the experimental feature, and we'll leave the collect metrics from Apache instances setting on as well. Now next, we need to specify a policy where to apply
this integration. So we'll click on the
drop-down and then choose the web server policy
that we just created. Then we'll click on
Save and Continue. And now the message says to
complete this integration, add elastic agent to your
host to collect data. And I'll just click on
Add elastic agent later. Now what we need to
do is we need to go back to our fleet. We need to go to
the web server one. So I'll click on that. And under here, what
I'll do is I'll click on Actions and then click on Assign to a new policy. Then I will choose the
web server policy, and then I will click
on Assign policy. Now, before I do that, it says the selected agent policy will collect data from two integrations: System, which is the default, and the Apache HTTP Server integration which we just configured. So I'll click on Assign policy. Now, once we have done that, click on the agent, and it says the inputs for this integration are logs and metrics from our Apache instance, and the System integration will collect logs and metrics from our OS. Now next, let's go to the
left navigation menu, and then let's go to dashboards. Now here you'll
see that there are some more dashboards
available to you. What we'll do is
we'll type in Apache now to check if we're
getting Apache logs, we can click on Access
and error logs. And from the left
drop-down under hostname, we'll select our web server. Now, if your integration
is set up properly, you should start seeing
some data, like here, where I'm seeing all the response codes. Next, let's check
another dashboard. So I'll go to dashboard
and click on Apache, type in Apache again. And this time I'll
click on the Metrics Apache overview dashboard to see if we're getting metrics data from our
Apache server or not. Now here, under
requests per second, you can see that there is
some data flowing through. This is how you can install
a fleet server and then enroll your web servers or any other application
servers into your fleet. And then you can use
integrations to collect some meaningful data from your applications
or web servers. With this, we have come to
the end of this lecture. Thank you for watching. I will see you in
the next one. Bye.