Grafana | Sean Bradley | Skillshare

Playback Speed


  • 0.5x
  • 1x (Normal)
  • 1.25x
  • 1.5x
  • 2x

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more

Lessons in This Class

51 Lessons (6h 7m)
    • 1. Grafana Course Introduction (2:36)
    • 2. Install and Start Grafana (11:47)
    • 3. Upgrade/Downgrade Grafana (4:09)
    • 4. Point a Domain Name (4:59)
    • 5. Reverse Proxy Grafana with Nginx (6:11)
    • 6. Install an SSL Certificate (6:57)
    • 7. Create our First Data Source (3:23)
    • 8. Panel Rows (1:54)
    • 9. Panel Presentation Options (3:46)
    • 10. Dashboard Versioning (2:05)
    • 11. Graph Panel: Visualisation Options (18:09)
    • 12. Graph Panel: Overrides (4:05)
    • 13. Graph Panel: Transformations (4:38)
    • 14. Stat Panel (4:22)
    • 15. Gauge Panel (1:26)
    • 16. Bar Gauge Panel (1:05)
    • 17. Table Panel (6:54)
    • 18. Create MySQL Data Source, Collector and Dashboard (22:14)
    • 19. Create a Custom MySQL Time Series Query (10:50)
    • 20. Graphing Non Time Series SQL Data in Grafana (6:15)
    • 21. Install Loki Binary and Start as a Service (10:01)
    • 22. Install Promtail Binary and Start as a Service (6:12)
    • 23. LogQL (18:02)
    • 24. Install an External Promtail Service (16:45)
    • 25. Annotation Queries Linking the Log and Graph Panels (5:42)
    • 26. Read Nginx Logs with Promtail and Loki (13:11)
    • 27. Install Prometheus Service and Data Source (5:38)
    • 28. Install Prometheus Dashboards (4:38)
    • 29. Setup Grafana Metrics Prometheus Dashboard (6:02)
    • 30. Install Second Prometheus Node Exporter (7:33)
    • 31. Install InfluxDB Server and Data Source (8:37)
    • 32. Install Telegraf and configure for InfluxDB (7:54)
    • 33. Create A Dashboard For Linux System Metrics (4:06)
    • 34. Install SNMP Agent and Configure Telegraf (9:15)
    • 35. Add Multiple SNMP Devices to Telegraf (6:27)
    • 36. Import an SNMP Dashboard for InfluxDB and Telegraf (4:44)
    • 37. Create and Configure a Zabbix Data Source (5:13)
    • 38. Import Zabbix Dashboards (6:23)
    • 39. Elasticsearch Data Source (8:35)
    • 40. Setup Elasticsearch Filebeat (7:49)
    • 41. Setup Elasticsearch Metricbeat (4:31)
    • 42. Setup an Elasticsearch Dashboard (3:45)
    • 43. Dashboard Variables (15:39)
    • 44. Dynamic Tables from Variables (4:35)
    • 45. Dynamic Timeseries Graphs from Variables (4:08)
    • 46. Create an Email Alert Notification Channel (9:51)
    • 47. Create Alerts for SNMP No Data (14:43)
    • 48. Create Telegram Contact Point (4:32)
    • 49. Users and Roles (9:06)
    • 50. Teams (2:14)
    • 51. Orgs (2:57)

About This Class

Welcome to my course on Grafana

Grafana is the leading open source tool for visualizing metrics, time series data and application analytics.

I demonstrate many things in this course, with all the example commands provided for you to easily copy and paste.

This is a learn-by-example course: I demonstrate every concept discussed so that you can see it working, and you can try it out for yourself as well.

This course comes with accompanying documentation that you can access for free. You can then match what you see in the videos, copy and paste directly from my documentation, and see the same results.

In this course we will:

  • Install Grafana from packages

  • Create a domain name, set up an Nginx reverse proxy and install an SSL certificate

  • Explore the Graph, Gauge, Bar Gauge, Table, Text, Heatmap and Logs panels

  • Create many different types of data sources, from MySQL, Zabbix, InfluxDB, Prometheus, Loki and Elasticsearch

  • Configure their various collection processes, such as the MySQL Event Scheduler, Telegraf, Node Exporters, SNMP agents, Promtail and Beats

  • Look at graphing time series data versus non time series data

  • Install dashboards for each of the data sources, experimenting with community-created dashboards as well as our own

  • Monitor SNMP devices using the Telegraf agent and InfluxDB data sources

  • Set up Elasticsearch with the Filebeat and Metricbeat services

  • Create annotation queries and link the Log and Graph panels together

  • Look at dynamic dashboard variables, dynamic tables and graphs

  • Look at creating value groups/tags and how to use them with different kinds of data sources

  • Set up alerting channels/contact points, understand the different alerting options, configure an example that detects offline SNMP devices, and demonstrate receiving email alerts via a local SMTP server

By the end of the course, you will have your own dedicated, working Grafana server in the cloud, with SSL, a domain name, and many example data sources and collectors configured, ready for you to take it to the next level.

Once again, this is a learn-by-example course, with all the example commands available for you to copy and paste. I demonstrate them working, and you will be able to do the same.

You are now ready to continue.

Thanks for taking part in my course, and I'll see you there.

Meet Your Teacher

Sean Bradley

Course Instructor

Hello, I'm Sean.

For over 20 years I have been an IT professional developing and managing real-time, low-latency, high-availability, asynchronous, multi-threaded, remotely managed, fully automated and monitored solutions in the education, aeronautical, banking, drone, gaming and telecommunications industries.

I have also created hundreds of open-source GitHub repositories, Medium articles and YouTube video tutorials.

Transcripts

1. Grafana Course Introduction: Hello and welcome to my course on Grafana. Data can come from many different places, more than you can imagine, and Grafana provides a way for you to visualize that data through graphs, plus some level of reporting and alerting. So first we need to install Grafana, and I'll show you how to do that. Now, Grafana doesn't exist by itself; it needs to extract data from all kinds of data sources. There are thousands of them, and I'll provide instructions for some of the most popular, for example MySQL, Loki, Prometheus, InfluxDB, Zabbix and Elasticsearch. We'll use those data sources to retrieve data, mostly in a time series format, from the various services they are written to query. For example, MySQL data can be returned as either time series or tabular data; Loki and Promtail work together to read log files from a server; Prometheus has many kinds of exporters for all kinds of other services; InfluxDB is very similar; and then there are Zabbix and Elasticsearch. So we'll experiment with all kinds of data sources, which are all conceptually different from each other, so that you get a very good overview of how you might approach collecting data from the thousands of other data sources out there. At the beginning we'll look at the TestData DB data source, which means we won't need to install anything extra beyond the initial Grafana server install, and we can start experimenting with the user interface and the different kinds of visualizations available in the default install. Predominantly throughout the course I'll be using Ubuntu LTS servers, because they are very easy for beginners to understand, and I'll provide all the commands on my accompanying website. In the resources alongside each video there will be links pointing to the relevant pages in my documentation, for example Install Promtail Binary and Start as a Service, where the commands are written out so that you can just copy and paste them onto your server and get on with it without much delay. During the course I recommend copying what I do, using the same versions of the software and the same operating systems each time, so that you can see it working for yourself. Once you have some familiarity with Grafana and all the various data sources available to you, you'll be able to apply that knowledge to your own, more bespoke situations. Remember, Grafana is a tool to help you visualize data from other systems; it doesn't exist by itself. You need to develop the skill of reverse engineering other systems, whatever they are, and understanding how to get data from them into Grafana so that you can visualize it, create alerts, or do further analysis. Hopefully by the end of the course you will have developed those skills. Thanks for taking part in my course, and let's get started.

2. Install and Start Grafana: Let's install Grafana. Using the official download link for Grafana, we can open the downloads page. Currently we're at version 8.2.3. Something you should be aware of with Grafana is that there are new versions all the time; every couple of weeks is pretty normal, and there are also nightly builds you can use. I'm going to select the latest, which is 8.2.3, and select OSS, the open source version. This will give us more in-depth knowledge of what's going on in Grafana.
Later on, once you've finished with the course, you can always go to the Enterprise version, which gives you some more options plus a certain amount of support, and you would also need to create a free Grafana Cloud account. You can do all of those things in your own time; right now we'll use the OSS version, so be sure that's selected. Also, I'll be installing the Linux version, on an Ubuntu LTS server, so these are the commands that I'll need. You can install on other operating systems, but while learning with this course I recommend using Linux so that what you see looks like what I see. In case you don't have any spare servers at your disposal, I recommend getting one from Digital Ocean. On my official page here you can get a referral link for Digital Ocean with free credit; commonly it's $50 for 30 days, but lately it seems to be $100 for a 60-day period. If you visit that link it says free credit activated: get started on Digital Ocean with $100 of credit over 60 days for new users. I recommend this because we will create lots of servers and delete them, and it doesn't matter if you make mistakes. It's also a better approach than using any existing servers you might already have, because we will break things, and you don't want to break anything too important, such as your personal computer or any production servers at your work; isolate your learning process on these throwaway servers. It doesn't cost you anything in the end, and we won't use up that $100 credit in 60 days anyway; you wouldn't even use up $50 in 30 days. I also have an offer for Hetzner that you can use; I use Digital Ocean and Hetzner all the time, and these are my preferred services. You also have the choice of using AWS, GCP or many other cloud providers, but so that it matches what I show in the videos, and since it won't cost you anything extra thanks to the free credit for new users, I recommend Digital Ocean. So let's get ourselves a server to install Grafana on. I'm going to use the latest Ubuntu LTS from Digital Ocean. I log in to my Digital Ocean account, and I'll show you how easy it is to start up your own droplet, as they're called on Digital Ocean. Create a droplet; this is a server. There are many operating systems to choose from; the latest long-term support, or LTS, Ubuntu is version 20.04 at the time of creating this video. Soon there will be a version 22.04, which you can also use once it comes out, and I'll update my documentation to include commands for both 20.04 and 22.04 where applicable. Choose a plan: the basic plan is good enough, and I'm just going to choose the $6 a month default here, with one gig of RAM, 25 gigs of SSD and 1000 gigs of transfer. That is plenty for learning and installing our first Grafana server; there are other options you can play with, but the $6 a month plan is good enough. Scrolling down, choose a region, anywhere you like; I'll pick Amsterdam. Leave the VPC network as default. For authentication, if you know how to use SSH keys I recommend using SSH keys; otherwise you can create a password for logging on to your server. I'm going to use SSH keys, and I've already created one in Digital Ocean that I can use. Name it anything you like; I'm going to call mine grafana, and create the droplet. After about a minute, you'll be given a public IP address.
That server will now be available for you to use on the internet. So that's the IP address I was just given by Digital Ocean, and I can copy that. Now I can log in to that server using an SSH client. I'm going to use a program called PuTTY, which is quite popular when you're running a Windows operating system, so I recommend using that; if you click the link, that's the website with the latest release, and you can download and install it if you like. So I've opened up PuTTY on my system. On the session configuration page, paste that IP address in; it will use the default port 22 for SSH. I can save that for later: just type any name you like and press Save, so it's there if I need it again. Before I open the session, since I'm using SSH authentication, I need to point it to a local copy of my SSH key. That's done. I'm also going to change the appearance from the default font because it's quite small, press OK, go back to Session and press Save again. If I double-click the saved session it gives me a new SSH window, and I'm going to log in as root. So I've now logged on to the cloud server that I got from Digital Ocean, where I'll install Grafana, at the public IP address I was given; you'll be given a different IP address. Before we run the commands to install Grafana, it's good practice to update the APT cache, so sudo apt update. This fetches the latest download information for any packages we might install, and that's done now; it's just common practice. Going back to the official install instructions on the Grafana page, the first line is sudo apt-get install -y, which means answer yes to any choices it might give us, and it installs two dependencies called adduser and libfontconfig1. Copy that, and in PuTTY, if you just right-click with your mouse it pastes into the screen; press Enter. If we look at the output of that command, adduser is already the newest version, so it wasn't installed, but libfontconfig1 is in the list of new packages to be installed, plus its dependencies. So that's now done. The next line uses wget to download the Debian package installer from the Grafana downloads site, the OSS version, release 8.2.3; there are many, many releases you can download, and I'm using 8.2.3. Copy that and right-click to paste it into the console; you can see grafana_8.2.3_amd64.deb, and press Enter. That has downloaded, 100 percent, excellent. Now run the Debian package installer: dpkg -i means install, in this case grafana 8.2.3 amd64. That was pretty quick. Now we can start the Grafana server by executing the line shown there, or another way of doing it is to run sudo service grafana-server start. We can check its status, and it says active (running), excellent; Ctrl+C to get out of that screen.
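For reference, the install steps narrated above come down to roughly the following commands. This is a sketch based on the 8.2.3 OSS release shown in the video; check the Grafana downloads page for the exact URL of whichever version you choose.

    # Refresh the APT cache and install the dependencies Grafana needs
    sudo apt update
    sudo apt-get install -y adduser libfontconfig1

    # Download and install the Grafana OSS 8.2.3 Debian package
    wget https://dl.grafana.com/oss/release/grafana_8.2.3_amd64.deb
    sudo dpkg -i grafana_8.2.3_amd64.deb

    # Start the Grafana service and confirm it is running
    sudo service grafana-server start
    sudo service grafana-server status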
Now, remembering the IP address I was given by Digital Ocean and that I logged on to with PuTTY, copy that, open a new browser, paste the IP address and append :3000 to the end. That is your new Grafana server on the internet, hosted at that IP address on port 3000. The login is admin, admin; that's A-D-M-I-N, lower case. You are then asked to change the password or skip it; I'm going to change the password and submit. And I've logged on to the new Grafana server, so we're ready to continue. Now, I used a Digital Ocean server there; I could have used Hetzner and the process would have been very similar. If you chose to use AWS, GCP or other providers, port 3000 might not be open for you instantly like that, and you may have to modify firewall settings in the user interface your cloud provider gives you. For example, I'll show you the AWS version of doing just what I did. I'm in AWS EC2: launch instance, choose Ubuntu 20.04 LTS 64-bit, try the t2.micro instance type, configure instance details using all defaults, add storage of 8 gigs, review and launch, choose or create a key pair, launch instance. Under instances the state shows pending, then running. When creating your AWS version you'll be given a public IP address up there; put that into PuTTY, save your settings, and open it. You'll have to use the SSH key that AWS provides, or the one you've set up, and the username will be ubuntu. Then it's the same process: sudo apt update, and I'm just going to copy all the install commands in one go, right-click and run. It looks like it didn't run the last two, so copy the wget line; very good, although the network seems slow on this micro AWS instance. Then run the package installer, start the server, and check the status of the grafana-server service: it says it's running, Ctrl+C. The IP address given to me by AWS was that one, the public IP, so copy that and type it into a browser with :3000. It's unlikely to work immediately with AWS, GCP and many others; you might need to set firewall rules. Down on the Security tab, under inbound rules, we already have port 22 open for TCP, and we'll have to create another rule for port 3000 on the security group. So edit the inbound rules, add a rule, custom TCP, port 3000, from all IPv4 addresses, anywhere, and save the rules. Try it again in the browser, and there we go: we have Grafana running on AWS. Now, I'm not a fan of AWS and I won't be using it often throughout the course; I'm just showing you the extra steps you need to take if you want to use it. I prefer Digital Ocean or Hetzner because it's much simpler and I have all the control I need. We don't need to set firewall rules when using Digital Ocean or Hetzner; we have that option in their cloud user interfaces, but you also have the option to use what's called iptables, and when those times come up I'll show the iptables commands. Use AWS if you want, but note that I won't be using AWS in the course, only Digital Ocean. So, to match what I'm doing in the course, use Digital Ocean and PuTTY and install the latest Ubuntu LTS. And this is my Grafana server running on Digital Ocean. Excellent.

3. Upgrade/Downgrade Grafana: Now, upgrading Grafana. I'm running version 8.2.3; a good way to verify that is to go to one of these pages, for example Configuration, Data Sources, and at the bottom it says 8.2.3.
Now, I haven't done anything with my server, so it's very safe to upgrade it to 8.2.4, since 8.2.4 is now available. Since I'm using the open source version, there is no guarantee of backwards compatibility, so this is something you do at your own risk. Right now I don't have any data sources, so it's a completely safe thing to do. Do expect that if you upgrade Grafana, some of your data sources or dashboards may no longer work; that's just the reality of using Grafana. There are new versions of Grafana every two weeks or so and something is going to break, so after a while you become good at figuring out and fixing whatever no longer works. If you do decide to upgrade Grafana, it's best to do it in small steps. For example, I'm going to go to 8.2.4; that way there aren't going to be many problems, and it's easy for me to refer to the documentation and find out what has changed since the previous version. But if you were to jump from version 5 to version 8, don't expect much to keep working, because Grafana changes quite a lot across major versions. Since I'm using Digital Ocean, before I do an upgrade I can take a snapshot of the server: on my Grafana server's admin page in Digital Ocean there's an option called Snapshots, so I can take a live snapshot, which takes a minute or two, and then if I break my Grafana server I can always restore from that snapshot. I've done snapshots several times in the past and recovered from them, and they work very well; they just take a few minutes to create and to restore from. Whatever cloud platform you use will have its own way of backing up or taking snapshots; for me the snapshot is by far the easiest way to do it. I'm not going to take a snapshot of my Grafana server since it's brand new anyway and there's no risk if I break it; if I do break it by upgrading to 8.2.4, I can always reinstall 8.2.3. The other consideration is that when I installed Grafana I selected the OSS version and used the instructions that were written for 8.2.3; the instructions for the upgraded version just show a different number. If you want to see all the version numbers, they're in the drop-down on the downloads page: 8.2.3 has the original install instructions, 8.2.4 has the newer install instructions. My server already has the dependencies installed, so I don't need to run that line again; I'm just going to download the 8.2.4 amd64 Debian package. Copy that, and on my server right-click to paste; I'm using wget to download the package. 100 percent, that was very fast. Now I can install it by running the Debian package installer with 8.2.4. This technique can be used to downgrade or upgrade, it doesn't matter; the version numbers are the important thing. Right-click to paste the dpkg -i command, which means install the Debian package, grafana 8.2.4 amd64, and press Enter. It says unpacking Grafana 8.2.4 over 8.2.3, so it's installing. The whole process keeps happening in the background for about a minute; if I go to that page and refresh it, it now says 8.2.4 at the bottom. This can take about a minute, so if it still says the older version for you, just give it a little more time.
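The upgrade narrated above boils down to two commands. This is a sketch assuming the 8.2.4 OSS package; the same pattern works for downgrading by choosing an older version number from the downloads page.

    # Download the newer (or older) package and install it over the existing one
    wget https://dl.grafana.com/oss/release/grafana_8.2.4_amd64.deb
    sudo dpkg -i grafana_8.2.4_amd64.deb

    # Optional: restart the service if the UI still shows the old version
    sudo service grafana-server restart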
So that's now 8.2.4, and that's pretty much it for upgrading and downgrading. But remember it's open source, and it's at your own risk. To minimize the amount of work in fixing problems caused by upgrading, go one number at a time. A version like 8.2.4 means major, minor and patch, so either update as patches come through or update to the next minor version; 8.3 is the next minor version, and the next major is version 9. I wouldn't recommend going straight from one major version to the next without at least going through some of the minor and patch versions first. And remember, take a snapshot before you start the upgrade process if you think it's important enough, using the options provided by your cloud platform. Excellent.

4. Point a Domain Name: So this is my Grafana server that I've just set up, the Digital Ocean version, and that's the IP address Digital Ocean gave me. You will probably have a different IP address depending on which cloud provider you used, and it doesn't matter; it's always different. That is the URL I use to access my Grafana server, and while that's okay, it doesn't look very professional if I were to show it to a client. So in the next three sections, Point a Domain Name, Reverse Proxy Grafana with Nginx, and Install an SSL Certificate, I'll make several changes to my setup so that rather than accessing my Grafana server using that address, I can access it using a more professional-looking https://grafana.sbcode.net, or whatever domain name you want to use to represent your service. This is optional: you can continue to use your IP address with :3000 to access your Grafana server if you wish. I'm just showing you the steps to set up a domain name and use HTTPS without the port number so that you know how to do it. This is not a Grafana-specific technique; it can be applied to any service you run on the internet, whether that's Grafana or not. If you're not interested, you can skip straight to the section Create our First Data Source. Note that later in the course we will set up email alerts, and depending on which email provider you send through, the email will probably be rejected if the domain name of the sender doesn't match the IP address of the server it comes from; that's one of the benefits of using a domain name and SSL. But you can delay that process if you really want to and just move on. Anyway, let's get on with pointing a domain name. As I said, I'm going to use the URL grafana.sbcode.net. I already have a domain name, sbcode.net, so I can just create a subdomain of it called grafana, which I'll demonstrate in a moment. If you don't have a domain name already, or a friend can't lend you one, I can recommend Namecheap, which, as the name says, sells cheap domain names depending on which TLD you use. Visit that link and you get a search tool where you can type in any domain you think you want, mygrafanaservice for example, and search. You can buy any of the available domains for the listed price; mygrafanaservice.xyz is very, very cheap for the first year. I already have a domain name, so I'm not going to do that; I'll use my existing sbcode.net domain. So I've logged on to Namecheap, where I can manage my sbcode.net domain.
I've opened the Advanced DNS tab, and down here I can add a new record. Looking at this first section on pointing a domain name, I'm going to create an A record for grafana.sbcode.net that points to that IP address there, which is the IP address I got from Digital Ocean. Your IP address will be different, and your domain name will be too; you can point either the main domain name you bought or a subdomain of it, it's up to you. I'm using the Namecheap user interface; depending on where you buy your domain names the interface will look different, but they all have the same concept of adding a new A record. So in this row I've got an A record with the host grafana, the value being that IP address, TTL set to automatic, which is good, and I press the tick to save the changes. You can see I'm modifying my sbcode.net domain here, and the A record alias written there actually means sbcode.net, so grafana.sbcode.net points to that IP address. After some time that address will resolve. Now if I try it, instead of the IP address I replace it with the domain name, still using :3000, so http://grafana.sbcode.net:3000 in a new browser tab. It hasn't propagated just yet; it can take some time. You can use an online tool such as DNS Checker, or a command on your server, to check the progress of propagation. I'll open DNS Checker, type in just the domain name, grafana.sbcode.net, select A record, and search: it hasn't propagated everywhere yet. Let's check again; it looks like it's slowly propagating. It can take some time. It's taken two hours so far for me to get this far; sometimes it's much faster and sometimes much slower. Let's try the address again, and that's working, brilliant. So grafana.sbcode.net:3000 works, and I can move on to the next section, where I'm going to reverse proxy Grafana using Nginx. That will create a server-side redirect which hosts my domain on port 80 and then internally redirects to port 3000, so I won't need to use the :3000 any more. That's in the next video. Excellent, let's continue.
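If you prefer checking propagation from the command line rather than an online tool, here is a quick sketch, assuming the grafana.sbcode.net example subdomain (substitute your own domain):

    # Query the A record; repeat until it returns your server's IP address
    dig +short grafana.sbcode.net A

    # Or ask a specific public resolver to see what other parts of the internet resolve
    dig +short grafana.sbcode.net A @8.8.8.8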
5. Reverse Proxy Grafana with Nginx: Now let's reverse proxy Grafana with Nginx. The reason I want to add a reverse proxy in front of Grafana is that in the next step I will add an SSL certificate, and I'll bind the SSL certificate to the Nginx proxy rather than to the Grafana service itself. Using a proxy is also a common, more generic approach to changing the port number of a service running on the internet, rather than modifying Grafana's settings explicitly, and it's a concept you can use for many other services you might host on a server. The purpose of this section is to remove the need to type the :3000, so rather than typing grafana.sbcode.net:3000, which works now, I will just be able to type http://grafana.sbcode.net without the port. For that we're using Nginx. So, SSH onto your server. I've logged on to my Grafana server from Digital Ocean, and I can first test whether Nginx is installed, which is very unlikely, with nginx -v for version. It's not there, so we can install it with the generic install command for Nginx; answer yes, very good. Now test the version again; if I press the up arrow on my keyboard it shows me the last commands I typed, and nginx -v shows version 1.18. Excellent, we can continue. We can check the status, which should already be running, with sudo service nginx status; yes, Nginx is active and running, Ctrl+C to get out of that. Now, Nginx by default hosts a very simple website on port 80, not port 3000 like the Grafana service. So if I type the IP address by itself into my browser, without a port, it defaults to port 80 behind the scenes and shows me the default Nginx welcome page. If you're using AWS, Azure, GCP or a similar service, you may have to set up a firewall rule; in AWS I had to open port 3000 in the security group, and you'd have to open port 80 in the equivalent section of your cloud provider's interface. In Digital Ocean and Hetzner, port 80 is accessible by default as long as something on your server is exposing it, and right now that is Nginx: it's listening on port 80, and if I visit the IP address directly, whether I type port 80 or not, it serves that simple web page. Now let's create a specific configuration for Grafana so that we can also reach our Grafana service through port 80 using HTTP. cd to the Nginx sites-enabled folder, which is usually /etc/nginx/sites-enabled, so copy that whole line: cd /etc/nginx/sites-enabled. If I type ls it shows there's one file in there called default; that default file contains the instructions that tell Nginx to serve that welcome content when someone visits port 80 on that IP address. We're going to add another configuration that listens for the domain name we set up, so we'll create a new file: sudo nano followed by your domain name. Before that I'm just going to clear the screen. My file is grafana.sbcode.net.conf; you can actually name the file anything you like, I'm just naming it after my domain. That creates a new file with that name in the sites-enabled folder, and nano is a text editor for Linux that's quite easy to use. Press Enter and the text editor opens, so we can write the contents of the new file. Going further down my documentation, copy this section, using the copy-to-clipboard icon if you like, then right-click to paste the contents into the text editor. Up at the top, for server_name, change it to grafana.sbcode.net, and make sure there's a semicolon at the end. This causes the Nginx proxy, which is essentially a web server itself, to listen on port 80, and if the domain name typed in was grafana.sbcode.net, to redirect the TCP connection on to http://localhost:3000, which is where the Grafana service is listening; that's the proxy_pass directive. Then Ctrl+X to save, as the options at the bottom show, and select Y for yes. If I type ls now, which lists the contents of the folder, it shows two files: default, and the other one, grafana.sbcode.net.conf.
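The server block pasted into that file looks roughly like this. This is a minimal sketch assuming the grafana.sbcode.net example domain and the default Grafana port 3000, written as a heredoc so it can be pasted straight into a shell instead of using nano:

    # Create /etc/nginx/sites-enabled/grafana.sbcode.net.conf with a minimal reverse proxy block
    sudo tee /etc/nginx/sites-enabled/grafana.sbcode.net.conf > /dev/null <<'EOF'
    server {
        listen 80;
        server_name grafana.sbcode.net;

        location / {
            proxy_pass http://localhost:3000;
        }
    }
    EOF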
Let's verify that the configuration is correct; you can do that with nginx -t. It says the syntax is OK and the test is successful, so the syntax of my grafana.sbcode.net.conf is fine. Let's restart Nginx and check its status: very good, active and running, Ctrl+C to get out of that. Now if I open my browser and visit just the address, and it doesn't matter whether I type http:// or not, grafana.sbcode.net takes me straight to my Grafana application. There we go: for grafana.sbcode.net it is no longer necessary to type the :3000. Just remember, this is using the default port 80 now, and since I'm using a cloud provider which doesn't force a firewall in front of your services the way AWS and several others do, port 80 is already accessible. If you're using AWS, you'd have to add port 80 to your security group. And just so you know, port 3000 does continue to work if you want to use it, but it's no longer necessary, so you could actually remove that extra rule from your security group in AWS if you wanted to. Later on I'll create an iptables rule to block port 3000, but that's in the next video. So that's what we did in this video: removed the need to type :3000. That's already looking much better, but it's still not perfect; if I look at the address bar it says Not secure. In the next video we'll add an SSL certificate, bind it at the Nginx proxy, and be able to use https instead of http. Excellent.

6. Install an SSL Certificate: So let's add the SSL certificate now. For this I'll use a free certificate service via Certbot, which will fix the problem in the URL bar where it says Not secure; we will no longer access our URL via HTTP but via HTTPS. Follow the instructions on the Certbot website; I've outlined what I'm doing down here. On the Certbot website I select that my HTTP website is running Nginx on Ubuntu 20, and it gives specific instructions below. Ubuntu 20.04 LTS already has snapd installed, so we won't need to install that, and the upcoming Ubuntu 22.04 LTS will also have snapd installed. We just ensure that our version of snapd is up to date by copying those commands and running them on our server over SSH. So I'm on my Grafana server from Digital Ocean: right-click to paste sudo snap install core and sudo snap refresh core; that just makes sure I have the latest version of snapd, good. Next, remove any older Certbot versions; I didn't have any installed. Then install Certbot: sudo snap install --classic certbot, very good. Next, prepare the certbot command so that it can be executed from the command line. Now we just run sudo certbot --nginx, like that, because we're using an Nginx server. It asks us to enter some information, which we have to do: you need to agree to the terms, and you don't have to share your information. Because I created a configuration for grafana.sbcode.net before, it has found it, and it asks which names you would like to activate HTTPS for; I choose option number 1. Now it's requesting a certificate. It's important that your domain name has fully propagated when you run this step, because it will verify that the domain name points to the same IP address from different locations in the world. That works for me, and it says it successfully received a certificate.
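The Certbot steps narrated above are roughly these. This is a sketch of the standard snap-based install for Nginx on Ubuntu; check the Certbot site for the current instructions for your own OS and web server.

    # Make sure snapd's core is current
    sudo snap install core
    sudo snap refresh core

    # Install Certbot and link it onto the PATH
    sudo snap install --classic certbot
    sudo ln -s /snap/bin/certbot /usr/bin/certbot

    # Request a certificate and let Certbot edit the Nginx configuration
    sudo certbot --nginx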
The certificate is saved at /etc/letsencrypt/live/grafana.sbcode.net/fullchain.pem, along with privkey.pem, so for me it's been successful. Now if I just visit grafana.sbcode.net in the browser, leaving off the https://, it automatically chooses HTTPS and I've got a padlock, so that looks much more professional. Again, it's optional whether you do this; it may not be important for you, but if you are managing a Grafana service for clients, it's important that it looks professional, and this is one of the things you can do. Having a domain name is also useful when it comes to sending email alerts, because your email provider will do a reverse DNS lookup on your IP address, and it should resolve to the same name as the server that sent the email; I'll show that later. Now, to understand what Certbot has done, let's clear the screen and look at the configuration file that was in /etc/nginx/sites-enabled: cd /etc/nginx/sites-enabled, then ls, and we can read the file grafana.sbcode.net.conf with cat, which lets us read text files. We can see that Certbot has modified the file a little. What we originally wrote, to serve grafana.sbcode.net, is down at the bottom, but it now returns a 404 Not Found by default; before it gets to that point, if the host equals grafana.sbcode.net, it does a 301 redirect back to our browser, which tells the browser to use https:// instead, with whatever the host and URI were. So port 80 is still being used, but only to return a 301 redirect pointing to the HTTPS version of the website; Certbot wrote all of that for us. Further up, the server_name is grafana.sbcode.net and the proxy_pass to http://localhost:3000 is still there, so that's still good. We're now listening on port 443: the listen [::]:443 ssl line is the IPv6 version and listen 443 ssl is the IPv4 version. I don't have IPv6 enabled on my Digital Ocean server, so that line is essentially ignored, but you may have it on your server one day. There are also more directives pointing to the locations of the certificates we just installed, fullchain.pem and privkey.pem. So Certbot is doing a whole lot of things for us. These certificates don't last very long, but behind the scenes snapd and Certbot make sure the certificate gets renewed when it is about to expire. Excellent. If you're using AWS, you'll probably have to create a new inbound rule in your security group for port 443; you should also leave port 80 open, and it's safe to remove the rule that was created at the beginning for port 3000. Since I'm using Digital Ocean, port 3000 is still open, so http://grafana.sbcode.net:3000 would actually still work, and I don't really want that any more; I can create a firewall rule to block port 3000 on my Digital Ocean server. Since I'm using Ubuntu, I'll clear the screen and list any iptables rules I have with iptables -L; there are none. The first thing I want is to still allow port 3000 to be called internally, because we have the Nginx proxy forwarding to localhost:3000. So I can use the command there: iptables, append to the INPUT chain, protocol TCP, source 127.0.0.1, destination port 3000, accept, and then drop everything else.
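Those two rules look roughly like this. Note, as a caveat, that rules added this way are not persistent across reboots unless you save them with something like the iptables-persistent package:

    # Allow local connections to port 3000 (the Nginx proxy forwards to localhost:3000)
    sudo iptables -A INPUT -p tcp -s 127.0.0.1 --dport 3000 -j ACCEPT

    # Drop port 3000 traffic from everywhere else
    sudo iptables -A INPUT -p tcp --dport 3000 -j DROP

    # List the rules to confirm
    sudo iptables -L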
Press Enter, and anything else trying to reach port 3000 will now be dropped. With iptables -L we now have rules for port 3000: accept it when it's requested from localhost, and drop the connection when it's requested from anywhere else. So now if I try to visit grafana.sbcode.net:3000 directly, it will eventually time out. That's using iptables, because I haven't enabled the firewall in Digital Ocean; unlike AWS and some other cloud providers, Digital Ocean doesn't force a firewall in front of your servers automatically, but you have the option to manually block ports using iptables. So that times out eventually, but if I just visit the plain address in another window, and it doesn't matter if I type http because it gets automatically forwarded to https, it works and looks much more professional. Anyway, it's all optional. In the next section we'll create our first data source. Excellent.

7. Create our First Data Source: Now let's create our first data source. It is going to be the TestData DB data source; the purpose of this data source is learning, so it's all fake data. First, a little about Grafana: Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored. The data is not stored in Grafana; instead, you create data sources in Grafana, which act like query adapters that query the data where it lives in the underlying system, for example in a SQL server, via Loki which can read log files, via Prometheus which can read many other types of data through its exporters, or via InfluxDB, Zabbix and Elasticsearch. So the data you're querying is still stored in the underlying system, whatever that is, but in Grafana you create data sources, which are query adapters that request data from that underlying system. The first data source we'll create in Grafana is the TestData DB data source, which is just fake data created randomly in real time. Back in Grafana, on the menu down the left, click the gear icon: Configuration, Data sources, Add data source, and scroll all the way to the bottom to select TestData DB. Press that square; you can rename it anything you like, but the default name is perfect. We can Save & test, and it says the data source was updated and is working, excellent. In the Dashboards tab of the data source we get a default dashboard that we can import, so we'll import that. And in the setup notes we can visit the GitHub repository where the source code is stored if you want to see what it looks like; we don't need it, but you can look at it if you're curious.
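The course configures data sources through the UI. As an aside, Grafana can also provision data sources from YAML files on disk. A minimal sketch follows, assuming the built-in TestData DB plugin id is testdata and the default provisioning directory; check the provisioning documentation for your Grafana version before relying on this:

    # Hypothetical provisioning file; Grafana reads these at startup
    sudo tee /etc/grafana/provisioning/datasources/testdata.yaml > /dev/null <<'EOF'
    apiVersion: 1
    datasources:
      - name: TestData DB
        type: testdata
        access: proxy
    EOF

    # Restart Grafana so it picks up the provisioning file
    sudo service grafana-server restart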
So that data source is now set up. If we go to the dashboards icon, Manage, down here there's a new one called Simple Streaming Example; click it to view it. This is the TestData DB data source in a very simple dashboard that we can experiment with. You can just press the buttons on it to see what they do if you like; it doesn't matter if you break anything, we can always fix it up. For example, if I move things around and then try to exit, it asks me whether I want to save the dashboard. Let's say I did want to save it: I press Save, I can add a note if I want, and I can choose whether I'm saving it as a new dashboard or overwriting the existing one. Let's say overwrite the existing dashboard; the dashboard is saved and overwritten. It doesn't matter: if we go back to Dashboards, Home, down the bottom it shows my most recently viewed dashboards, and that one is how it was when I saved it. But if I decide I didn't like it that way and want it back the way it was originally, I go to Configuration, Data sources, back into that data source, Dashboards, and press Re-import, and that puts it back the way it was at the beginning. So if you break the dashboard, it doesn't matter. Going back to Dashboards, Home, because I recently viewed the dashboard it shows up down here, Simple Streaming Example, and it's back the way it was the first time I viewed it. So feel free to just press the buttons; we'll go into more detail as we progress through the next few videos. Excellent.

8. Panel Rows: Panel rows are about grouping your visualizations into rows. Up here there's an option, Add panel; click that and there's an option to add a new row. It puts all the existing content into a row called Row title. We can change that if we like, to Row 1, and press Update, and there we go, Row 1. We can add another row: Add panel, Add a new row, and once again it defaults to Row title; I'll change that to Row 2 and update. We can move one of the visualizations into the other row, so I'll drag that into Row 2, and there we go, just underneath, and it's now in Row 2. I can toggle the row open and closed, like so, same with Row 1. We can take a visualization out of a row: I'll just lift it above, and it's now out, so Row 2 is now empty. I'll put it back into Row 2, and I'll just change its dimensions. I can have multiple visualizations inside a row, so Row 2 now has multiple visualizations; I'll put that one back down into Row 1. I can delete Row 1, and if I do, its visualization will either go into the row above it or into no row at all, depending on whether there is one there or not. When deleting I can choose to delete the row and its contents, or just the row only; I'll delete just the row, and now that panel is part of Row 2. If I delete Row 2, again I can delete the row only or everything; I'll delete the row only, and now those panels are not in any row. And of course I can set it all back to the way it was originally by going into Data sources and re-importing the default dashboard; that's good enough. One more thing about panel rows: if I add a panel again and add a new row, so that everything is back in a row called Row title, there's another option here called Repeat for. Repeat for is about creating dynamic panels using template variables; that's quite an advanced topic and we'll come to it later. Excellent.

9. Panel Presentation Options: Let's look at some panel presentation options. When you open the drop-down on a panel title, you get some options, View, Edit, Share and so on, and there are keyboard shortcuts, such as V for view. If I move the mouse over a visualization and press V, it goes full screen; to exit I can press V again, or Escape. Next, Edit: if I press E while hovering over a visualization, it takes me to the edit panel for that visualization, where I can change all its settings; we'll look at that in the next few videos.
But for now, let's just go back. We also have Share: we can share a visualization by passing a URL to a friend via email or by sending a snapshot; we can embed it using some iframe HTML code; or we can create a library panel. A library panel is an option in Grafana that allows users to create reusable panels, where any change made to one instance of the library panel is reflected on every dashboard and every other instance where the panel is used; it's there to help streamline reuse of panels across multiple dashboards. Cancel that. Other options: Explore. This is similar to editing, but it uses the Explore feature, where you can vary the query any way you like without affecting the panel or the dashboard; we'll use Explore quite a lot in the next few videos. Go back. We also have Inspect. We can inspect the data, which is the actual data being used to draw this graph; it's a snapshot and is not updating in real time. There are also the stats, statistics, and the Panel JSON, which is the code that describes the panel. We can actually change this JSON inside the form here: for example, if I find where it says title, Angular Graph, I can change the name to Something else, close that and press Apply, and now the title of the panel is Something else. Looking at the Panel JSON again, many things can be changed there, but it's a pretty advanced way of doing it; it's easier to use the options provided by the Edit window, which we'll get onto in the next few videos. There's also Query, the query inspector. This won't show much on the TestData DB example because there isn't any real query happening, just random numbers being generated, so we'll see it when we use a real data source. And More: we can duplicate a panel, so I could duplicate this one or that one. I'll duplicate the first one, which just creates a new copy of it; they're exactly the same, and I could change its properties if I liked using Inspect, Panel JSON, Something else 2, Apply. There's also Copy to clipboard; it says copied to clipboard now, and then if I go up to Add panel, I can paste the panel from the clipboard, so now we have four panels, two copies of Something else 2. I can delete that one, so it's removed. Under More you can also have a look at Create library panel if you want. These visualizations have a legend on the right there, and I can toggle that; it's off now, and I can toggle it back on. Excellent. So these are options you may not have picked up on yet, like the fact that you can press keys on your keyboard to perform certain actions such as view, and there we go, full screen. In the next few videos we'll start looking at more of the detail of how to modify the visualizations. Excellent.

10. Dashboard Versioning: Let's look at dashboard versioning. When you make changes to dashboards, or create dashboards, you have the option of saving your changes as you progress. For example, I'm going to save this how it is right now: Save dashboard, and I can call it anything I like, my copy, then Save, and I'm going to save it as a copy of the Simple Streaming Example. This works better when you create your own copy of an existing template first, or you've created your own dashboard from scratch.
I'm going to save it into the General folder, Save. So there's a new dashboard in the system now called Simple Streaming Example Copy; going to Dashboards, Home, we see there are two dashboards. Now I'm going to work on this one without affecting the original. Let's create a duplicate of a panel, so duplicate that, Something else, and I'm just going to change its name via Inspect, Panel JSON, to Something else 3, and Apply. I'll save that change, with the note "added something else 3", Save, very good. Now let's say I decided I didn't want the complicated changes I'd made, or I'd broken something I wished I hadn't. I can go into the dashboard settings up there and press Versions, and there are two versions: the original copy I made, and the one I just saved, called "added something else 3", which is the latest, the one being used right now. So I can restore the version before it: Restore, yes, restore to version 1. I'm now on version 1, and if I go back to the example it just has Something else and Something else 2. But let's say I changed my mind and actually want those changes again; I can always go back and reapply them. Go back to "added something else 3", Restore, yes, restore to version 2, and there we go, that's done. So it's back there, the way it was when I saved it and added the note. Just be sure to make regular saves, so that you can go back if you need to. Excellent.

11. Graph Panel : Visualisation Options: Now we're going to look at the visualizations in more detail, starting with graphs, and more specifically time series graphs. There are many ways to style your graph, but essentially they are all graphs showing time series data. Time series data, if I look at one of these, for example, and inspect the data, consists of a timestamp and a value; it's just timestamps and values, and that's time series data. Behind every single one of these graphs is something like that: a series of timestamps with values. That's what we're seeing here, and perhaps even multiple time series; for this one there are five different sets of time series data behind this particular graph. Anyway, in your copy of Grafana, and I'm on the copy I created, go to Dashboards, Manage, and we'll just create a new dashboard and add an empty panel. By default it uses the default data source, which we've selected as TestData DB, with one time series scenario, Random Walk, and that's the default graph; we can refresh it and the random walk changes every single time. TestData DB is good for learning; it's just fake data. Now, if we look at the Scenario drop-down, there are many scenarios in there, and not all of them return time series data that can be shown in a graph like this. The ones we can use are Random Walk; Random Walk (with error), which is the same as Random Walk but also shows some error information if you want to see how that works; one called CSV Metric Values, which is hard-coded values (if you look at it, that's 1, 20, 90 and so on, the numbers down there, and we can change them: I could make one 10 and it updates the graph automatically, or make it 100, or even add more values), which makes it a good option for testing; and also one called Streaming Client. Streaming Client is continually updating data.
You can sort of see it being drawn there, but let's zoom into it so we can see it better: just highlight a section with your mouse to zoom in, and there we go, it's updating continually; that's the Streaming Client. You can use those scenarios inside the graph visualization to help you while you're working on your graph style. Also, as well as zooming with the mouse like that, you can use the drop-down up here to select a certain time range, for example Last 5 minutes, and there we go, we see the last five minutes, or specific times, from now-5m to now, or from one date to another date; but it's easiest to just pick something like Last 5 minutes. I'm going to go back to just Random Walk for now and look at the different options we can use to style the graph, on the right here under the panel options. This top drop-down: in previous versions of Grafana, the option to choose your visualization was one of the options further down this section; it has now been moved right to the top. The first one, which is the most common you'll see, is called Time series, which is basically the time series graph. If you're more familiar with version 7 or earlier of Grafana, you still have the old Graph visualization here that you can use, but I'll show the new Time series options. So Time series is what we have selected already, and these are its options; if I change the visualization, for example to State timeline, we get different options, and then I'll go back to Time series. Here, under Title, I can write anything I like, My panel title, and it updates up there; if I went out, it would say My panel title. In a previous video I changed the title by editing the Panel JSON, which is a little more complicated; it's actually easier to just use the Edit option and change it here. I can add a description if I like, ABCD, and that shows up here in the info icon, to help your users understand what's going on. Transparent background takes away the grey panel background. Panel links: we can add a specific link, anything that describes the graph further, for example My website, and open it in a new tab if we want, then Save; that shows up at the top left there, and Website opens my page in a new tab, whatever URL you think is important. Next, Repeat options: this is a more advanced subject we'll come to later. There are no template variables found yet; we'll look at template variables later in the course, and they're useful for dynamically creating graphs. Tooltip: when I hover over parts of the graph, it shows the date and A-series, 77.2 there, 74.4; that's the tooltip. At the moment I only have one series, so we're only seeing a single time series in the tooltip, which means All doesn't have any visible effect, and Hidden just hides it. In order to see All have an effect, I'll add a second time series down here by pressing Query, so I have a new time series called B, also using Random Walk. That's good; if it doesn't show up, just disable and enable it again, or press refresh on the dashboard. So I'm seeing two time series up there now. Going back to the tooltip mode: Single just shows whichever series is closest, A-series or B-series.
Now if I press All, it will show both time series, A-series and B-series. Next, the legend. The legend down here at the bottom left: its default mode is List, then Table, so it's shown in rows, or Hidden. Placement: bottom or right. And legend values: we only have two things written there, A-series and B-series, but we can also show more statistics. So Last: I'm showing the last value, then the first and the max value; there are quite a lot of values to choose from, like distinct count. Last, first, max for each series. If we show the legend placement on the right it's easier to read, or even as a table on the right or the bottom. Those are the legend options. Graph styles: we can draw our lines in a particular style. This will be more obvious if I zoom into a section, so I'm going to zoom in, and further than that, so now-1m as the time range, and change that; still not that obvious. Now now-1s; it's still not that obvious on this particular graph, so I'll use CSV Metric Values, which will be easier. So down here for series A I'm going to use CSV Metric Values and just use the default numbers it gives me. There we go. Line interpolation: Linear is a straight line between each point; then Smooth (curved), Step before and Step after, which position the step at the point differently, so for the first one the point sits at the start of the step and for the other the dots sit at a different spot. Line width; Fill opacity, the area below the line; Gradient mode: Opacity, so it fades out, Hue, and Scheme. Scheme we can set further down and I'll come back to that; right now it's just a rainbow colour. Line style: Solid, Dashed, Dots; I'm going to use Dashed. Connect null values: this matters when the time series doesn't return any value for a particular time, but the timestamp still exists. For example, the time series might return data with something missing in the middle there, or an explicit null. Since this is Random Walk it just regenerates random values; for series B I'm going to delete that for now so it's less confusing, and refresh. Okay, so once again there's a gap, or a null. Connect null values: Always, and the values are now connected even though there's a null in between; Never; or Threshold, so it only connects when the gap is within a certain time period. Let's say one hour; let's try one minute; now one second, or even smaller, and there we go: it will join the dots if the time between those two points is less than the threshold, otherwise not, or I can say Always or Never. I'll put that back. Show points: Auto, Always, Never. Always shows the points, the circles you can see; with Auto you see a point when you hover over it and have the tooltip enabled. And Point size. Stack series: this is about having multiple time series, so for this I'm going to put series A back to Random Walk and create another series, but instead of creating a new query and selecting Random Walk, I'm going to duplicate. Okay, so I've got A and B now; refresh that, and they're both there. With Stack series off (normal), the tooltip shows the A-series value and the B-series value separately; stacked, the two values are added together, so together they total just a little bit less than 130 there. So they can be off, stacked normally showing the real summed values, or stacked to 100 percent, which is more of a proportional graph. Next in graph styles, the axis.
The axis is automatically positioned on the left, like so, or I can have it on the right, there's my axis on the right, or hidden. Label: optional text, I can put anything I like, ABC, and ABC appears, just describing what the axis means. Width: auto, or I can hard-code it; auto is easiest. There are also Soft min and Soft max settings. Show grid lines on or off, there are my grid lines. Scale: linear or logarithmic, base 2 or base 10. These might be better shown using CSV Metric Values, so delete B and change the scenario to CSV Metric Values. So if we go down again: logarithmic base 2 or base 10, we can see it like that, and back to linear. That's the axis. Standard options: Unit. Here we can select what our units are, and it essentially just writes the text, square meters for example, so 0 m², 1 m², and the same on the axis there. Min and Max are about the range shown on the graph: let's say I want the minimum to be 50 square meters, so it starts at 50 there, and the max to be 60, so we're only showing that part of the range, between 50 and 60. Decimals: the data can have long decimals, values like 1.12345, 20.345679, 0.456 for example. Show that again: A-series says 20.3 despite the value being 20.34567 down there. With the Decimals option I can say show me two decimal places, and A-series is now 20.35 square meters. Next, Color scheme: Classic palette is the rainbow option I was showing before, so if I switch the graph style Gradient mode back to Scheme we can see the rainbow. Going back down to Color scheme, we also have Green-Yellow-Red, Red-Yellow-Green and so on; there are several to choose from, and you may find one you like. Moving on: Thresholds. Show thresholds is off at the moment, and we can turn them on as lines. So we have one threshold at 80 there, and we can change that to 50 for example, and there is a visual threshold. They are just there to help you quickly see whether something is above or below a certain number; here my value was 50. It could also be a percentage: Percentage means thresholds relative to the min and the max, so 50 percent of the way between the min and the max, which I could set above. We can also show thresholds as a filled region, so green and red, and I can change the colours to yellow or blue, like so, or show regions and lines together, and set the base back to green. Excellent. These are visual thresholds, not to be confused with alert thresholds, which we'll come to later in the course. Next, Value mappings. Mappings are about what text to show when a value is at a certain point. Let's add a value mapping, Add a new mapping, and choose Range for example (you could also use Value, Regex or Special): between 20 and 50 I'm going to show the text "danger", and I can set that colour to red; Update. So the tooltip now shows what it showed before, but the value in that range is red with "danger", while A-series stays green; this one here is also in the danger range. So that's an example of value mappings, and the different types there are Value, Range, Regex and Special. I'm going to delete that and Update. Now, Data links: add a link. These are a little more technical. When you click a data link, it takes some of the values that you have in your graph and sends them to the URL that you're clicking through to.
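As a rough idea of the kind of URL template a data link ends up with: the variable names below come from the data-link variable drop-down shown in the editor, so check that drop-down for the exact names available in your Grafana version, and grafana.sbcode.net is simply my own site standing in for whatever system you want to link to.

```
https://grafana.sbcode.net/?s=${__series.name}&range=${__url_time_range}
```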
For example, Title: "current value", and URL: https://grafana.sbcode.net/ followed by a dollar sign, which brings up a list of options for values you can send along to that URL. For example, I want to send the series name to the URL I'm sending the user to. This is useful if you want to create dynamic content on another website, and that site can read the values from the query string. So: series name, and I'll make it look a little more like a proper query string, s= followed by the series name. This is just an example; it's more technical and more applicable to people who are programmers. Open in a new tab, and save. Now, when you click on a value, it gives you the link that you just created. I called mine "current value" down there, but it could be called anything. If I click it, it opens a new window and the URL has the query string with s= and the series name as an extra parameter, and you can use those extra parameters to bring up important information that might live on a different system. Another example: if I edit that and add another parameter, value=, typing the dollar sign shows you the options straight away. I'll send something else across, say the whole time range that is selected, so value equals the selected time range. This is all about programming now; whatever is important for the other system is what you would send. Save that, and now if I click a point, there is my link, which I've named "current value" but could have named anything, and if I view the URL it was sent to, it says the time range equals from and to, and those times are actually the long (epoch) equivalents of a time, which can be converted to a date-time value at the receiving end. Anyway, I'm just showing you that these things exist; that's data links. I don't need it, so I'll delete it. Excellent. Now, I've shown you quite a lot about graphs already. You don't have to remember all of it or understand exactly what every option does; you can always come back to this video later, or delve a little deeper into any particular subject you want. A good place to see more examples of Grafana graphs is the website that I showed you at the beginning of the course, play.grafana.org. This is a website built by the Grafana developers that lets you test out all the different functionality of Grafana without worrying about actually breaking anything. So for example, if I want to know more about one of these graphs, this one with thresholds here, I can press E, or just press the edit option there, and understand how that graph is actually put together by looking at its different properties: legend, graph styles, axis, the different overrides (which we'll talk about in the next video), panel options, and also which data source the graph is using. These graphs use a different data source than TestData DB; we haven't set up Graphite, but you can see the play site has a lot of different data sources that you can learn a bit more about as well. Scroll a long way down to the bottom and we get to this dotted one; it's a random walk, so refresh, refresh, and let's choose a larger time range. There are some thresholds marked on that one, so if I go into the thresholds I can actually delete them.
And don't worry if you break anything on this play.grafana.org website. For example, I could press Apply and think I've just ruined the thresholds graph for everybody, but actually I haven't: if I refresh the website, look at the time series graphs again and scroll down, the thresholds panel is back as it was. So change as many things as you want. And if you go to the dashboards list, you can see there's a whole lot of other different types of graphs and options that you might want to look at in more detail; for example, I can press E on that one and learn all about it, and you can filter the list by pressing that option there. Excellent. Anyway, in the next video we'll look at the various kinds of overrides. 12. Graph Panel : Overrides: Okay, overrides. Let's start with a new dashboard: create dashboard, add an empty panel, and we're using Time series up here. For overrides we can add a field override; if you scroll through all the options you'll find it right at the bottom, the Add field override button, after all the other options. Now, consider all the options above as defaults: with a field override down here, you can override any of those values. They become more useful when you have multiple time series; with a single time series it's not really necessary, but I'll just demonstrate what the override does. For example, with colour: here in Standard options, scroll down to Color scheme and pick Single color just here. It defaults to grey, and I can press that circle to get a colour palette. I'll select orange, and now the line is orange. Now I can override that. It's not really necessary at this point, but just to show you it's possible: select Add field override, and we can match fields by name, fields with names matching a regex, fields with a certain type, or fields returned by a specific query. The easiest one to understand is Fields with name. So: Fields with name, choose A-series, which is the A-series just over there on the left of the graph, then Add override property, and all these different options appear, one of them being Standard options > Color scheme; they mimic all the default options of our visualisation. I'll select Single color, and it's grey, or I'll select blue this time. Okay, so now it's blue; the override is forcing it to blue. Despite the fact that I have orange selected as the single colour up in Standard options, that is now being overridden by the blue. If I try to change that colour to yellow it has no effect, or purple, or anything; no effect, because of the override at the bottom here: A-series, single color, blue. So, if you're ever changing default settings in a panel and it's having no effect, it's possible that there is an override in place. Now, there's also another way of creating this particular override rather than going through those drop-downs. Let me delete it, okay, so it's gone back to orange. You can actually click that little coloured line next to the series name in the legend and it gives you the same colour options. I'll select blue, and what it has done at the bottom here is automatically create a new override for us, A-series, single color, blue, overriding the default. Now, overrides don't make a lot of sense if you only have one time series; they make more sense when you have multiple time series.
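If you're curious how this looks behind the scenes, an override like that is stored in the panel JSON (Inspect > Panel JSON) roughly as in the sketch below. The field names are from memory of Grafana 8, so treat the exact shape as approximate and compare it against your own panel JSON.

```json
"fieldConfig": {
  "defaults": {
    "color": { "mode": "fixed", "fixedColor": "orange" }
  },
  "overrides": [
    {
      "matcher": { "id": "byName", "options": "A-series" },
      "properties": [
        { "id": "color", "value": { "mode": "fixed", "fixedColor": "blue" } }
      ]
    }
  ]
}
```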
So I'll just delete that override for now and create a new time series called B; I'll duplicate query A here. Okay, so there are two random walks; refresh so we can see them both. Now, Single color: I've used yellow, and if I change that colour it changes both time series at the same time. This is when overrides, and their purpose, become more obvious. Series A I will change to a different colour: Add field override, Fields with name, choose A-series, then I could pick Graph styles > Fill opacity or all kinds of things, all the default options but in another menu. I'll pick Standard options > Color scheme, Single color, yellow. Very good. Add another field override, Fields with name, choose B-series, Add override property, Standard options > Color scheme, Single color, and that can be green. There we go, two different colours, and I can add more override properties to this single field with the name B-series: Line style for example, dashed or dotted. Okay, so very quickly, that's what overrides are really about: overrides will override any default option you have, and they become more useful when you have multiple time series in a single visualisation. Excellent. In the next video we'll look at transforms. 13. Graph Panel : Transformations: Okay, let's look at something called transforms. Create a brand new dashboard, an empty panel, and that's the Transform tab just there: transformations let you join, calculate, reorder, hide and rename your query results before they are visualised. For this we'll update our query; we need two time series in order to do a transform on them, so let's duplicate A. We have two random walks now, very good, and refresh so that we can see them both, A and B, both Random Walk, and then go back into Transform. Now, the most common transform you'll probably use is Add field from calculation. By default this will just add those values and create a third time series, and there we go, it's done that: it has taken both of those rows, A-series and B-series, and added them together. We didn't actually make any decisions there, so we can be more explicit about what we want. Instead I'll use the Binary operation mode: field A, then plus, minus, multiply, divide, et cetera, with field B, so A-series plus B-series. That's the same thing really, taking A and B and creating a new series called "A-series + B-series"; we can call it anything we like using the alias, such as Total, and it has been updated to Total. Now, we can also Replace all fields, so it then only shows the Total, but I want to see all of them, so I'll leave that off. Apply that, and that's a very quick dashboard we've built of two time series plus a third that is the addition of both. Let's edit that and go back into Transform. Another thing we could do: if I put the mode back to Reduce row, I can select A and B again, so we're explicit about what we're using, and for the calculation I'll use Mean, the average. We'll call that Mean this time, and that blue line is now the average of those two lines. So I could have had three time series there; Replace all fields once again shows just the one line, or off keeps them all. Apply that, and here we go, the blue line is showing the average. Those are common transforms you might like to try, and I'll show you one more. Instead of modifying this panel, let's duplicate it: Duplicate, and now we have a second copy to the right.
They're random walks, so they will always show random data; every time I refresh it's a little bit different. There's also something I haven't shown you yet: this five-second setting here will refresh the dashboard every five seconds, but we don't need that right now. Anyway, let's edit that new panel, so E to edit. If we look at the Transform tab, we still have the existing transform because we duplicated it from the other panel. I'm going to add a second transform here, and that is Reduce: reduce all rows or data points to a single value using a function like max, min, mean or last. It doesn't work on the graph visualisation, so we need to view it as a table. We can press the Table view toggle there and see the data as a table, but when we save this, for example if I apply it now, it tells us the panel data doesn't have a time field. So if we go back into it and, instead of Time series up the top, select Table, it now shows as a table by default whether or not we have the table view toggled, and we can apply it and see it as a table on the dashboard. Let's continue editing. In the Reduce transform, the options are Series to rows, or Reduce fields into a single value; I'm going to use Series to rows, and for the calculations I want to show the last value, the first value and the difference. That's pretty good: our table now has a lot of extra information in it. This is random data, so it's not really that useful, but with real data it could be. There we go: we have a graph and a table alongside it that summarises the information in that same graph. It's all random walk data, so it's all random anyway; refresh that as well. Excellent. So that's really what transformations are about: taking something that you already have and applying a transformation over the top of it to create a new set of information. If you want to find out more about transformations, you can visit the Grafana transformations documentation page; it's quite in-depth, with a lot of information, and it takes a while to get through, but I've shown you a couple of ways of using them, probably the most common ways you'd use them anyway. Okay, excellent. 14. Stat Panel: Now let's look at the Stat panel. Create a new dashboard, add an empty panel, and up here change Time series to Stat. Very good. So I'm using Random Walk, and what the Stat panel does is show a summary of the data in some way. By default it's showing the last value there; it could show the first value in the list, min, max, mean, and there are several others to choose from. You can also show All values, which shows a new stat for every single value that it has, up to a limit, 25 by default; we could say 10 values, or 5. So it shows one stat for every single value in the time series, the most recent ones. I'll go back to Calculate and a single value, for example over the last five minutes, which is a different thing again; I'm using Random Walk, so it's going to be random every time I refresh. Now, the line we're seeing behind the number is there because we're looking at a time series. You can also use the Stat panel with just single numbers: for example, if I use CSV Metric Values, it has several numbers there, and if I select All values it shows one stat for every single number, so I could just show some plain numbers there. Very good. So there are many options for how you look at this.
So: orientation, horizontal, vertical or auto. Text mode: auto, just the value, the value and name, and so on. Color mode: Background lights up the whole background, which looks a little nicer. Graph mode: the sparkline. Sparkline mode is about when you use a time series as your data, so let's put the query back to Random Walk and go back to Calculate, and now we can see what the sparkline is doing: it's drawing that little graph behind the value, which is what a sparkline is. Going back to CSV Metric Values, let's come back to the defaults, and I'll put it back to All values. Okay, so like with all visualisations, there are Standard options: a unit string describing the value, for example teraflops under Computation, or square metres; Decimals, if you want decimals, for example 90.123, where the default display is one decimal place, so let's try two or three. There we go; I'll leave that as auto. Color scheme: From thresholds is normally how you'd use this, so if I go down to Thresholds here I'll add some new thresholds, for example 5, 10 and 20, and one more at 30. So values below 5 stay the base green, everything 5 and above picks up the next colour, there are no yellows because nothing sits in that band, 20 and above is blue, and 30 and above is orange. Thresholds can be absolute or percentage: if I say everything above 50 percent is orange, then 30 is less than 50 percent of the range, so it stays blue, and so on. Okay. Value mappings: we can add value mappings like always, so I can say the value 5 means "danger", Update, and now it says danger where the value equals 5. So you can see it's very similar to the graph panel, but the presentation is different. We also have Overrides, if you're showing multiple time series, and Transforms as well if you want them. Excellent. That's the Stat panel, very quickly; you will see the Stat panel used quite a lot on the dashboards that you download and install, and we'll see more of that as we go along. This one I'm actually going to save, because in the next two videos I'll show other very similar visualisations. Save, okay, excellent, a new dashboard copy. 15. Gauge Panel: Okay, so we just did the Stat panel; now I'm going to do the Gauge panel, which is very similar. First I'm just going to rearrange that so it looks like this. Now I'm going to duplicate the Stat panel down here and edit it, and at the top here I'm just going to change Stat to Gauge. So it's now gauges all over, like that, and you can see the gauge uses the same data behind the scenes, just a different presentation. You can look at these colours here as they change as you go around the gauge; they are the thresholds. So if I look at that and then go down to Thresholds, switching between Absolute and Percentage, you can see a bit better what's actually going on. Value mappings are the same as before: "danger", because I have the number 5 down there. We can change the orientation to horizontal; there are so many of them that they're all being squashed, so I'll change the limit to a maximum of one or two values, for example, or back to 25, switch between Calculate and All values under value options, and vertical. There are also options to show the threshold labels and markers, for example. Anyway, I'm going to turn those off and put it back. So that's the Gauge visualisation; the only real difference is the presentation. I'll apply that: same information, drawn differently. Excellent. 16.
Bar Gauge Panel: All right, let's look at another one that uses the same data but a different visualisation. I'll duplicate whichever one you want, duplicate down there, and edit it by pressing E. Instead of selecting Gauge in this list, I'm going to select Bar gauge, and there we go, little mini bar graphs. Horizontal or vertical; display mode Retro LCD, Basic or Gradient. Retro LCD looks good, and horizontal is pretty good as well. Let's apply that; it already looks very impressive. Edit it again and see what else we have: orientation, vertical, text size, standard options, thresholds as we set them before, value mappings ("danger" if the value equals 5), data links, and overrides and transforms. Excellent. Apply that and save it. In the next video we'll look more into the table visualisation. Excellent. 17. Table Panel: Okay, so let's look at the Table visualisation. Create a dashboard, add an empty panel, and up here change Time series to Table. Okay, so it's a table. For the data source scenario, let's change that to Random Walk Table, which gives us a few extra columns to look at, and I can drag the panel wider. This is still random each time you refresh it. Notice that just by resizing a column there it has actually created some overrides for us; you can leave them or delete them if you want, and it will go back to the way it was. Individual column widths come through as overrides; anyway, we'll look at overrides more in a moment. Panel options: of course we can change the title, set a description, and use a transparent background. Table options: Show header off or on; I'll put that back on so the table makes more sense. Column width: the minimum column width is 150, and we can set it to 100, which makes them all a little narrower; there's no override added for that because it affects the table as a whole. Column alignment: auto, or align left, centre or right; I'll keep auto. Column filters: that adds a filter icon at the top that allows you to order by a value, ascending or descending, or even select a particular set of values. I'm going to clear the filter; I don't need it for now. Back to the table. Standard options: Unit, we can set anything we like, arcminutes, so all the numeric values show as arcminutes; I'm going to leave it as a plain decimal. Color scheme: From thresholds, which we'll look at in a moment; thresholds let us use colours to change the style of the table. Going back to the table options, Cell display mode: Auto; Color text, so the text is coloured from the threshold; Color background, gradient or solid; Gradient gauge, LCD gauge or Basic gauge; or JSON view, which shows the value as it appears when it's entered into the system, as JSON. I'm going to put it on Color background (gradient). Okay, so now we can modify the thresholds, and I'll add another threshold. In this case the date column behind the scenes is actually a long number, which is very large; when you convert a date to a long number it's in the millions, way above 83, so that's why that column is showing up as yellow there. We'll come on to managing individual columns in a moment. So, if I want to see different colours among the Min and Max values, for example, I can choose everything 84 and above to be red, so everything under 84 is green, or change the boundary to 83, and make everything from 84 down yellow; a number below 83 then falls back to the base colour.
There we go, so you can play around with those threshold values and colours. Next, Value mappings: like I was saying earlier, we can add a value mapping for anything, such as a range, say between 83.1 and 83.3 show the text "broken", and we can set a colour for it. Okay, so we've got one cell showing "broken"; it's just an example, so I'm going to delete that and Update. And Data links: we can add a link, for example "info", and link to a web page, for example my Grafana documentation page. If I want, I can add a query string, so something=, and I can use any of these value options, say the numeric value, and open in a new tab; Save. So now these numbers are links, and if I click on any of them, say 86 there, it opens a new browser tab and the query string has something= followed by the actual number behind the cell; even though it says 86 there, that's been rounded automatically to one decimal place by the Decimals setting. I could say show 10 decimal places, for example, and then you see 85.9685... and so on; I'll put that back to auto. Okay, let's apply that for now, and that's a table on our dashboard. It does look a little messy, but we can improve it by applying these effects to individual columns only. So let's go back in by pressing E. First, put it back the way it was before, cell display mode Auto, and I'll remove the data link but keep the thresholds for now. Let's add an override for an individual column: Add field override, Fields with name, choose one of the numeric columns, Add override property, Cell display mode: Color background. Okay, so that column is now highlighted. I'm going to do the same thing for the Info column: Add field override, Fields with name, choose Info, Add override property, Cell display mode, Color background (gradient). Excellent. Actually, I'll make the first one an LCD gauge, so go back to override 1 and use LCD gauge; that looks quite interesting. Now with the Info column I can show "down" as red, for example, so I'll go back to override 2 and add another override property, a value mapping: when the value is "down", set the colour to red, Update. So wherever it sees "down" it's now red, otherwise the default green. Okay, excellent. Now I want Min and Max to be centre aligned, so I'll add another field override, and this time I'll use Fields with name matching regex. The regular expression for that will be Min|Max; the character in the middle is a pipe, and you'll find it on your keyboard (on mine it's next to the Z key and you press Shift to get it). So that's my override for Min and Max using a regex, and I'll add the override property Column alignment: center. Now just Min and Max are centred. So you have the ability to do all kinds of overrides on individual columns. Okay, let's apply that and we can see it; that's already quite an interesting-looking table. Excellent, that's the table visualisation; we'll see it again later. Excellent. 18. Create MySQL Data Source, Collector and Dashboard: Okay, so enough of looking at visualisations with the TestData DB; let's use a real data source. The first one will be a MySQL data source, we'll also need to install a collector for it, and then we'll install a dashboard. Okay, so what we can do is get ourselves a dedicated MySQL server.
Don't use an existing production server; for now we can create one using the DigitalOcean coupon from earlier, or you may have a different cloud provider set up. Okay, so looking at the diagram so far: my Grafana server is at https://grafana.sbcode.net, it has an IP address, and ports 80, 443 and 22 are open (22 for SSH). We've been playing around with the TestData DB data source, and we're now moving on to the MySQL data source, which is one of many. I'm going to install a MySQL server first, then install a collector. That collector will then prepare tables in a time series format that we can use in Grafana. Okay, first thing, let's get a MySQL server. I'm using DigitalOcean, so I'll create a new droplet. Many cloud providers have a specialised managed database option; I'm not going to use that. I'm just going to create a droplet, install Ubuntu 20 at the minimum spec, and manually install MySQL onto it. I've picked a small plan, and I can put it anywhere I like, Amsterdam for example. I'm going to use my SSH key (I already have one; you can use a password if you want), call it "mysql", like that, and create the droplet. Okay, so that's the public IP address I've been given; copy that. I'm now going to set it up in PuTTY, which is what I use for SSH: IP address in there, I'll call the session "mysql", set the authentication to my private key because I'm using the SSH key method, and change the appearance to a larger font because it's hard to see otherwise. Before I open that I'll just save it. Okay, so there's "mysql", and I can open it up. This is my new MySQL server, and I'm logging on as root. So, root at mysql: this is another server, so I've got two servers now, a Grafana server at that IP address and a MySQL server at that IP address. I'll update my documentation; very good. I need to have ports 22 and 3306 open, but more about that later. Okay, the first thing you normally do when you get any new server is update the apt cache so it knows about the latest packages. Now we're going to install the MySQL server package: sudo apt install mysql-server, and press yes. I'm creating a brand new MySQL server; I don't want to use a production server just yet, I can do that later, so I recommend not using your production server, just something we can experiment with and then delete when we're finished. Okay, that's done. The next thing I'm going to do is run the secure-installation command from my documentation; this is a tool we can use to make sure our MySQL server is locked down in a secure manner. Press Enter. I'm not going to use the validate-password option, because I'm teaching here and I want the password to be nice and simple; if you want really complicated passwords, press yes for this, but I'm going to press no (any other key, N for example) because it would create problems while learning. I'm just going to set a simple root password, and that is just the word "password", nothing complicated. Then: remove anonymous users? Yes. Disallow root login remotely? Yes. Remove test database? Yes. Reload privilege tables? Yes. All done. Let's check its status: very good, active (running). Now, it's important to know what version was just installed, and for that you can run mysql -V, with a capital V: version 8.
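Collecting the shell steps from this part in one place (paraphrased from what's typed on screen; the secure-installation command name and the service command are the standard Ubuntu ones, so treat them as my assumption rather than a literal quote from the video):

```bash
sudo apt update                   # refresh the apt cache
sudo apt install mysql-server     # install MySQL 8 on Ubuntu 20
sudo mysql_secure_installation    # the lock-down wizard described above
sudo service mysql status         # check it is active (running)
mysql -V                          # confirm the installed version (8.x)
```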
That's important to know because we'll use it in a moment. Okay, so we now have the server; that's my MySQL server's IP address, and your IP address will be different. The next thing I need to do is install a collector. All data sources have some kind of collection process going on, preparing data from the data source and saving it into a format Grafana can quickly retrieve from. That will usually be a table of some sort that contains rows with a timestamp and one or more values. For this MySQL data source there are several options to choose from; I'm going to use a dashboard and collector that you can get from the Grafana dashboards website. The dashboard I'll use is the popular 2MySQL Simple Dashboard from this link here. Open that: this is the Grafana dashboards page, scroll down, I have the MySQL data source selected from this list of many, and it's this one here, 2MySQL Simple Dashboard. There are several others you could use, but I've always found this one to be the best, and there are some sample images here of what it will look like when we're finished. Its dependencies are a minimum Grafana of 8.1.2, and it uses the visualisations listed down here. Now, this dashboard requires a specific collector, and you can see that written in the reviews section down here: you must use the collector from there, and I have a link to it as well. The collector is on GitHub, and since I'm using MySQL version 8 I need to download this script here, my2_80.sql; if you're using version 5 of MySQL, download the other one. In my documentation I've prepared a wget command that you can copy, and it downloads my2_80.sql onto the MySQL server. Right-click to paste: wget raw.githubusercontent.com, user, et cetera, the my2 collector repo, master, my2_80.sql, Enter, and that's downloaded, 100 percent, very fast. We need to edit that file, so sudo nano my2_80.sql; nano is a text editor for Linux, and we're now sitting in my2_80.sql. If we scroll down, what the script actually does is create a database called my2, and it creates some tables in that database where it will store statistics about our MySQL server. These are the commands it will create and run; if you're good at SQL you might find this very informative, but if you don't know SQL you don't really need to understand it to follow along. What the collector is essentially made from is a stored procedure which collects the statistics, and it will run every ten minutes. Now, right at the end, we need to uncomment the last three lines. They create a user called my2 at %, where % is a wildcard for any host, identified by whatever password you like; I'm just going to keep it simple and use the word "password". Then GRANT ALL, meaning all permissions, ON my2.*, meaning all the tables in the my2 database, to the my2 user at the wildcard host, because the my2 user is going to be querying global statistics about the MySQL server and writing them into those tables. It also needs SELECT permission on the performance_schema database, all tables, for that same my2 user. Now, this is all about the collector, so Ctrl+X to save, yes. What I've just done is install the script for the collector; when we run the script, it will create the stored procedure.
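For reference, the three uncommented lines at the end of the script look roughly like this (quoted from memory of the my2 collector script rather than the screen, so check the actual file, and obviously substitute your own password):

```sql
CREATE USER 'my2'@'%' IDENTIFIED BY 'password';
GRANT ALL ON my2.* TO 'my2'@'%';
GRANT SELECT ON performance_schema.* TO 'my2'@'%';
```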
The stored procedure will run every ten minutes, executing a command called SHOW GLOBAL STATUS and storing that data into a table in time series fashion so that we can graph it with Grafana. The last three lines we uncommented were about creating that user called my2, with the GRANT SELECT on everything in performance_schema. Now we need to run the script, which we can do with the mysql client, so copy that line just there, and on your MySQL server run mysql < my2_80.sql and press Enter. That has just run the MySQL script we edited with nano, and it will have created the new user. We can now open a MySQL prompt and do some simple tests. Just type mysql, like that, and Enter, and it presents you with the MySQL prompt, the mysql> with the greater-than sign. We can now start typing SQL commands. The first one: SHOW DATABASES; and we have several databases on this SQL server. The new one we just created from the script is called my2, and it has several tables that our collector will be writing to and Grafana will be reading from. There's also the performance_schema database that already existed on the MySQL server, and we've created that my2 user which can read from performance_schema and summarise information into the my2 tables. So the my2 user and the my2 database are named the same thing; just be aware that this is the database called my2, and there's a user we also created called my2 with SELECT permissions on performance_schema. Okay. Now, an important part of the collector script is that it uses the event scheduler to run the procedure every ten minutes, so copy that line and paste it in: SHOW VARIABLES WHERE variable_name = 'event_scheduler'; If it says ON, that is good; if it says OFF, as it used to in older versions of MySQL, you would have to do some things to enable it, but we don't have to here. Okay, next line: let's look at some information about the users in our database, SELECT host, user FROM mysql.user; So we have some built-in users that MySQL uses, and we also have our own one that we created, my2; that's the specific user the collector uses to read the performance schema and summarise it into the my2 database that we'll query in Grafana. Next, let's look at the my2 database: USE my2; switches to that database, then SHOW TABLES; finished with a semicolon. There are two tables in it, called current and status, and the collector saves data into those two tables. Let's see what's there now: SELECT * FROM current; finished with a semicolon, and there's some data there. We can do the same with SELECT * FROM status; and there we go, there's data in that one too. Now, if you want to exit the MySQL prompt, just type quit, and we're back in the normal Ubuntu bash prompt. Okay, so that's very good: we now have the collector running and there's data there. Next we can go into Grafana and try to set up a data source. It's not going to work completely at first, but it's good to see what kind of problems we'll have, so that we know how to fix them.
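To recap, these are the quick checks from this part, run at the mysql> prompt (all taken from the steps above):

```sql
SHOW DATABASES;                                          -- my2 should now exist
SHOW VARIABLES WHERE variable_name = 'event_scheduler';  -- should report ON
SELECT host, user FROM mysql.user;                       -- the my2 user is listed
USE my2;
SHOW TABLES;                                             -- current and status
SELECT * FROM current;
SELECT * FROM status;
```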
So, in Grafana, go down to Configuration, Data sources, and Add data source; scroll down until you find MySQL and select it. This is like the TestData DB setup, but instead it's about connecting to our MySQL server: the Host will be the IP address of the MySQL server followed by :3306. Now, these settings aren't going to work initially, but I'll show you what the problems will be. The Database we'll be connecting to is called my2; that's the database we just created when we ran the script, and our collector is saving data into it. The User we'll use is called grafana, with a password; I'm going to keep it simple and use the word "password", which you can see if I click the eye icon, but you can use something a lot more complicated. Now, this user doesn't yet exist on our MySQL server, so we'll create it as well. Also, depending on where you got your MySQL server from and how it's installed, we won't be able to connect on port 3306 yet either, and we'll get to that too. Let's just Save & test and see what we get. Okay, so "data source updated", but we've got a problem connecting to the server; we'll resolve that. There are several things to do, and one of them is to create a user called grafana on that server. I'm creating a new user because it's the advised approach: Grafana does not validate that queries are safe, so queries can contain any SQL statement, for example switching databases, dropping tables, or doing any other malicious thing, using Grafana as a vector. So what you do is create a specific user with minimum permissions, for example SELECT only, on only the database and tables you want to query. I could have used the my2 user, but the my2 user has broader permissions, because it can also read from performance_schema and insert data. So I'm creating a specific user called grafana with only SELECT permissions, on all the tables in the my2 database. First, let's create the user. Back on my MySQL server, I log back into the MySQL prompt by typing mysql, Enter, and we create the user: CREATE USER 'grafana' at an IP address, IDENTIFIED BY 'password'. We need to know the IP address of our Grafana server, so I paste that in; the IP address of my Grafana server is the value I saved earlier. So I'm creating a new user called grafana at that IP address, which will be used when connecting to the MySQL server from my Grafana server, and that's the username MySQL will expect, identified by the password I'm using, as simple as possible, the same password I entered in the data source form before. Press Enter, and we have a new user. Now I grant SELECT to that user on all the tables in the my2 database: GRANT SELECT ON my2.* TO that grafana user. I'm allowing the grafana user to read all the tables in the my2 database. Press Enter, then FLUSH PRIVILEGES. And before we go, we can verify the user exists by typing SELECT host, user FROM mysql.user; and there we go, we have a new user called grafana, and that's the host it will be connecting from. So, quit.
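Putting those statements together, with a placeholder where your own Grafana server's IP address goes and your own password:

```sql
CREATE USER 'grafana'@'<grafana-server-ip>' IDENTIFIED BY 'password';
GRANT SELECT ON my2.* TO 'grafana'@'<grafana-server-ip>';
FLUSH PRIVILEGES;

-- verify the new user exists
SELECT host, user FROM mysql.user;
```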
Now, that's not all the problems solved yet. Let's try connecting again to see what error we get now in the data source: Save & test, it was updated, but we still have a problem. By default, when you install MySQL it won't allow external connections, so my Grafana server at that IP address isn't able to connect to the running MySQL process. To allow remote connections, open the MySQL configuration file using the command from my documentation, sudo nano /etc/mysql/my.cnf, and scroll down. This section here binds MySQL to all IP addresses on the server, the external IP address as well as localhost, because right now, by default, it's only bound to localhost. Save that: Ctrl+X, Y for yes, Enter. Now restart MySQL: sudo service mysql restart, and check its status; very good, active (running), Ctrl+C to get out of there. Now, if we try again to connect using the MySQL data source, it should be okay: Save & test, database connection OK. So, when connecting to external data sources you're going to hit lots of different issues around connectivity and permissions. On my MySQL server I have an IP address with ports 22 and 3306 open; I'm using an unrestricted Ubuntu server with no port blocking by default, so 3306 works already. I've created a user called grafana at the IP address of my Grafana server, because that's how MySQL sees the connection, and I've bound MySQL to 0.0.0.0 so that it also listens on the external IP address. Now, if you're using AWS or another cloud provider, port 3306 will probably also need to be opened in your security group settings, or however your cloud provider handles it. I didn't need to do that on my DigitalOcean server because it's already open by default.
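To recap the change on the MySQL host, it comes down to something like the lines below. The exact file can differ between installs (on newer Ubuntu packages the bind-address usually lives in /etc/mysql/mysql.conf.d/mysqld.cnf rather than my.cnf), so treat the path as approximate and follow the course documentation:

```bash
sudo nano /etc/mysql/my.cnf
# under the [mysqld] section, bind to all interfaces instead of just localhost:
#   bind-address = 0.0.0.0

sudo service mysql restart
sudo service mysql status
```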
Okay, so that's good; we can get out of that now. If we go to the Explore tab, click it and select the MySQL data source at the top, we can now run queries against that my2 database. The MySQL query wizard here is quite hard to understand at first, so instead go straight into the Edit SQL option; if you're familiar with SQL, the statement will make some sense to you. I'm not going to run that default, though; what I want to run is this section here from my documentation, so copy that, replace the statement, and Run query. And there we go, we start to get some data: we're reading from my2.status where the variable name equals threads connected, ordered by time ascending. We don't have to fully understand what's going on there yet, but it verifies that we're connecting to the MySQL database through the MySQL data source. The next part is creating the dashboard: Dashboards, Manage, and we're going to import the dashboard from grafana.com. Back on the Grafana Labs website, on the 2MySQL Simple Dashboard page, there is an ID just here under "Get this dashboard"; copy the ID to the clipboard, go back into the Grafana import page, paste the ID and Load. Okay, so it has found 2MySQL Simple Dashboard, folder General (we could create our own folders if we wanted, but I won't), select the data source as MySQL, and Import. So we now have the dashboard we just downloaded from grafana.com, which is built specifically for the collector that we just installed and set up in MySQL. We've created the users, opened the appropriate ports and made the appropriate changes to MySQL to allow the external connection, and we can now start getting data. Now, the collector runs every ten minutes, so we're not going to see a lot yet; every ten minutes there'll be a new update to these graphs. So I'll pause my recording, come back later, and we'll see more data written here about our MySQL server. Excellent. Okay, so my MySQL data source has now been running for about two and a half hours, and I've set the time range to three hours so we can see it all. Also note that the database server I just installed isn't working very hard; it's not a production server, so there isn't really much to see. Anyway, with each of these visualisations we can look in and inspect them a little more to see what they are. For example, I can press E on this Threads and Errors panel here, and if I scroll up, that's the query that Grafana is using, which is worth being aware of. There are two queries running in here, A and B: threads connected, threads running, and a few other particular properties from the data source, all shown on the one graph, so you can see how it's put together. Another thing here: this panel has been hard-coded to 180 days, despite me selecting three hours up there. If I edit it and click Query options, I can change that 180 days to two days, for example. There isn't two days of data yet, so it's still not that interesting, but I can apply that and now it says Last 2 days. There's another one down here, Last 14 days; this is a heatmap, and it will become more interesting as it fills up. Let's change that to two days as well, and apply. It will look more interesting when it has been running for many days, and especially if it were a production MySQL database used by a website or other application. Anyway, I hope you can see from all of that that setting up a dashboard is actually a fairly involved process. We will set up many dashboards; this MySQL one was the first, and there'll be more as we go along. The important thing about setting up dashboards in Grafana is that the Grafana server is going to have potentially privileged access to the data source you're connecting to, so you need to manage permissions and security so that Grafana can't be used as a vector to steal, or even destroy, data. That's why it was important that I set up appropriate users, permissions and IP restrictions, which leads me on to setting up iptables rules for my server on port 3306. Since I don't have a dedicated firewall in front of my server, I'm going to manage access to port 3306 using iptables. So, on my MySQL server, I type iptables -L: there are no dedicated rules on here yet, so I'm going to create one for port 3306. If I scroll to the bottom of my documentation, there's an iptables rule accepting input with source grafana.sbcode.net (that's my Grafana server) and destination port 3306. What this rule does is accept incoming connections to this server from the server with the IP address of grafana.sbcode.net. If I enter that and then run iptables -L again, it has resolved the domain name to the actual IP address; I could have just typed the IP address in there, but it doesn't matter, you can use the domain of your Grafana server or its IP address. That is my IP address and that is my domain. The other thing: I should drop all other connections to port 3306, with a rule like that. Now let's check with iptables -L. Okay, so I'm accepting connections to port 3306, where it has automatically shown the port as "mysql", and dropping everything else, so the only server that can remotely connect to my MySQL server is my Grafana server. These are things you should consider.
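The two rules have roughly this shape (the exact flags are from my reading of the documentation rather than the screen, so double-check them; order matters, since the ACCEPT rule must be evaluated before the DROP):

```bash
# allow only the Grafana server to reach MySQL
sudo iptables -A INPUT -p tcp -s grafana.sbcode.net --dport 3306 -j ACCEPT
# drop port 3306 from everywhere else
sudo iptables -A INPUT -p tcp --dport 3306 -j DROP

# review the rules
sudo iptables -L
```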
Now, with the username, it's especially important that the user Grafana connects with has only read-only permissions, so that it can't run escalated commands such as DROP DATABASE or DROP TABLE, or read data from tables and databases it shouldn't. These are things to consider. Anyway, this was a long video, with a lot of steps involved in getting a MySQL data source to work. I'm using one particular dashboard, the my2 simple dashboard, and you'll find other dashboards for MySQL throughout the internet, and they'll all have a different process for setting them up. You could also create the whole setup yourself, but this example has shown that it's quite a large jump in skill: you need some very good MySQL knowledge to be able to create a dashboard from the ground up for a MySQL server, and that is the same for any data source you connect to. That's why it's important to understand that Grafana doesn't actually work by itself: despite Grafana being promoted as a magic tool for everything, getting it to work properly does require in-depth knowledge of the data source you're connecting to. Anyway, we'll move on to other data sources and it will start to make more sense. If you didn't understand everything in this video, it doesn't matter; you have the video forever, it's a long one, and you can move on to other data sources and come back to it when you've had some time away from it. Excellent. In the next few videos we'll do some more MySQL examples. Excellent. 19. Create a Custom MySQL Time Series Query: Okay, so we're going to create a custom MySQL time series query, and we're going to use the same data collector that we installed when we set up the MySQL data source in the last video. In the diagram, that is the collector there: it's being triggered by the event scheduler every ten minutes, it's running the command SHOW GLOBAL STATUS plus a few other things, and it's saving that data into my2.status and my2.current. What we'll do is create a custom query that reads data from the my2.status table. Let's have a look at the my2.status table. I've SSH'd onto my MySQL server, and I'll open the MySQL prompt just by typing mysql; I now have the MySQL prompt. I'm going to use the my2 database, USE my2, and remember you finish all your commands with a semicolon. Okay, database changed. Now if I do SHOW TABLES, there they are, and I'll do a simple query on status: SELECT * FROM my2.status, limited to just 10 rows for now. Okay, so that's a small section of the status table. I'm going to reorder it so I get the most recent rows first, ORDER BY timest, which is my time column, DESC, so I get the most recent first, which is just now; these are the last ten rows saved into the table, and every ten minutes that information gets new entries. This table can be read directly by Grafana through the MySQL data source, because it has at minimum a time column and at least one column for values. It doesn't matter what the columns are named; the important thing is that there's a date it can retrieve, plus a value. But this table also has a name for each of the values, and those names can be used to group the data into series. Series allow you to graph multiple lines on the same visualisation, as we see on the my2 dashboard.
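Here are those inspection queries again, as typed at the mysql> prompt:

```sql
USE my2;
SHOW TABLES;
SELECT * FROM my2.status LIMIT 10;
SELECT * FROM my2.status ORDER BY timest DESC LIMIT 10;  -- most recent rows first
```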
If we look at, say, the Threads and Errors panel, threads connected, threads running, aborted clients, they are just different series, and we'll find those variable names if I search for them. There are thousands of rows now that have been written over the last 24 hours, and we can see that information being shown in these graphs. Okay, so now that I'm happy there's a table I can query in Grafana, a table with a time column and a value at minimum, I can open the Grafana Explore tab here and query that table directly through the data source. I have MySQL selected, I go into Edit SQL, and I'm given a template that I need to modify for my own needs. These less-than and greater-than symbols mean I need to replace the placeholder with the name of a column. The name of my time column in the data source is timest, so I'm going to use that up here, replacing the placeholder so we have the time column aliased as time_sec; Grafana will use that name internally, but we're saying pull the data from the timest column. We also have the time filter macro down here, which also needs the name of the time column, timest; the time filter is what restricts the query to the selected time range, and I'll demonstrate that more in a moment. The ORDER BY also needs to be changed to the time column. Okay, so that's our timestamp column. Next, the value column: ours is called variable_value, so variable_value AS value. And the series name column, aliased as metric: I'm going to use the variable_name column as the series, like that. FROM, the table name, being my2.status, so we're querying my2.status there. Now, Grafana limits the amount of rows returned using this WHERE clause: the time filter macro, given the time column, gets expanded with the values from the time picker and passed to the database. Okay, so that query has now run successfully and we have data showing up down here. I can change how much data is returned by changing the time range; there's not really anything in the last five minutes, but there is something for the last 15 minutes. Now, if you're familiar with SQL, that looks like an SQL statement, but that exact statement isn't what is actually run at the data source; what actually runs is the generated SQL here, and we can see the actual SQL command that is passed across the network and executed on the MySQL server. If we look at this line, WHERE timest BETWEEN that number AND that number, and I change the time filter up here, those numbers actually change. Take note of them: if I change to Last 30 minutes the numbers change, change again to Last 5 minutes and they change again. So with Last 15 minutes, if I copy that generated SQL, I can actually run it on the MySQL server directly: on the server, right-click to paste, finish it off with a semicolon, and it returns the same data that Grafana is using to draw this table just down here. Now, another thing: right now it's drawing that data as a table even though I have Time series selected there; whether I pick Table or Time series, it's the same, because in order for the time series to be drawn as a graph, variable_value has to be treated as a number, and right now it's being treated as a string. The quickest way to convert a string that looks like a number into a number is to add + 0 at the end, like that.
In order for the time series to be drawn as a graph, variable_value has to be treated as a number, and right now it's being treated as a string. The quickest way to convert a string that looks like a number into a number is to add + 0 at the end, like that. We're just adding 0 to whatever variable_value was; behind the scenes that converts it to a number. If I click out of that, the query runs and it now draws a graph. We're getting a whole lot of series coming back because we're selecting every variable name in the query; that's too much to show, and if I scroll we can see there are many, many series. I want to limit the metrics, or series, coming back to just a few, such as threads_created, threads_connected and threads_running. I can modify the Grafana-side SQL statement here by adding a few more conditions to what it can return. From my documentation I'll copy the line highlighted in yellow and paste it in: the time filter matches whatever is selected up there, and variable_name must be in Threads_cached, Threads_connected, Threads_running or Threads_created. Click out of that, run the query, and now only four metrics, or series, are returned from all the data in the my2.status table, and it all fits within the time range I have selected, which is the last 15 minutes. Let's format as a table again and see what the table data looks like: different numbers, different metrics. We set variable_name as metric, which is why it says metric there, variable_value as value, so we see value there, and timest is time_sec. Those are the column names Grafana uses internally when it creates the graph. Now, my server isn't very busy, which is why the graph doesn't look very exciting; if I change to the last 24 hours, it's a little more interesting. You can see the generated SQL now looks like that, with the time range between that number and that number. If you want to know what that number means as a datetime, copy it, go to your favourite search engine, type something like "long to datetime", and you'll get an epoch converter; paste it in, press convert, and that's the number converted to a datetime string. Okay, so we now have a query we can work with, and I used the Explore tab to create it. The Explore tab is good because you can try all kinds of things, make mistakes and fix them, and go backwards and forwards until you're satisfied with the query. I'm satisfied that the query is good, so I'll copy it. I'll create a new dashboard so I don't ruin the dashboard from the last video, add an empty panel with Time series selected, select MySQL, go into Edit SQL mode, select all, paste, and click out of that so it runs. I can now modify the styles of my graph however I like; I'm happy with it so far, so I'll apply it. And there we go: my new dashboard has a graph created from the custom MySQL query I built using Explore. That's just a start, and you have to start somewhere. I can now build a dashboard that suits my needs, based on the information being saved into the my2.status table by that collector. You don't have to use that collector, but it already gathers a whole lot of useful data. Just be aware that whatever MySQL table you want to graph over time needs to have a time column and a value column, and if it also has something you can use for the metric, or series, name, then that's even better. That's what I'm showing here: those metrics, or series, with their values at those timestamps.
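Putting those two refinements together, the finished Explore query looks roughly like this (the + 0 cast and the list of variable names are as described above; adjust the IN list to whichever metrics you care about):

    SELECT
      timest AS time_sec,
      variable_value + 0 AS value,
      variable_name AS metric
    FROM my2.status
    WHERE $__timeFilter(timest)
      AND variable_name IN ('Threads_cached', 'Threads_connected',
                            'Threads_running', 'Threads_created')
    ORDER BY timest ASC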
Okay, I'm going to save that, then go into the my2 dashboard from the last video and set it to 24 hours, for example. You can look at any of its visualisations to find the query behind them and get a better idea of how each was put together. For example, the DB cache panel: press E to edit it, and that is its query. It's a little more sophisticated than the one I wrote; it uses a GROUP BY clause and has quite a few conditions on which variable names it returns. Going back, there is the heat map, which is an even more sophisticated query using aggregates such as SUM and GROUP BY as well. So as you can see, it gets gradually more complicated the more you want from your visualisation, but you have to start somewhere, and what I've demonstrated is really very similar to this Threads and Errors panel, except I've selected a few extra variable names, I'm only using one query instead of two, and the style of my graph is different. I'll leave that, go back into Dashboards, and click the new dashboard again, Dashboard 3, and that's it. You can always edit it and modify the query here, or copy it out, go into Explore, select all, paste, and tweak it there. Excellent. So that's a custom MySQL query where we pull data manually from our MySQL database. I just happen to be using the my2.status table because it's being populated by the collector we set up in the last video, so I already have something to use, but you don't have to read from those tables; you can read from any table you like, provided it has a timestamp and a value at minimum. In the next video I'll show you how to expand on that and graph data from a table that doesn't have a timestamp at all. Excellent.

20. Graphing Non Time Series SQL Data in Grafana: Like I said, now I'm going to show you how to graph non-time-series SQL data in Grafana. Non-time-series data is data that doesn't have timestamps. For example, from the last video, on my MySQL server, logged into MySQL: if I select all from my2.status ordered by timest descending, limit 10, all of these values have a timestamp. In this video I'm going to show you how to graph data where the rows are just names and values, with no timestamp. You'll find tables like this in databases where they're built simply as a summary of statistics; there's nothing sophisticated about them, just rows and values. Normally Grafana is used for graphing timestamped data, but I'll show you how to do something like this. To demonstrate, log on to your MySQL server and we'll create a simple database called exampledb. Copy that line, but don't copy the greater-than sign; I use those signs to indicate SQL statements throughout my documentation. I'm already logged into MySQL there, so right-click: create database exampledb. Now show databases, and I have a new database called exampledb. Next we can add a table to it. First, let's create the table, so I'll copy that section, not the arrow.
This is a CREATE TABLE command in SQL; it will create a table called simpletable in exampledb with id, username and total columns. Now let's fill it with some data so that we have something to query: insert into exampledb.simpletable (username, total) these values. It's all hypothetical data, just made up because we need something to query; any resemblance to anything real is purely coincidental. We can check the data exists: select all from simpletable, and there we go, the table exists. But it doesn't have timestamps, so we can't visualise it in Grafana right away. Now, when we created the my2 collector, we gave our grafana user, connecting from the Grafana server's IP address, select permissions, but that user won't have select permissions on exampledb.simpletable, so let's add that now. We'll grant select on this particular table to my grafana user, and I just have to update the IP address to that of my Grafana server, which is that, then press Enter. That's because it's the Grafana server that makes the connection to the SQL database, using a user called grafana, from that IP address, so that's the host MySQL will allow. Flush privileges, and we can quit. Excellent. Now let's go into Grafana and open the Explore tab: select MySQL, go into Edit SQL, and replace the template with this; Ctrl+A to select all and Ctrl+V to paste, then click out of it. If we format as Table, we can see the actual table data in Grafana, with the values down the right. We can't see it as a time series because there is no time column, so we need to invent one. To do that, we add an extra column: now() as time_sec. So up here, select now() as time_sec. now() is an SQL function that returns the current date and time, and we're returning it as the column time_sec. If I run the query as a table again, we see a time column there. That time column is just "now", so it's always the most recent time; every time you run it, the seconds and milliseconds are updated. So that's a trick.
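A rough sketch of this setup, assuming the column names id, username and total described above (the grant should use your own Grafana server's IP address, and the aliases in the Grafana query follow the same pattern as the time series lesson):

    -- on the MySQL server
    CREATE DATABASE exampledb;
    CREATE TABLE exampledb.simpletable (
      id INT AUTO_INCREMENT PRIMARY KEY,
      username VARCHAR(50),
      total INT
    );
    INSERT INTO exampledb.simpletable (username, total)
      VALUES ('koala', 26), ('emu', 34);   -- sample rows; the video inserts several
    GRANT SELECT ON exampledb.simpletable TO 'grafana'@'<grafana-server-ip>';
    FLUSH PRIVILEGES;

    -- in Grafana (Explore or a panel query): invent a time column
    SELECT
      now() AS time_sec,
      total AS value,
      username AS metric
    FROM exampledb.simpletable;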
The problem with that trick is you can't really graph it as a default time series graph in Grafana; we can see the series names down there, but Grafana isn't really able to show what's happening exactly right now, not on that graph anyway. So copy this section and we'll go into the dashboard we created in the last video, New dashboard Copy 3, which has the custom query from the last video; I'll view it for 24 hours so it looks a little more interesting. Add panel, add an empty panel, MySQL, Edit SQL, Ctrl+A to select all, Ctrl+V to paste, click out of that. We have Time series selected up there; change it to Bar gauge, and there we go, a graph of the values in my simpletable in the database. Looking at the documentation, there it is: that's the data I put into the database with the insert command. Not only can we use the bar gauge, we can also use the stat, the gauge, and the pie chart, so there are many options for how you display it. I'm going to use the bar gauge and select Horizontal, for example, with Retro LCD, which looks pretty good, or Vertical. I can also use thresholds, so I'll add a few thresholds and change their values, for example 20, 30 and 40, and there we go: a bar graph of my simpletable in the database. Now if I set the panel to refresh every five seconds, it asks the database every five seconds for the latest data. I can log into MySQL again and update one of the values using the SQL statement here at the bottom: update exampledb.simpletable set total = 50 where username = 'koala'. I'll enter that statement (I'm just right-clicking to paste), so it sets total to 50 where the username is koala; the koala value is currently 26, and in a moment it will be 50. There we go, it's updating every five seconds. We can change it again to something like 150, then down to 50 again, and change a different one: let's set emu to 50 as well. On the next five-second refresh, it's updated. So you might have tables like that which you want to graph, and they might not be timestamped, but you can still show them in Grafana if you need to. For non-time-series data you have lots of options for presentation. Excellent.

21. Install Loki Binary and Start as a Service: Okay, so now we're going to look at the Loki data source. This data source is about reading log files from your servers. Many servers and applications store their logs in a file you can often just read in a text editor; web servers do it, database servers do it, and the systemd journal on Linux is also a good source of information we can read. To use the Loki data source, we install two extra services that work together, both written by Grafana Labs. The first one, the Loki service, is what we will install in this video. Loki, if I go to the Grafana Loki GitHub page, is a process that runs on your server and is responsible for storing logs and processing queries, so it's a bit like an SQL server for log files. The Grafana Loki data source we'll set up connects to the Loki process running on your server. Now, Loki doesn't do much by itself; something needs to push data into it, and we'll use Promtail for that, which we'll discuss in the next video. Promtail reads the log files you've asked it to and sends them off to Loki, and Loki stores and organises them so they can be queried by Grafana. In Grafana, if you go to Data sources and add a data source, down here there's the Loki data source; it says it's like Prometheus, but for logs. We haven't done Prometheus yet in this course, but we will. Before we can use that data source, we need to set up a Loki service for it to connect to, similar to what we did when we set up MySQL. Back at this diagram: we're going to install the Loki service locally on the Grafana server, so from the perspective of the Grafana application, the Loki service will be at 127.0.0.1, which is the same as localhost. In my documentation down here, we can install the Loki binary and set it up; those are these instructions here. So log onto your Grafana server. I'm now on my Grafana server, root@grafana there, and I'm going to change to the folder /usr/local/bin; that's where I'll install the Loki binary.
So cd /usr/local/bin. Now, in the /usr/local/bin folder, I'm going to download, using curl, a zip file containing the Loki binary; it will be version 2.4.1. Loki, like Grafana, is updated regularly, and if you want to see what the latest version is you can visit that link there, which takes you to the releases page of the Grafana Loki repository. We can see 2.4.1, so that's what we'll install. Copy that whole line (I'm copying it to the clipboard with the arrow there), right-click, Enter. Okay, it has just downloaded loki-linux-amd64.zip and saved it into the /usr/local/bin folder; if I type ls, I can see it there. Now we have to unzip it: unzip loki-linux-amd64.zip. I don't have unzip on this server, but I can install it quickly by copying that, right-clicking, and pressing Enter; I could have just typed it on the keyboard. Let's run that unzip again; I'm pressing the up arrow because it shows me what I typed previously. Unzip, okay, inflating. ls: there are two files now, loki-linux-amd64.zip and loki-linux-amd64. If I do ls -lh, it shows me that the binary has execute permissions, which is good. Sometimes binaries don't have execute permissions; if not, you can just run chmod a+x with the name of the file and it will make the file executable. It's important that the file is executable, and it already is for us. In older versions of Loki you had to make the file executable manually, which is why I still have that documentation, just in case. Before we can start Loki we need to do a few things, and one of those is to create a config file. Let's create it using nano: sudo nano config-loki.yml. It's opened a blank page and has already created the file for us, but it contains nothing, so let's put something in: copy the text below, or just press that copy icon, and if I right-click, it pastes it all into the nano editor. This is a default Loki configuration; I'm using version 2.4.1 and I got it from this official Grafana Loki link here on GitHub. If you're using a newer version than 2.4.1, be sure to check that link to see if anything is different in the configuration file, in case you have problems. Back in nano, save that: Ctrl+X, save modified buffer, Y for yes, press Enter. Very good. If I press ls now, there are three files: the configuration, the Loki binary (which is executable), and the zip file. We no longer need the zip file, but I'll just leave it there anyway. We could start Loki now, but it's not really a good idea because it would need the SSH session to stay open; if I closed the session, the Loki service would stop. What we should do is set it up to run as a service so it continues in the background, because we want it running 24 hours a day. So I'm going to create a system user called loki; copy that, and that will be the user that runs the Loki process: sudo useradd, system, loki, and so on. I'm now going to create a file called loki.service; copy that, using nano again: sudo nano /etc/systemd/system/loki.service, press Enter. It has created a new, empty file, and into that I'll paste this text. This allows the loki-linux-amd64 binary we just downloaded to run as a background service on our server, and that's the configuration file it's using: /usr/local/bin/config-loki.yml.
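A condensed sketch of these steps, assuming Loki 2.4.1 and the paths used in this video (the download URL follows the pattern on the Loki releases page; double-check it against the page for your version):

    cd /usr/local/bin
    curl -O -L "https://github.com/grafana/loki/releases/download/v2.4.1/loki-linux-amd64.zip"
    unzip loki-linux-amd64.zip
    chmod a+x loki-linux-amd64        # only needed if it isn't already executable

    # config-loki.yml is the default config linked from the documentation
    sudo useradd --system loki

    # /etc/systemd/system/loki.service, roughly:
    # [Unit]
    # Description=Loki service
    # After=network.target
    # [Service]
    # Type=simple
    # User=loki
    # ExecStart=/usr/local/bin/loki-linux-amd64 -config.file /usr/local/bin/config-loki.yml
    # [Install]
    # WantedBy=multi-user.target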
If I move the cursor along, we'll see that the config file is config-loki.yml, and you'll also see that the unit uses the user loki that we just created. Ctrl+X, yes, Enter. We can now start and stop Loki with these commands: sudo service loki start. Check its status, and it's active (running); there we go, Loki is now running as a service on my Grafana server. I could stop it if I wanted to; I'm not going to, but you can if you need to. Now that Loki is running, we can connect to it from Grafana. I'm already on the data source configuration page; I'll select Loki, and there we go. Name: Loki, that's a good name. We're going to connect to http://localhost:3100, or you could even use http://127.0.0.1:3100. That's from the perspective of the Grafana application; it's just another service running on the same server, listening on port 3100. Save and test: data source connected and labels found. Excellent. We don't have any data inside Loki yet because we haven't set up Promtail; we'll do that in the next video. For now we can at least go into the Explore tab and select Loki from the dropdown there. No logs found; doesn't matter, we'll get onto that. One thing to note: I'm using DigitalOcean, so I don't have a default firewall blocking ports, which means I can actually reach that Loki service across the internet, and the address is http://grafana.sbcode.net:3100. Loki is listening on port 3100, but it's also accessible over the internet for me; that will be your domain name if you use one, or your Grafana server's IP address, and that is mine. If I open that, I can see there is a web server running there because it returns a 404, which is what web servers do, but if I add /metrics, it returns this data, which is statistics about the Loki service. You probably don't want that exposed on the internet. If you're using AWS, your security group won't have 3100 open already, but since I'm using an unrestricted Ubuntu server without a dedicated firewall, I'm going to block port 3100 using iptables. That's down here: I'll accept 3100 on localhost only, because the Grafana service still needs to query the Loki service. On my Grafana server (it doesn't matter what folder I'm in): iptables, INPUT, TCP, source localhost, destination port 3100, ACCEPT. Then I'm going to drop everything else, so no other IP address can connect to port 3100: iptables, INPUT, TCP, destination port 3100, DROP; that line means drop everything else. I can verify that with iptables -L, and there are my rules: accepting localhost on 3100 and dropping everything else on 3100. Excellent, we have a Loki service running on our server. One more thing: it also exposes port 9096, which it uses for gRPC communications, for internal management, and that port will also be reachable across the internet if you're using a setup similar to mine. For example, on my Windows machine I have telnet installed and I can telnet to grafana.sbcode.net port 9096, and we can see that it actually connects. So I'm going to close that port as well. In my documentation, I created the rules here to allow 9096 on localhost: accepting source localhost, destination port 9096, and dropping everything else, and then I verify it. There we go: accepting 9096 from localhost and dropping everything else.
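As a sketch, the firewall rules described here look roughly like this (note that iptables rules are not persistent across reboots by default; see the persistence note that follows):

    # allow Grafana (on the same host) to reach Loki, block everyone else
    iptables -A INPUT -p tcp -s localhost --dport 3100 -j ACCEPT
    iptables -A INPUT -p tcp --dport 3100 -j DROP

    # same idea for Loki's gRPC port
    iptables -A INPUT -p tcp -s localhost --dport 9096 -j ACCEPT
    iptables -A INPUT -p tcp --dport 9096 -j DROP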
Another tool you can use to check which ports your services are using is the ss command; on Ubuntu 21.04 and above it's very similar to the netstat command. Here I'm returning results containing the word loki; it word-wraps so it's quite hard to read, but you can see that loki-linux-amd64 is using ports 9096 and 3100. If you're going to run these services on your servers, make sure you're not exposing information accidentally. Also be sure to read up on keeping rules persistent if you're using iptables; I'm going to create a backup of my iptables rules, and since I'm only using IP version 4 I don't need to run that second line. Excellent. In the next video, we'll set up the Promtail service to read log files. Excellent.

22. Install Promtail Binary and Start as a Service: Okay, so the next part, so that we can query through our Loki data source, is to install a collector for the Loki service, and we'll use Promtail; you'll often see Promtail and Loki used together. Let's install the Promtail service on our Grafana server as well. We can get Promtail from the same place we got Loki, the Loki releases page. If I open it, it's currently 2.4.1 for me; scrolling down I can see the Promtail-related binaries, and I'll install promtail-linux-amd64 because that suits the architecture of my Linux machine. Make sure you're in that /usr/local/bin folder already; I already am. Copy that line there: we're downloading promtail-linux-amd64 from the Grafana Loki releases, version 2.4.1. If I type ls, we'll see promtail-linux-amd64.zip in there; unzip it with unzip promtail-linux-amd64.zip. Okay, that's inflated. ls -lh: promtail-linux-amd64 already has execute permissions, excellent; if not, you can run that chmod line there. We now need to create the config file for Promtail: sudo nano config-promtail.yml, and let's add this; copy to clipboard, right-click, and it pastes. It's going to listen on HTTP port 9080. It would also open a gRPC port, and 0 means bind to any port, which would make it quite hard for me to block that port with a firewall, so I'm putting it explicitly on a different port number, 9097, the next in line after the gRPC port Loki is using; I will block that port eventually. When Promtail starts, it will connect to our Loki service running on localhost:3100 and push data to it. It has one scrape config, called system, targeting itself, and it will read all the log files matching /var/log/*log, a wildcard. Ctrl+X to save that, yes, Enter. ls -lh again, and we see there's a config-promtail.yml as well; these config files don't need execute permissions, only the binaries do. This configuration file I got from the official repository again, that's it there, so do take note: if you're not using version 2.4.1, the configuration file might be slightly different for your version.
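For reference, the config-promtail.yml described here looks roughly like this. It's the default Promtail example config with the gRPC port pinned to 9097, as explained above; check the Loki repository for the exact file matching your version:

    server:
      http_listen_port: 9080
      grpc_listen_port: 9097

    positions:
      filename: /tmp/positions.yaml

    clients:
      - url: http://localhost:3100/loki/api/v1/push

    scrape_configs:
      - job_name: system
        static_configs:
          - targets:
              - localhost
            labels:
              job: varlogs
              __path__: /var/log/*log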
Now we'll configure Promtail as a service, just like we did with Loki. First, let's create a specific user that will run Promtail, and I'll call it promtail: sudo useradd, system, promtail. We can check that the promtail user exists by typing id promtail; there we go, a UID and GID have been created for it, and it's part of the promtail group. We could also check the id of the loki user we created in the last video. Okay, let's create a service file for Promtail. I'll copy that line, using nano again, and paste in the script; copy it to the clipboard, right-click. promtail.service: Type is simple, User is promtail, and the ExecStart is /usr/local/bin/promtail-linux-amd64, the file we just downloaded from the Loki repository and unzipped, with its config file /usr/local/bin/config-promtail.yml. Excellent. Save that: Ctrl+X, Y for yes, Enter. We can now start it with sudo service promtail start and check its status: active (running), perfect. I press Ctrl+C to exit the status there. So the Promtail service is started, it's running, and it's pushing data to Loki. But there is one problem: the promtail user I created doesn't have access to read all the log files in the /var/log folder. To show this, change directory with cd /var/log and do ls -lh; there are a whole lot of log files in there, and if I scroll up, the user and group of these files is syslog and adm. Further down, kern.log is syslog adm, syslog is adm, and auth.log as well. Our promtail user doesn't have permission to read those files, so we need to add it to that group. To do that: usermod, add to group adm, promtail; right-click that. Now if we do id promtail, promtail is part of the adm group, which means Promtail can read the log files on the server and push the information to Loki. After doing that, we should restart Promtail. That's taking quite a while to restart; okay, it took about a minute, and it would have been scanning those log files. Let's double-check the status: it's active (running), Ctrl+C out of that. Now go back into Grafana, go to the Explore tab, and make sure Loki is your data source. You should see this button here, Log browser, and when you click it you'll see the available labels we can query, which Promtail has put into place. You can click them to turn them on and off, like that. Let's look at job = varlogs and then Show logs, and that shows me all the logs I'm getting from my syslog. There's a lot of information there; you can look through all of it, but we'll go through it properly in the next video. If you want to look at the other label, turn off job, press filename, and look at, say, the auth log, then Show logs, and we can see who's logging on and when. Excellent. Have a good look through that. In the next video we'll look at LogQL, the query language used by Loki to query log files, and that is a very simple LogQL statement there. We'll look more into that in the next video. Excellent.
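Before moving on, here is a minimal sketch of the service setup and the permissions fix from this video (the unit file mirrors the one used for Loki, with the Promtail binary and config substituted):

    sudo useradd --system promtail

    # /etc/systemd/system/promtail.service, roughly:
    # [Unit]
    # Description=Promtail service
    # After=network.target
    # [Service]
    # Type=simple
    # User=promtail
    # ExecStart=/usr/local/bin/promtail-linux-amd64 -config.file /usr/local/bin/config-promtail.yml
    # [Install]
    # WantedBy=multi-user.target

    sudo usermod -a -G adm promtail     # let promtail read the adm-group files in /var/log
    sudo service promtail restart
    sudo service promtail status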
23. LogQL: Okay, so now that we have a Promtail service pushing data to our Loki service, and we've set up a Loki data source in Grafana that we can read in the Explore tab (Loki, and it shows Log browser), we can continue. When you open it, it already has some information from the Loki data source about the kind of data it's collecting, and the information we see here has come from our Promtail configuration. If we look at the Promtail configuration from the last video, where we downloaded and installed the Promtail binary, we created this one scrape config here: we named it system, it targets the local server, it has a job called varlogs, and it has a path property, so it scans everything in the /var/log folder matching *log, a wildcard. So there's varlogs, and we can see it here: our job is varlogs. This filename label is also created, because the scrape config's path property shows us all the file names it has found matching that pattern. I can see both of those options there because I've highlighted them; I can unhighlight or highlight them again, and the same with job, which I can make active or not. If I make job active and then select varlogs, it shows me this string here: job="varlogs". That string is called a log stream selector, and it's one of the required parts of a LogQL query in this window. Very quickly, we can press Show logs, and it puts in {job="varlogs"} and shows everything in the job varlogs over the last one hour. If I scroll up, these are all the log lines from all the files that that scrape config has collected, and we can open each one individually. Each has two labels, filename and job. Looking at job="varlogs", every one of these has job=varlogs; the filename differs, for example that one is /var/log/auth.log, whereas that one is /var/log/syslog. So that's a log stream selector. If we go back to the Log browser, deselect that, press filename, and press /var/log/auth.log, that's also a log stream selector, just a different one. Show logs: it shows all the logs with the label filename=/var/log/auth.log, and if I open them up, you'll see that label on every one of these lines. Now, we can do more with the log stream selector: we can select two log streams at the same time. I'll go back to the browser and select both the auth and sys logs. If we look at the log stream selector now, there are a few differences: there's a tilde there, indicating a regex match, and there's a pipe character, so /var/log/auth.log or /var/log/syslog. Show logs: it now shows lines from both files; that one is /var/log/syslog, further down another /var/log/syslog, and there's a /var/log/auth.log as well. Going into my documentation: we've seen log stream selectors, and inside them are operators. There are several: = (equals) is the most common, for example job="varlogs"; != (not equals), and down here I've got filename not equal to /var/log/syslog; =~ (regex matches), which we've just seen, matching /var/log/auth.log or /var/log/syslog; and !~ (regex does not match). All of these examples can be typed by hand into the search query. For example, if I delete that and type a curly brace, it offers the available labels it knows about in the last one hour; I only have two labels, so there's no point changing the range to 24 hours because I'd still only have two to choose from. If I select filename there, it then says equals.
It then shows me what values it can search for in filename; that's everything our Promtail scrape config has found and pushed to the Loki service. So filename = the Debian package manager log, for example; I can run that query and see everything it has in the last 24 hours, and we can see that some packages were installed. Okay, now let's say I want to search across all of them. There are two ways of doing that: {job="varlogs"}, because it so happens I have one scrape config and its job name is varlogs, or I could say filename matches a regex (that's the tilde character) of ".+", like that, ended with a curly brace, then Shift+Enter. That returns all the file names; for example /var/log/syslog, and look at that one, also /var/log/syslog, in fact most of them are /var/log/syslog. Sure, there are some orange ones, but say I didn't want the syslog lines: comma, filename again, not equals /var/log/syslog, then Shift+Enter or press Run query. Okay, so it's giving me all the file names, the auth log is in there, and we'll find the Debian package manager log as well, but we won't find anything with the label filename=/var/log/syslog. Anyway, this is all outlined here, so you can read more about the log stream selector operators. Now, filter expressions. Filter expressions let us filter what's returned from the log stream selector even further. Let's go back to {job="varlogs"}, so I've got everything where the label job equals varlogs. I can say I only want the lines with the word error in them; some of those have error, it's quite hard to see, but there's level=error there. So I type the line filter |= "error", and everything returned now has the word error in it. For this filter expression, it doesn't work to write a double equals or just a single equals; it needs to be the pipe character followed by equals. The pipe is quite an ambiguous character to use here, because a pipe also means "or" inside a regex, but that's just how the filter expression is written: to include everything with error, you use those two characters. In older versions of LogQL you could write just the word on its own, but that now shows an error; in recent versions, |= "error" means give me everything containing the word error. We can also ask for everything that does not contain error: for that we use the not-equals filter, != "error", and everything returned has no "error" anywhere in it. Moving on, say we want a regex in our filter. In that case we use the pipe and then the regex tilde, |~, and we can say give me everything with error or info. The pipe is used twice here: once in the filter operator, which means match this regex, and once inside the regex, where it simply means "or", so error or info. Shift+Enter, and scrolling down it's error, error, info, error, info; we can find error or info in those results. And if I didn't want error or info, I could use the does-not-match-regex filter, !~ "error|info", and everything returned has neither error nor info anywhere in it. We can do more than that; I'll recap the selectors and filters we've typed so far just below, and then carry on.
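To recap, the log stream selectors and line filters typed in this lesson look roughly like this (the file names reflect my servers; yours will differ):

    Stream selectors:
        {job="varlogs"}
        {filename="/var/log/auth.log"}
        {filename=~"/var/log/auth.log|/var/log/syslog"}
        {filename=~".+", filename!="/var/log/syslog"}

    Line filters:
        {job="varlogs"} |= "error"
        {job="varlogs"} != "error"
        {job="varlogs"} |~ "error|info"
        {job="varlogs"} !~ "error|info"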
Sometimes you might get a line containing both error and info, so you can also chain filters: give me everything with error, but not if it also contains info. So we're getting error there, and none of those lines also contain info, which is quite hard to verify by eye anyway, but it shows it can be done. Another example: a regex filter of "invalid user" followed by particular usernames will find everything where "invalid user" is followed by one of those names. There might be a few of those; press Enter, and there are none in the last hour, but if I check the last two days there are two occurrences, "invalid user" followed by each of those names. I could also just ask for everything with "invalid user", Shift+Enter, and we can see all the attempts where someone has tried to log into my server, all the usernames they're trying, and their IP addresses. This is pretty normal for a server on the internet. Also, if we were reading web server logs, we could search for status 403 or status 503, and that would be the regex for it; if you know regex, these expressions can become quite sophisticated and quite long, and we'll see more of them later. Okay, now we'll look at scalar vectors and series of scalar vectors. The data returned so far comes back as streams of log lines, many, many log lines, and we can look at them individually and at how Grafana tries to break them up, but each is just the line as written to the text file; we can't really graph that, despite the fact that Grafana's Explore section draws a graph of it. For example, with just {job="varlogs"}, Explore draws a graph and colours it, grouping lines by common information like info, error and unknown; you can see that in the tooltip. But if we use the Logs visualisation in a Grafana dashboard, it won't show a graph; it just shows the log lines. I'll quickly demonstrate: copy that, go into Create dashboard, add a new empty panel, select the Logs visualisation there, select Loki, paste the LogQL query in, and click out of it so it runs. That's what we see in the Logs panel; there's no graph. If we want a graph, we can get one by converting our log lines into scalar vectors, or a series of scalar vectors. What we do is wrap the query in a function that somehow counts the data. The first one, count_over_time, gives a total count of log lines over a time range. Going back into the Explore tab, I'll discard that and write the query again: count_over_time of {job="varlogs"} with a range of one minute. That has taken the log lines and created two scalar vectors: a count of varlogs where the filename label is /var/log/syslog, and another at the bottom here where the filename label is /var/log/auth.log. In the last one hour only two log files were being written; if I change to the last 24 hours, there are four log files it can create scalar vectors for. It's quite hard to see: there are some blue dots down here, which would be the droplet agent update log, and a yellow one just over here, which I'll zoom into; that's the Debian package manager log just down there. I can zoom in even further, then zoom out. So, as I said, that takes our log lines and converts them into a graph by counting the information coming back from the log stream selector.
Now, this range we put in here is how far back it should count every time it creates one of these scalars for the graph. For example, look at this value just here for /var/log/syslog: it says 16, meaning at that point, over that range of one minute, there are 16 lines in /var/log/syslog; at this point, in the last one minute, there are 21 occurrences of a log line in /var/log/syslog, and that's what the one minute is doing there. I can ask for one hour instead, Shift+Enter, and now the graph is a little different; zoom out, and we can see that at this point, over the last one hour, there are 1321 lines in /var/log/syslog. I could even do 1 second, where there's not much, or let's try 10 seconds: here we go, in the last 10 seconds at this point there are 11 occurrences. Back to one minute, and there are 21 occurrences in the last one minute there. So that's what the range property is about when you use these functions that convert log streams into scalar vectors. The next one, rate, is very similar to count_over_time except it shows the rate per second: looking at that value there, there is a rate of 0.667 log lines per second over the last one minute at that point. We can also do a bytes_over_time count; Shift+Enter, and I'll zoom into that section there. That exists because sometimes a log line can be very long and contain a lot of bytes, and you might want to know that; and we can also get bytes_rate, the bytes per second, as well. When converting a query into a scalar vector, we can also narrow it down with a filter expression. For example, copy that, Shift+Enter: {job="varlogs"} containing "error", counted over time with a range of one hour. I'll change the time picker to last 12 hours, then last 24 hours. So, counting the varlogs lines that contain the word error: at this point here, over the last one hour, there were 360 entries. Now, aggregate functions. What we've looked at so far are series of scalar vectors; I'm getting several series here. Doing count_over_time of job="varlogs" over one hour, for example, gives me series broken up by filename. I'm querying job="varlogs", but because each log line also has a second label, filename, it gives me four different series, and you can see them in the different colours. I can total all of those into a single scalar vector by wrapping the query in an aggregate function such as sum. Shift+Enter: it now gives me the total of all of them, no longer broken up into series by the value of a label; sum of count_over_time gives me one line. There are other aggregations as well: the max of count_over_time, or the min, plus avg, stddev, stdvar, and count, which counts the number of elements. So with count, here there were two elements, and over here there were four filename series containing log lines with the label job="varlogs". And down here we've got bottomk and topk. These don't collapse all the series into a single scalar vector like the ones above do; instead they return only the two or three series with the bottom values, or the top values, depending on the k you give them.
For example, over here there are four series returned (I can see that when I use count), and if I zoom into that and only want to know where the values were highest, I can say topk with 2, comma, and it will only show me the top two series, even though there are four to choose from. If I want the bottom two series in that collection, bottomk with 2 shows me the bottom two. At that point there, only two series contained log lines, filename=/var/log/syslog and filename=/var/log/auth.log, but at this point other files are being written, the Debian package manager and the droplet agent update logs. That's the use of topk and bottomk. I can ask for the bottom three, there we go, or the bottom four, which just gives me all of them anyway. And of course you can still filter these further: everything with error, everything with info, and so on. Not only that: count_over_time of {job="varlogs"} over one minute, for example, returns four different series based on the labels in the data; there are two labels here, job="varlogs" and filename=whatever the file was called. We could have a third label, which I'll show you in the next video, and we can choose what to group by. For example, sum gives me the sum of everything as one line; if I don't want just one line any more, I can split them up again using by (filename), and it's broken up again. That's essentially the same response as before, but now I'm explicitly saying group it by filename. This is useful when you have more than two labels on each log line; I only have two, so it's quite a useless query in my case, but if I had another label called host, for example, we could group by host, and we can group by multiple labels as well. You can read my documentation for examples of those. Comparison operators are another thing we can apply to the aggregate functions. For example: the sum of count_over_time of job="varlogs" over one minute where it's greater than 4, press Enter. That gives me the total where the value was greater than 4. None of these values are less than 4, but we could ask for less than 4 if we wanted to; there's nothing there. What about less than 10, or less than 20? There we go, a few are less than 20. The same goes for greater than, greater than or equals, and not equals; there are a few examples. Logical operators: we can ask for values that are greater than 4 or less than or equal to 1. Copy that, put it in there: sum of count_over_time of job="varlogs" over one minute, greater than 4 or less than or equal to 1; everything's over 4 anyway, so in this example I'll change the range to 24 hours. Another example: values between 100 and 200; copy that, and there aren't very many values between 100 and 200. The metric-style queries we've built in this part of the lesson are recapped just below.
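To recap, the metric queries from this part of the lesson look roughly like this (the ranges and thresholds are the ones used above; swap in your own):

    Range functions:
        count_over_time({job="varlogs"}[1m])
        rate({job="varlogs"}[1m])
        bytes_over_time({job="varlogs"}[1m])
        count_over_time({job="varlogs"} |= "error" [1h])

    Aggregations:
        sum(count_over_time({job="varlogs"}[1m]))
        sum by (filename) (count_over_time({job="varlogs"}[1m]))
        topk(2, count_over_time({job="varlogs"}[1m]))
        bottomk(2, count_over_time({job="varlogs"}[1m]))

    Comparisons and logical operators:
        sum(count_over_time({job="varlogs"}[1m])) > 4
        sum(count_over_time({job="varlogs"}[1m])) > 4 or sum(count_over_time({job="varlogs"}[1m])) <= 1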
Finally, operator order: the order in which operators are processed, which is common across computer programming. If you don't wrap your operations in brackets, a default order applies. PEMDAS is an acronym you can use to remember it: parentheses are processed first, then exponents, multiplication, division, addition and subtraction, in that order. In the examples there, the first has no parentheses, so it processes the exponent first, then the multiplication, division and modulus, and then the addition; the next is the same equation with parentheses wrapped around everything, so the result differs. There's a particular order; there we go. You can read more about LogQL in the official documentation; it's very versatile, and it will become more useful when we build a dashboard later. In the next video, I'm going to set up a Promtail service on another server, to show that you can have multiple Promtail services all pushing to the same Loki service that Grafana reads through this Loki data source, and you can have as many of them as you want, on all your servers. Excellent.

24. Install an External Promtail Service: Okay, so now we're going to install a second Promtail service, so we'll have two Promtails running and we'll be able to query both of them in Grafana. We have the Loki data source, the Loki service and a Promtail service all running on our Grafana server; in this video I'm going to install a Promtail service on the MySQL server that we set up earlier in this section, and it will push data to the Loki service on the Grafana server. This demonstrates that you can have as many Promtail services as you want, running wherever you want, all pushing to the same Loki service and all queryable in Grafana. But because they run on different servers, there are quite a few considerations. We'll start by installing the Promtail binary on the MySQL server, using pretty much the same process demonstrated in the "Install Promtail Binary and Start as a Service" video. So I'm logged onto my MySQL server, and I'm going to install the same version, 2.4.1: cd /usr/local/bin, download promtail-linux-amd64, unzip it (inflating), and it should already have execute permissions; I can check with ls -lh, and promtail-linux-amd64 does. Now let's create the config: sudo nano config-promtail.yml, and I'll paste this in. Remember, we're explicitly changing the gRPC port to 9097 (you don't have to do this) so that I can block that port later on; leaving it at zero would assign a dynamic port. The client URL will be my Grafana server, so it's not pushing to a local Loki service, it's pushing to a Loki service across the network. I've set up the domain name grafana.sbcode.net to point to the IP address of my Grafana server, and it will send to port 3100, on the Loki API v1 push path. The scrape config is the same: targeting localhost, a job called varlogs, reading all log files matching /var/log/*log. Now, when this data is posted to the Loki service, nothing in it indicates that it's coming from a different server, so we can add another label. I'm pressing spaces as I move the cursor along, because in my experience YAML files don't like tabs. So, host: and I'm going to name this host mysql; that's just the name I've chosen for it. Ctrl+X to save that, yes. Very good. I'm now going to configure it as a service.
I'll add the user promtail to run the process: sudo useradd, system, promtail, then id promtail to check it; there we go, it's in the promtail group. I'll create a file called promtail.service in the /etc/systemd/system folder and add this script; it runs the Promtail binary with that configuration file, /usr/local/bin/config-promtail.yml. Ctrl+X, yes. We can now start the Promtail service. Note that we will have some errors, which we'll resolve shortly; I'll check the status to see what it's saying right now. It's active (running), that's good, but we can see two kinds of errors here: it cannot read the log files (permission denied), and "error sending batch", meaning it cannot send to port 3100 on my Grafana server. That's because I set up the iptables rules earlier and blocked port 3100 for external requests. So first we'll fix the iptables rules on my Grafana server to allow this MySQL server to push to 3100. Ctrl+C to get out of the status, and I'm now on my Grafana server, root@grafana. I'll verify my iptables rules with iptables -L: these are my rules for port 3100, these two here, accepting localhost on 3100 and dropping everything else on 3100. I'm going to insert a new rule into position 3 that allows my MySQL server to connect. Back on the external Promtail service page, scrolling down to this line here, I'll clear it and paste in the IP address to allow: -s there means source, and the source IP will be my MySQL server, which is that IP address there (yours will be different), destination port 3100, ACCEPT, inserted into position 3 of the INPUT chain. Enter. Now iptables -L again, and we have a new rule accepting that IP address on 3100, still accepting localhost on 3100, and dropping everything else on 3100. Going back onto my MySQL server, if I check the status again with sudo service promtail status, we shouldn't see that sending error any more, and I can't see it in the last few lines of the log there. The next problem to solve is the permission denied on the files. We cd into the /var/log folder (I'm just highlighting that, and a right-click copies it down into the command line), then ls -lh: the log files I want to read belong to the adm group, so let's add our promtail user to the adm group. That was on the Promtail page: usermod, add to group adm, promtail. Now if we do id promtail, the promtail user is in the adm group as well. Excellent, it should now be able to read those log files. Let's check the status of Promtail on my MySQL machine again; okay, I need to restart it, sudo service promtail restart, then status again, and I'm not seeing any errors. Ctrl+C, and checking once more, we've now got seeks happening, so the log files are being read. So we now have a Promtail service running on my MySQL server pushing data to the Loki service on the Grafana server, which means we should be able to see it in Grafana. Open Grafana, Explore tab, Log browser: we've got a new entry here for host. Let's deselect the others and look at host being mysql; click that and Show logs, and these are all the logs from my MySQL server.
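A sketch of the two fixes applied in this step, one on each server (replace the placeholder with your MySQL server's IP address):

    # on the Grafana server: let the MySQL server reach Loki on 3100
    iptables -I INPUT 3 -p tcp -s <mysql-server-ip> --dport 3100 -j ACCEPT

    # on the MySQL server: let promtail read the adm-group logs, then restart it
    sudo usermod -a -G adm promtail
    sudo service promtail restart
    sudo service promtail status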
Going back to the Log browser, if I look at job, press varlogs and then Show logs, we'll see some lines that come from the Grafana server and some that come from the MySQL server; the latter have a third label, host=mysql. I'm going to add a host label to the Grafana server's Promtail as well, so we can query one or the other more effectively, or both at the same time. So, on my Grafana server, I'll edit my Promtail config: cd /usr/local/bin, ls, there's my Promtail config, sudo nano config-promtail.yml, and down here in the labels I'll add a new label: six spaces, host: grafana. Ctrl+X, yes. Restart Promtail and double-check its status; very good, no errors. Back in Explore, Loki, Log browser: host now has two values, grafana and mysql. So I can search for varlogs on host mysql, or varlogs on host grafana, or varlogs for both servers at the same time: Show logs, and there we go, grafana and mysql together. I can also say just give me host mysql by pressing that plus there, which updates the log stream selector. Now, going back to one of the more complicated queries from the last video, down in the aggregate groups section, we can now group by host. Copy that and put it in there: sum of count_over_time of job="varlogs", by host, press Shift+Enter. I have two counts there; well, actually three, because our original log lines weren't yet tagged with host=grafana, but if I view just those two, we can see we're getting two lines, and the third will eventually just disappear. Set the range to five minutes; okay, the colours changed, and it's now grafana and mysql. Excellent. So I'm happy that's working: I can get data from Promtail on my MySQL server and view it in Grafana. But what's happening here is that Promtail is sending that data unencrypted across the internet to my Grafana server. Log files normally contain very sensitive data: the things people type into a server, IP addresses, passwords, all kinds of things. So if you're running over a public network like I am, you need to make sure that information is encrypted in transit. Since I already set up a domain name and SSL right at the beginning of the course, enabled through the Nginx reverse proxy, I'm going to put the Loki service behind the Nginx reverse proxy so that external Promtails send data via it, with the SSL certificate bound, and any traffic will be encrypted on its way to Loki. Also note that I'm using this method because both of these are effectively independent servers on the internet; normally, servers in a corporate environment are on a virtual private network, so the data would travel over a private network anyway. But because these are both on the internet, and you may have this situation, I'll show you how I solved it, using Nginx. On the "Install an External Promtail Service" page, I'm going to edit my Nginx configuration on the Grafana server. I'm on my Grafana server, root@grafana, and I'll open up my Nginx configuration; that was in the folder /etc/nginx/sites-enabled. ls: there's default, which is the default web page for Nginx, and there's the one I created, so sudo nano grafana.sbcode.net. There it is; I'm going to add another location in there.
So, adding a line, a few spaces, and I'm going to copy just this section here; don't copy the full stops, they just indicate that there are lines before and after. Copy that, right-click, press Enter. I'm creating a new location for the /loki path, so https://grafana.sbcode.net/loki, and in it I allow my MySQL server, whose IP address I'm pasting in there, deny everything else, and proxy pass to the internal localhost:3100. I'm using the existing SSL certificates managed by Certbot, so any requests to my Grafana server via /loki from that IP address will be encrypted; everything else is denied, and allowed requests are passed internally to the Loki service running on localhost:3100. Save that and exit. We can check that the Nginx configuration is okay with nginx -t, and it says syntax is ok, test is successful; very good. Restart Nginx and check its status, and that's all good, active (running). Ctrl+C. Now, back on my MySQL server, I'll go into my Promtail config again: cd /usr/local/bin, ls, there it is, sudo nano config-promtail.yml. I'm no longer using http on port 3100; I'm now pushing to https://grafana.sbcode.net/loki, the path I created, followed by the rest of the URL, which is the same Loki API v1 push path as before. So grafana.sbcode.net/loki is the endpoint I created, and the API push path is the remaining part of the URL the Loki endpoint expects. Ctrl+X, yes, restart Promtail, and check its status; I don't see any connection errors for the new URL, so that's good. If you want to test it, you can access that URL from your MySQL server: use curl and type https://grafana.sbcode.net/loki, for example, and it returns a 301 redirect, which is fine for now. Also, I've only allowed that location for my MySQL server, so if I copy that URL and try to access it from the computer where I'm recording this video, it says 403 Forbidden; the only server that can connect to it is my MySQL server. That means I no longer need the iptables rule opening port 3100 for my MySQL server, since I'm now going via the Nginx reverse proxy, which enforces SSL, so I'll delete that INPUT rule from before. Back on my Grafana server: iptables -L with line numbers shows me, if I scroll up, that INPUT rule 3 is the one I added, so I'll delete it: iptables, delete, INPUT 3. List the rules again and it's no longer there; I only have the two rules for 3100, accept localhost and drop everything else. Anyway, the security of your data is a real consideration when running services across servers that handle log files. If you were using AWS or similar, you would be setting up security groups to allow or deny access, and you may also be setting up encryption on those channels. I'm showing you servers that are just unrestricted Ubuntus on the internet; I'm using DigitalOcean for that, and DigitalOcean also has VPC configuration options you can manage. Looking at the Networking tab on DigitalOcean, under VPC, I have three servers here in my Amsterdam section, all on the same subnet, so I could actually have connected my MySQL and Grafana servers using those internal IPs if I wanted to. But I'm showing you that if you didn't have that option, you would have to make sure that all your messages, as they travel across the public network, are encrypted and that access is controlled.
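As a sketch, the reverse-proxy piece of this step looks roughly like the following: an extra location block inside the existing server block in /etc/nginx/sites-enabled/grafana.sbcode.net, plus the matching clients url in config-promtail.yml on the MySQL server. The allowed IP is your MySQL server's address, and the exact location path and resulting push URL may differ slightly from this guess; check the course documentation page for the definitive version.

    # inside the existing HTTPS server block managed by Certbot
    location /loki/ {
        allow <mysql-server-ip>;
        deny all;
        proxy_pass http://localhost:3100/;
    }

    # config-promtail.yml on the MySQL server
    clients:
      - url: https://grafana.sbcode.net/loki/loki/api/v1/push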
Okay, excellent. Also, since I now have Promtail running on my MySQL server, I should block external access to its HTTP port, 9080. Just before I do that, one thing I haven't shown you yet about Promtail is that it has its own web user interface. If I go to my MySQL server's IP address on port 9080, we get a Promtail user interface. It shows a lot of information and statistics that you can look at and review, and also the running configuration. We only wrote that small section of the configuration ourselves, but there are a lot of defaults that Promtail will use, and you can see all of them there. As you can see, that's exposed on the internet, and I don't really want that, so I'm going to block port 9080. I copy that whole section from the documentation, which accepts connections from localhost on 9080 and drops everything else, and run it on my MySQL server where I installed this new Promtail; just press Enter. I now have new rules for 9080: accept from localhost, drop everything else. That means that if I refresh the page it's just going to time out; it takes about 30 seconds, but type it into your browser and wait. Okay, timed out, it doesn't work any more. Excellent.

And since I also explicitly set the gRPC port to 9097, I'm going to allow and block that as well. Copy the first line; note that gRPC will sometimes call itself back on the server's external IP address, so you may need that extra allow rule. Then allow localhost, drop everything else, and list the rules again. I now have rules for 9097; in the first one, iptables has replaced my IP address with my host name in the listing, and that's okay. If you're having problems connecting to Promtail or Loki internally on your own networks, it may be useful to do what I did there, adding a rule for the external IP, which iptables then shows as the hostname. Anyway, Loki and Promtail are quite complicated to set up; I've done it many times now. Let's go back into Grafana and verify that everything still works: Loki, log browser, hosts grafana and mysql, query by varlogs, and I should get everything. Show logs for the last five minutes: there we go, Grafana and MySQL together, or MySQL on its own. Excellent. In the next video, we'll create a dashboard that starts using this data, and we'll add some more complex functionality to it, called annotation queries, and see how we can link the log and graph panels together. Excellent.
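For reference, the rules used above to lock down Promtail's own ports might look roughly like this. The port numbers assume the defaults and settings used in this course (9080 for Promtail's HTTP interface, 9097 for the gRPC port set earlier); the external-IP rule is the optional one mentioned above for when gRPC calls back on the server's public address.

    # Promtail web UI: local access only
    sudo iptables -A INPUT -p tcp -s localhost --dport 9080 -j ACCEPT
    sudo iptables -A INPUT -p tcp --dport 9080 -j DROP

    # Promtail gRPC port: local access (and optionally the server's own external IP)
    sudo iptables -A INPUT -p tcp -s localhost --dport 9097 -j ACCEPT
    sudo iptables -A INPUT -p tcp -s <this-servers-external-ip> --dport 9097 -j ACCEPT
    sudo iptables -A INPUT -p tcp --dport 9097 -j DROP

    # verify
    sudo iptables -L --line-numbers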
25. Annotation Queries Linking the Log and Graph Panels:
Now we'll look at annotation queries, and at linking the log and graph panels together. First of all, let's prepare some queries in Explore. I'll create a dashboard that shows varlogs for both of my hosts at the same time, MySQL and Grafana. Show logs: okay, those are the logs we'll see in our dashboard, from a very simple query, job equals varlogs. I'll save that for later. The other query I want is a graph of that, so I'll wrap it in a count_over_time over one minute and finish it off, and now I can graph it. So I'll use these two queries in my dashboard. Now, to create the dashboard: Create Dashboard, add an empty panel. The first one will be a Logs panel, down the bottom of the visualisation list. The Loki query is job equals varlogs; run that, and there it is. We'll just call it varlogs. Apply that. Then I'll create another panel, which will use the time series visualisation, again with Loki, and it'll use that other query, the count_over_time over one minute.

Now I'm seeing lines for the log files being written on both servers, both hosts, Grafana and MySQL, and I'll reduce that down to one hour, for example. That's pretty good. The legend descriptions are quite long, so I can make them shorter by using a sum and the group-by option: sum of that, by host, and click out of the editor. The lines are now labelled more simply, because the legend is no longer showing the whole job equals varlogs stream. That's just one option you have, and one reason for using sum and the grouping down there. I'm happy with that, so apply it, and I'll just reorder the panels a little. Now, whenever I change the time filter up the top, whatever I see in the graph, the related logs appear down below. I can zoom right into a spike and see the related logs, zoom in further, and see the related logs from both servers. I'll go back out to one hour.

Now let's add an extra layer of querying, called annotation queries, over the top of this. Within these log lines there are occurrences of "invalid user", and I would like those highlighted on the graph. While it's quite hard to actually spot any here by eye, there are likely to be some in there. So I can create an annotation query that is executed over the whole dashboard. Up here, Dashboard settings, Annotations, Add annotation query. I'm going to call it invalid users, it will use the Loki data source, it's enabled, the colour will be red, and my query will be job equals varlogs, pipe-equals "invalid user", like that. Click out of that so it applies. Going back to my dashboard and toggling it on and off, we now start to see some highlights appearing where "invalid user" matched. If I zoom in to one of them, there's a little arrow at the bottom of the graph; hover over it and it shows you the actual log line that was found. These are all different login attempts, mostly on my MySQL server. That's normal for a server on the internet: automated scripts will try to log in to your servers. So straight away that's pretty useful. If I zoom out to one hour, I can see there are a lot of attempts to log into my servers going on.

You might be happy with that graph, but there's something about the syslog to be aware of. If I zoom into these ones, for example, and find one: these lines are Grafana calling Loki, level=info and so on, with query equals job varlogs "invalid user". What's going on is that when you enter LogQL queries, they are themselves written into the syslog. You can see it says /var/log/syslog there, so any query I create is also being logged, and therefore also being matched. Looking at these again, most of the entries we're seeing are actually just me typing queries into Loki that happen to contain the words "invalid user". When I zoom out, I don't want to see those; I want the actual invalid-user login attempts. So I need to refine my annotation query a little further. Let's zoom into these ones a bit more and find an example. Looking at that one there, query job varlogs "invalid user", and the line down the bottom, I'll modify the filter to exclude something else that appears in those lines. Something useful to exclude could be where it says level=info.
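A sketch of the annotation query, before and after the refinement that follows. The label values assume the varlogs job used throughout this section, and the level=info text assumes Grafana's query logging looks the same on your server.

    Initial annotation query, matching every line containing "invalid user":
    {job="varlogs"} |= "invalid user"

    Refined query, dropping Grafana and Loki's own query-log lines:
    {job="varlogs"} |= "invalid user" != "level=info"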
So I'm going to exclude level=info from the query. Going back into the annotation settings, Annotations, invalid users, I'll refine the filter to also say not-equals "level=info", then click out of that so it binds back to the dashboard. Now, when I zoom back out to one hour, I'm not seeing as many annotations as before. The annotations I'm seeing now are much more specific to what I'm actually looking for: the real invalid-user login attempts. Going back out to three hours, for example, and right back here, within all those loaded annotations, none of them are the Loki queries I entered while experimenting. So be aware of that: any query you type into the Explore tab is also logged into the syslog on the server running Grafana. Excellent. I can save that dashboard, call it something like varlogs, and look at it over six hours if I want, or wait another fifteen minutes for more data.

26. Read Nginx Logs with Promtail and Loki:
Okay, let's do something a little more advanced. We'll get Promtail to read Nginx logs and create a simple dashboard around the Nginx reverse proxy that was installed at the beginning of the course, in Reverse Proxy Grafana with Nginx. Every request going through that reverse proxy is being logged into a log file, and we can read it using Promtail and Loki. In Loki we'll also use what's called the pattern parser, but we'll get to that. First, we have to open up the scrape_configs in config-promtail.yml on our server and add this extra section: a second scrape config called nginx, whose target is localhost, whose job name is nginx, and whose path is /var/log/nginx/*log.

So, going onto the Grafana server. I'm on my Grafana server, and we'll have a look at the folder where the logs are: cd /var/log/nginx, ls -lh, and there are the log files that Nginx is writing. You can see they're readable via the adm group. My Promtail can already read them, but if you are running Promtail as a specific user, make sure that user is in the adm group so it can read the logs. Okay, now let's edit the Promtail config: cd /usr/local/bin, ls -lh, there's config-promtail.yml, so sudo nano config-promtail.yml. This is my existing Promtail config. Remember, I've explicitly set the gRPC port to 9097; you can leave it at 0 if you'd like. That's the URL that my local Promtail is pushing to, the local Loki, and there's the existing scrape config, whose job name is varlogs, where I added the host label grafana. Now I position my cursor where I want to start pasting, copy that section including the whitespace, and right-click to paste. So: job_name nginx, static_configs, targets localhost, labels with job nginx, and the path to the log files it will read, /var/log/nginx/*log. Save that, Ctrl+X, yes, and restart Promtail, then check its status. It looks good, active (running), and I'm not seeing any problems.

So we can now go into Grafana, open up Explore, and find a new entry here under job: nginx. Click nginx and that becomes the log stream selector, job equals nginx; Show logs, and away we go, we can begin to see the logs that Promtail is now pushing into Loki. We can see the filename is /var/log/nginx/access.log, host grafana, job nginx.
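The second scrape config added above might look roughly like this in config-promtail.yml. The host label is optional and mirrors the one added earlier; adjust the label values to your own setup.

      - job_name: nginx
        static_configs:
          - targets:
              - localhost
            labels:
              job: nginx
              host: grafana
              __path__: /var/log/nginx/*log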
Looking at another line, the access log on host grafana, job nginx, and the details of the lines: that one was a POST to the Loki push endpoint, and that's the IP address of my MySQL server using the Loki push method; remember, I set the Promtail on the MySQL server to go via the Nginx reverse proxy, using the domain name and SSL. The other IP address there is the actual machine that I'm using to make this video, making requests to the Grafana user interface; every time I press a button in the Grafana UI it shows up here. Anyway, there's a lot of information in these log lines that we can query, and this is a good opportunity to learn a new feature in Loki: the pattern parser. The pattern parser allows us to take parts of those log lines and create labels from them. So, for example, after job equals nginx we pass a pattern over each line: pipe pattern, matching a string and putting the matches into labels. In this pattern there are two new labels, called method and status. If I look at the log line here, there are those two hyphens, and they appear in the pattern too, so we could be capturing the first property, the IP address, as a value if we wanted, but here we're taking method and status: the method is POST and the status is the number there, 204. Copy that line and put it into your query: pipe pattern, matching that whole string, creating two new labels called method and status. If Loki can match the string and find values for method and status, it creates them as new labels for us. Shift+Enter, and now if I look at one of these lines I've got two new labels, method and status. So I can start using those labels further in my query.

For example, let's count everything over time so we can create a graph: go to the beginning, count_over_time, bracket, to the end, give it a range of one minute, and close the bracket, Shift+Enter. We now get a graph, and if I zoom into it, there we go, we can start to see the different kinds of methods and status codes that our Nginx reverse proxy is serving: status 200 with the GET method, status 200 with the POST method, a status 400 down there, and a status 204. If I zoom into that section there it looks a little more interesting. These are the kinds of status codes you commonly see in Nginx: 200 means OK, but you might get lots of 404 errors, which means file not found, and you might get server 500 errors, which mean there's a problem with the application running behind your web server or reverse proxy. So I'll create a 404 error now: if I go to my Grafana domain and type in some junk path, that returns a 404 page not found, and we'll see that in our nginx job. If I change the query to the last five minutes and zoom into that section, there is the 404, just the red line there; that's the 404 I generated ten seconds ago. Excellent. On a busy web server it's good to watch the status codes, because suddenly getting status 500s will stand out like a sore thumb, and if you see a sudden rise in 404s, you know there's a problem as well. There are many status codes, and you can look them up on the internet if you need. Anyway, so that's good.
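A minimal sketch of the pattern parser queries described above. The placeholder layout has to line up with your Nginx log_format; this sketch assumes the default combined format, so check a real log line and adjust the placeholders if yours differs.

    Extract method and status as labels:
    {job="nginx"} | pattern `<_> - <_> [<_>] "<method> <_> <_>" <status> <_> "<_>" "<_>"`

    Turn it into a graph, one series per method and status combination:
    count_over_time({job="nginx"} | pattern `<_> - <_> [<_>] "<method> <_> <_>" <status> <_> "<_>" "<_>"` [1m])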
Now, looking at a typical log line that you get from an Nginx server (here's a small sample from earlier on), there are many values. There's the IP address making the request, called remote_addr; there's a time, called time_local; here we don't have a remote user, but you might see one sometimes; there is the method, POST; there is the request, the path being requested from your web server; there is the protocol, HTTP/1.1 (you'll see different HTTP versions as you move along); then the status code, which we've seen; bytes sent, 0; the HTTP referer, which we're not seeing here but you may find sometimes; and the HTTP user agent, Promtail in this case, although when I'm using my browser the user agent is usually something more complicated, like Mozilla, AppleWebKit, et cetera. All of those values can be extracted by modifying our pattern. Here's an example where I create labels for remote_addr and time_local; copy that string and replace the whole query. If I look at the labels, it now shows remote_addr and time_local, so I could refine queries on those two values if I needed to. What I'll do now is modify it and add method and status back, so you can see we can use all those values if we want: method was at that position, status at that position, Shift+Enter. Looking at the labels, I'm now seeing status and method again as well, method POST. Now, it's not advisable to create labels for all of these things if you're not actually using them, because it's just not good for performance, but I'm showing you that it's possible. Also, you can rename the labels anything you like; if you prefer remote_address, for example, it now says remote_address. You've got the freedom. The pattern parser is actually really good, and it's very fast as well; in the past you would have used something like a regex in that position, but they say the pattern parser is now the fastest way of doing this, and it looks pretty easy too. So use the pattern parser.

Okay, excellent. Now I'm going to get rid of remote_addr, I'm not going to use it, and I'm not going to use time_local either, but I am going to create a graph from this, with grouping, because I'll use it in a dashboard. So going back to count_over_time, bracket, over one minute, close the bracket; I'm going to sum that into one line, and then group by status. Now I've got a simple graph that is just showing status codes. I don't really care about the method, but if I did, I could add method to the grouping and get extra series down here showing the method and the status. I'm not going to use that. Also, another thing I haven't shown you: you can change the order of the grouping clause by writing sum by (status) and then the remainder of the query inside, and that's the same result. That's an option if you prefer it written that way, and you could equally sum by status, method and job if you wanted to. But that'll do. Now I'm going to use this in a dashboard. Copy it, and let's create a new dashboard, add an empty panel, select Loki, paste that query in, apply, and just save this quickly, calling it Nginx. And we can reduce the range down to 15 minutes, for example.
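The grouped status-code query used in that panel might look roughly like this (same caveat as before about the pattern placeholders matching your log format; the one-minute range is just the value used in this walkthrough):

    sum by (status) (
      count_over_time({job="nginx"} | pattern `<_> - <_> [<_>] "<method> <_> <_>" <status> <_> "<_>" "<_>"` [1m])
    )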
And there we go. I can add a logs panel as well, so that I can see the related log lines. Let's add a panel, change the visualisation from time series to Logs, select Loki, and use the stream selector, job equals nginx in curly brackets, then apply. Position that down below and look at the last five minutes. If I want to know more about something, I can zoom in, excellent, and see the related log lines. Okay, so that's a very quick, basic Nginx dashboard. I'm going to pause the video and create something a little more complicated.

So, I have gone and made an extra panel here which uses the bar gauge, just to create a summary of the remote addresses and how many times they're calling my web server. That's the query there for the bar gauge: a sum of count_over_time over job nginx, with the pattern extracting remote_addr, grouped by remote_addr, and for the time range I'm using the dashboard's range variable instead of one minute. That means that when I change the time picker up here, the numbers reflect how many requests were made in that period, such as the last five minutes. And we can see that one of these IP addresses is making a lot of requests to my server, so I could deny that IP address if I wanted to. Anyway, I'll save that, go back into the dashboard, and just reposition it, like so. The JSON for this dashboard I'll put on my documentation so that you can copy and paste it; it's down there under Sample Nginx Dashboard. Copy the whole lot to the clipboard, go to Dashboards, Manage (I'll save my version first), then Import, paste the dashboard JSON, and Load. That name already exists, so I'll change it to something else, then Import. Okay, so I've got that loaded there, and we can see straight away what's going on with my Nginx reverse proxy.

Just so that you know, my Grafana server is under a DDoS at the moment, so a lot of junk is being sent to the server; we can see that down here. If your Grafana server is on the internet, there's a possibility you might get DDoSed if someone decides to target you. I'm using DigitalOcean, and DigitalOcean has a built-in cloud firewall. For example, under Networking, under Firewalls, you can create a firewall, call it anything you like, set your inbound and outbound rules, and apply it to a droplet. I can apply it to my Grafana droplet, but I've already done that, so I'll just go back and modify my existing one. Right now I have all IPv4 enabled for HTTPS, so I'm going to edit that rule, delete it, and allow just those two explicit IPs to query HTTPS on this server, port 443. Save that. Now, if I go back into Grafana, we'll start to see these numbers dropping. I'll pause the video for a moment. All these extra IP addresses on the right here are now being blocked, except for the two I explicitly allowed in my rule: one of them is the machine I'm creating this video from, and the other is my MySQL server. You can see now that the graph has gone down. So there we go, that's one of the things that happens with servers on the internet, they get DDoSed occasionally. Excellent. If I look at the last one minute, we can see there are fewer remote addresses, and eventually it will be just the two IPs that I've explicitly allowed. Excellent.
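The bar gauge panel described above uses a query roughly like the one below. It assumes the same nginx job and pattern layout as earlier, and uses Grafana's built-in $__range variable so the counts cover whatever time range the dashboard is showing:

    sum by (remote_addr) (
      count_over_time({job="nginx"} | pattern `<remote_addr> - <_> [<_>] "<_> <_> <_>" <_> <_> "<_>" "<_>"` [$__range])
    )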
27. Install Prometheus Service and Data Source:
Okay, so now we'll start looking at the Prometheus data source, and for that we'll need to install a Prometheus service. I'm going to install it on my existing Grafana server; you can install it on another server if you like, but to keep it simple I'll use the Grafana server. SSH onto your Grafana server and run sudo apt install prometheus. It's much simpler to install than Promtail, because the package already exists in the APT cache. Enter, yes. That has already set up a Prometheus service for us, and we can check it with sudo service prometheus status; it's already running, we didn't even need to start it, and if I press the right arrow we can see some of the latest logs. Looking at the diagram, what apt install prometheus has done is set up a Prometheus service for us, plus also a node exporter. It has also created a user called prometheus, so we can inspect those: id prometheus shows a user called prometheus with an assigned group prometheus. We can see what processes that user is running with ps -ef and grep for prometheus, and it's running two processes, one called prometheus and one called prometheus-node. We can also see which ports those processes have opened. On Ubuntu 20.04 we can use a program called ss, which is very similar to netstat, with the options for listening TCP sockets and processes, grepped for prometheus. There we go: it tells me that prometheus-node is using 9100 and prometheus is using 9090 (it's word-wrapped, so it's quite hard to see that it's actually one line wrapped onto two).

Now, in the last video I set up a firewall using the cloud firewall in DigitalOcean, and I'm no longer going to use iptables to manage firewall rules; I'm going to use the firewall in the DigitalOcean UI, as it's much easier, and any cloud provider will have something similar. If I look at the rule set I created for my Grafana server, I only have certain ports open, so I don't need to block 9090 or 9100; they are already blocked from outside. If I wanted to, I could open up 9090 and allow all IP addresses, but I'm not going to do that; I'll leave it restricted so it can only be called internally by Grafana. If you want to see whether the HTTP endpoint of Prometheus is running, you can type curl http://127.0.0.1:9090, press Enter, and it returns a small HTML response. If you want to look at something more detailed, you can add /metrics, and there we go: these are all the metrics being returned from the Prometheus endpoint. The same goes for 9100/metrics; there are metrics being returned from the node exporter as well. So both of these services are running, and both of them have a metrics endpoint you can call on those ports to get values back.

As you can see, setting up the Prometheus service with its own collector, the node exporter, is currently much simpler than setting up Loki with a Promtail service. Both stacks have their own query language: Prometheus has PromQL and Loki uses LogQL. Another difference between the two: Prometheus is about numbers and values, whereas Loki is about reading log files that contain strings. Also, the Promtail service pushes data to the Loki service, whereas Prometheus requests (pulls) data from the node exporter; you can see that in the direction of the arrow I've drawn in the diagram. So there are some important differences there.
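To recap the commands used above as a rough sketch (the apt package name and the default ports 9090 and 9100 are the standard Ubuntu ones, but verify them on your own system):

    sudo apt install prometheus
    sudo service prometheus status

    # the package creates a 'prometheus' user running two processes
    id prometheus
    ps -ef | grep prometheus

    # which ports are they listening on?
    sudo ss -tulpn | grep prometheus

    # check the metrics endpoints locally
    curl http://127.0.0.1:9090/metrics   # the Prometheus service itself
    curl http://127.0.0.1:9100/metrics   # the node exporter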
Now that we can see all of that is running, we can connect to it in Grafana by setting up a Prometheus data source. Let's do that now. I've logged on to my Grafana UI: Data sources, Add data source, Prometheus, which is near the top; select it, set the URL to http://localhost:9090 (or http://127.0.0.1:9090, it's the same), Save and test, and the data source is working, excellent. If we go into the Explore tab, Prometheus now exists as an option, and we can select it. Prometheus has its own metrics browser, and there is a whole lot of information in there. Prometheus is quite a sophisticated monitoring tool; you can run node exporters on many servers, all pointing to one Prometheus server, and we can read all of that through Grafana. Looking at the many options here, some of them start with node_ and others start with prometheus_. The ones starting with prometheus_ come from the Prometheus service itself, whereas the ones starting with node_, the TCP metrics for example, with the instance on port 9100 in their labels, come from the node exporter process. Normally you would be reading stats from the node exporters, and we'll add a node exporter on another server later, but if you want to know about the actual Prometheus service, you can query that as well. I recommend looking through the metrics browser and seeing what a lot of those metrics do, but we'll install a proper dashboard for Prometheus in the next video, which will make a lot of that much easier, because there's just a lot to choose from. One good one to check now is up, just the simple word up. I'll select up for both instances, 9090 and 9100, and we can see it returned from both metrics endpoints, showing the job, with a value of 1, which is the same as saying true. So both of those services are up. The graph is quite simple, but that's PromQL; it's very similar to LogQL, it is different, but you build queries in much the same way. Okay, excellent.

28. Install Prometheus Dashboards:
So, provided we have the Prometheus data source set up and we can query Prometheus, and we can query the two different kinds of jobs, node and prometheus, we can continue. We'll install two dashboards in this video: one for the Prometheus service, which is that one there, and it will tell us about the performance of the actual Prometheus service; and a dashboard for the node exporter, which is that one there. The node exporter is useful for telling us about the performance of our server as a whole, and it has many statistics that are very useful to watch over time. We can install node exporters on many servers, and we'll do that in one of the next videos. Going to Configuration, Data sources, Prometheus, and the Dashboards tab here: our first dashboard will be this one, Prometheus 2.0 Stats. We're going to import that. Don't install the Prometheus Stats dashboard, that's for the older versions of Prometheus, and Grafana metrics won't actually work just yet, but we'll talk about that in the next video. For now, that's all we need. If we click it, it takes us straight to a Prometheus dashboard. If we look at any of the queries behind these visualisations, you'll see that the job is prometheus; if we look at another one, job prometheus again, and this scrape duration one is actually grouping by job, and there are two jobs, node and prometheus. So already that is very good.
We can see the performance of the Prometheus service, and just as a reminder of what Prometheus is doing: Prometheus requests data from as many node exporters as you connect to it. If I had 20 servers, I could install a node exporter on each of those 20 servers and set up Prometheus to query the node exporter on each of them, and I would want to know how Prometheus was handling that; this is a good dashboard to help you understand it, with things such as query durations and scrape durations. Anyway, next we'll create another dashboard for the node exporter. Go to Dashboards, Manage, and discard any changes, or save them if you want. We're going to import another dashboard, so let's find the one we want. In my documentation there is a link to the grafana.com Grafana dashboards website, so open that. If we use the filters here, we'll find a whole lot of dashboards, but the particular dashboard I want is 11074, and there's a link for it there: Node Exporter for Prometheus Dashboard. It's updated recently and you can read about it there. Anyway, all I want is the ID, which we already have. Put that into Import and Load: Node Exporter for Prometheus Dashboard, select the Prometheus data source, and Import.

This is a very impressive dashboard, and it's telling us everything about our server; it gets that information via the node exporter. So that's the node exporter there, installed on my Grafana server; the Prometheus service requests data from the node exporter and makes it available to the Prometheus data source, and that's what we're seeing displayed here. I only have one node exporter configured in the Prometheus configuration, so I'm only seeing the one instance, localhost:9100, and I can make that selector a bit smaller since I only have one server right now. But there's really good information here about the CPU, disk space, network traffic and memory, and some other graphs showing the information slightly differently: TCP ins and outs, open file descriptors. It's all very good information if you want to look at the performance of your servers as a whole. You can install node exporters on many servers, and once you've set up the scrape targets in Prometheus, as we will do, they show up here as new instances. The information used in these visualisations comes from the job node. If we go to Explore (discard any changes) and open the metrics browser for job node, most of those metrics start with node_; if I press prometheus, most of those metrics start with prometheus_. There are duplicates, such as the ones prefixed with go_ and a few others, but of those two dashboards, the node exporter one focuses mainly on the node job and Prometheus 2.0 Stats focuses mainly on the prometheus job. These things become more and more useful the longer your services have been running, as you can see. Anyway, in the next video we'll set up a dashboard for the Grafana service, which is much like the Prometheus dashboard but specific to the Grafana application as a whole. Excellent.

29. Setup Grafana Metrics Prometheus Dashboard:
Okay, so let's look at the Grafana metrics Prometheus dashboard now. Navigate in Grafana to Configuration, Data sources, Prometheus, Dashboards, then Grafana metrics, and import that. This is not going to work straight away; if we look at it, there are lots of things missing.
If we look at one of the visualisations, it's looking for job equals grafana, and the same for another one, the job is grafana. If we go to the Explore tab, discard, Prometheus, metrics browser, and look at job, we only have jobs for node and prometheus. So that dashboard needs a job called grafana. Now, the Prometheus service and the node exporter both provide endpoints that expose the metrics each of those services is meant to expose, and on your Grafana server you can view them: curl http://127.0.0.1:9090/metrics, Enter. There's a lot of information in that response, but that is the information you will find when you query the prometheus job. For example, if I scroll up, there's one of the prometheus_tsdb storage metrics; I'll highlight it so it copies to the clipboard, and if I type it into the metrics browser, we can see it's one of the things we can query in Prometheus, along with its value; it's a key and value kind of setup, one line for everything. The same goes for node: if I curl port 9100/metrics, we get all the keys and values from the node exporter. Scrolling up, there's node_systemd_unit_state, for example; do a search for that and there we go, node_systemd_unit_state, job node.

Now, it just so happens that the Grafana server, when it's running, also exposes its own metrics endpoint, and that is at port 3000/metrics. There we go, a whole lot of Grafana metric information that we can pull into Prometheus. We'll create a new scrape target so Prometheus also queries Grafana's metrics endpoint. So we need to open up the Prometheus configuration file, which is in the folder /etc/prometheus: cd /etc/prometheus, ls -lh. This is where Prometheus was installed, and prometheus.yml is the configuration, so sudo nano prometheus.yml. If we scroll down to the scrape_configs section, and a bit further, there are two scrape configs, one with job_name prometheus and another with job_name node. This is what we're seeing for the prometheus job: it gets its information from localhost:9090, scraping /metrics, and the node job gets its information from localhost:9100/metrics. What I'm going to do is create another job for Grafana, called grafana, that takes its information from localhost:3000. Put your cursor at the bottom and, following the same layout as the existing jobs (make sure the whitespace is right), add a job_name of grafana, then below it static_configs with a target of localhost:3000. Very good. You can delete the commented lines if you want; in nano that's Ctrl+K to delete a line. Anyway, it's now job_name grafana, and the static config is localhost:3000, which is the default port of Grafana when you first install it, all inside the scrape_configs section of prometheus.yml. Ctrl+X to save, yes to save the buffer, press Enter. Restart Prometheus and double-check its status; it's running, Ctrl+C to get out of that. Now, if I just refresh this whole screen (right-click and press Refresh) and then open the metrics browser and look at job, we can see there's a new job for grafana. So if I go back into the dashboard now, Dashboards, Manage, Grafana metrics, okay, we're now starting to get data.
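The extra scrape job added above might look roughly like this in /etc/prometheus/prometheus.yml (the indentation has to match the existing jobs in your file, and localhost:3000 assumes Grafana is running on its default port on the same server):

    scrape_configs:
      # ... existing 'prometheus' and 'node' jobs ...
      - job_name: grafana
        static_configs:
          - targets: ['localhost:3000']

    # then restart and check the service
    sudo service prometheus restart
    sudo service prometheus status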
So those queries are actually starting to work. If I search over the last five minutes, we're getting most of the information. Let's have a look at what's wrong with the HTTP status codes panel. The first thing is the old metric name, http_request_total; I've replaced that with the newer metric name, which will be on my documentation, and we now get a graph. Two things have changed in the later Grafana version: the status code label now has an underscore between the two words, and the metric is now grafana_http_request_duration_seconds_sum. Apply that, and there we go, a new graph showing the requests per second and the different status codes. Now let's look at this one here, Prometheus alerts; press e to edit it. The alerts metric no longer exists, so delete it, open a bracket and start typing alert, and you'll get suggestions of what it could be. I'm guessing that it's grafana_alerting_active_alerts; click out of that and it looks right, there are 0 firing right now. Apply that, so that one works. Now the last one, most used handlers: edit it, and it's a topk by handler over http_request_total, so once again change http_request_total to grafana_http_request_duration_seconds_sum, and there we go, we start to see a table panel, so we can apply that too. Okay, we now have information coming through. This is a very basic Grafana metrics dashboard; if you search the official Grafana dashboards website there may be other options to choose from. Anyway, that quickly shows what happens when you try to install a dashboard in Grafana and things fail to work because of later versions: you have to apply a bit of guesswork, and you get better at it with experience. Save that, overwrite. In the next video, we'll install the Prometheus node exporter on our MySQL server. Excellent.

30. Install Second Prometheus Node Exporter:
Okay, so now let's install a second node exporter on another server and read its information using the Prometheus service. This Node Exporter for Prometheus dashboard has been running for a few days now, and it's very impressive, full of information, but it's only looking at one instance, my localhost:9100. What I'll do is configure the Prometheus service to also query a node exporter on another server. So in this video we'll install the node exporter on another server, the MySQL server, because we already have it from early on in the course, and then we'll see statistics for that server in this dashboard as well. Log on to your other server; your MySQL server is a good choice for this. So I've logged on, and I'm now going to install the Prometheus node exporter. I don't need to install Prometheus itself this time, it's just the node exporter component I want, so: sudo apt install prometheus-node-exporter, yes. We can check its status, because it will already have been started: active (running), excellent. It has also created a user called prometheus, so we can inspect that; I'll just copy the whole lot, and there we go, id prometheus shows the prometheus user. That user is running one service, prometheus-node, and if I just press Enter there, prometheus-node is listening on port 9100. So right away, that service is accessible via port 9100 on that server.
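To recap the install on the second server, a rough sketch (prometheus-node-exporter is the Ubuntu package name, and 9100 is its default port):

    sudo apt install prometheus-node-exporter
    sudo service prometheus-node-exporter status

    # it runs as the 'prometheus' user and listens on 9100
    id prometheus
    sudo ss -tulpn | grep 9100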
Okay, so that address, the IP address of your MySQL server, followed by :9100/metrics, and there we go, the metrics coming from my MySQL server. My MySQL server doesn't have a dedicated cloud firewall, but I do have iptables installed on it, and I'm using that to block the MySQL port 3306 to everything except my Grafana server. If you want to continue using iptables, you can create some rules that only allow your Grafana server (or wherever Prometheus runs) to access port 9100 and drop everything else. I'm going to set up a dedicated firewall in DigitalOcean this time instead. So, in my DigitalOcean Networking tab, Firewalls, I'll create a dedicated firewall for my MySQL server. If you're using AWS, you would have had a security group for your server when you created it, and it's a very similar process; this is actually much easier to use than iptables, so I recommend using the firewall your cloud provider gives you to manage access to your servers. I want to create a new rule: custom, port 9100, and I only want to allow my Prometheus service to access it, and that is running on my Grafana server (it could be running on its own server if you wanted). The IP address of my Grafana server is that, so I'll allow just that IP address to query port 9100. Since I'm running MySQL on that server as well, I'll add another rule for MySQL, port 3306, also only allowing that IP address. And for SSH, I'll configure that too: I'm only going to allow the IP address of the actual machine I'm making this video from to access SSH on port 22, and, what is my IP, there we go. I'm going to call this firewall MySQL, apply it to my MySQL server, there, and create the firewall. So, there we go; I recommend using the firewall option provided by your cloud provider, but you could also use iptables to restrict access to ports from certain IP addresses if you prefer that method.

Continuing: I should no longer be able to access that port over the internet, so if I refresh that page it will eventually time out, but I should be able to access it from my Grafana server, where my Prometheus service is running. I've logged onto my Grafana server, the Prometheus service is running, and I should be able to access it: curl, the IP address of my MySQL server, port 9100, /metrics, and there's a response. There we go, my Prometheus service can access the node exporter on that server once I configure it, and I can see that the firewall I've set up in my cloud provider is working as expected. Okay, so now to go onto the Prometheus server and configure a new scrape target that will pull the metrics from that new node exporter. On my Grafana server, where the Prometheus service is running, we're going to edit prometheus.yml: sudo nano /etc/prometheus/prometheus.yml. If I scroll down to the scrape_configs section, and a bit further, there's the job named node. We already have one target there, localhost:9100, and we're going to add another target for this other server: targets, the MySQL server's IP address, colon 9100, and finish that off. Okay, so I now have two static configs for the job node, and it will read the metrics from that server as well. Ctrl+X to save, yes; we need to restart Prometheus, and we'll check its status, and that looks good.
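The node job in /etc/prometheus/prometheus.yml then ends up looking roughly like this; the second address is a placeholder for your own MySQL server's IP (you could equally use its private VPC address, as discussed below). After editing, restart Prometheus as above.

      - job_name: node
        static_configs:
          - targets: ['localhost:9100']
          - targets: ['<mysql-server-ip>:9100']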
Ctrl+C to get out of that, and go back into the node exporter dashboard in Grafana and just refresh the screen. I now have another server showing up down here, hostname mysql; it picked up the host name automatically, and there are statistics about it. So we'll start to see information about my MySQL server; it's selected there, and I can filter to either server by choosing it. In the overview rows down here we can see the resources: on the Grafana server I've used 19% of the disk space; if I change to my MySQL server, I can see I've used 14% of the disk space, and I can see network traffic. There'll be more data as time goes on; if I choose localhost, I can see I have much more data for the local node exporter on that server. So that's where we are now: one Prometheus service, running on my Grafana server because that was a convenient place to put it (it could be on its own server if you need), and two node exporters. You can go and create as many node exporters as you like and just keep adding targets in your scrape config under the job named node; you can have as many as you'd like.

Another consideration: when I added my second scrape target, if I look at it again, I added the target as the external IP address with port 9100. I could instead have set it up using the VPC IP address. If I go to VPC here and look at the servers in my Amsterdam VPC, if you remember, I could have used the private IP address instead of the external IP address and configured the firewall rule for that as well. That's a better way to do it if you're lucky enough to have servers on the same VPC or network, but I'm showing you how to do it across the internet in case you need to, for example when your servers are not with the same cloud provider. A further consideration is that if your servers are not on the same cloud provider and you can't set up a VPC, you should encrypt the traffic as it travels across the internet. For that, you could set up an Nginx reverse proxy on that server: create a new domain name, for example mysql.sbcode.net, point it at the IP address of the MySQL server, get an SSL certificate, set up an Nginx reverse proxy, and add a location called /metrics proxying to localhost:9100/metrics. So be aware: if your servers are on the internet, the data should be encrypted, and using an Nginx proxy is a good way of doing that. Okay, excellent. That's it: we have a Prometheus data source, one Prometheus service, and two node exporters, and you can add as many as you'd like. Excellent.

31. Install InfluxDB Server and Data Source:
Okay, so now we'll set up an InfluxDB data source, and for that we'll install InfluxDB version 2; that's what we'll set up in this video. In the next video, we'll set up a Telegraf collector that will collect data using various plugins and store it in the InfluxDB database, and then we can read that through Grafana. Normally I would install InfluxDB on the same server as my Grafana server, and that was the case when I was using InfluxDB version 1, but version 2 is a much bigger system now, so it warrants being on its own server in a way. So in this video I'll set it up on a new server. I'm using DigitalOcean, and I'm going to create a new droplet: Ubuntu 20.04, basic, just the $6 option is good enough, in Amsterdam, and I'm going to use my SSH key; you can create a password if you need.
I'm just going to call it influxdb, for example; you can call it anything you like. Create droplet. Okay, and there's the IP address it gave me; copy that. I'll quickly set up PuTTY: a session called influx, set my key since I'm using an SSH key, set the appearance so the text is larger, back to Session, and just press Save. There it is; open it up and log in as root. I've logged onto my new server that I'm going to use for InfluxDB, and its IP address is that. I'm also going to use the VPC network in this example, so the Grafana and InfluxDB servers will communicate across the private network.

Next, let's install InfluxDB on that server. The install commands you can find in this link, and down here I'll be installing version 2.1.1, which is the latest version at the time of making this video. If there are newer versions when you watch this, you can try installing those, but all my examples and documentation will be for version 2.1.1. I'll also be selecting Ubuntu & Debian. That's a lot of commands there, and I've got them in my documentation already; I recommend running them one line at a time. So the first line: copy, then on my server right-click to paste. Good. The next line: copy to clipboard, right-click to paste. Then sudo apt-get update, and sudo apt-get install influxdb2, Enter, yes. Normally it's not started by default, so let's verify that with service influxdb status: it's loaded, but it's not active. So let's start it, Ctrl+C, start, and check the status again: there we go, active (running), Ctrl+C.

Now, that will be running on that IP address at port 8086, and there's a fancy user interface that comes as part of InfluxDB 2. To get started we need to create a user; in a lot of ways it's very similar to Grafana. I'm going to create a user called grafana, and for the password I'm just keeping it simple, but you can make yours very complicated. Then put in an organization name, anything you like; I'm just going to put in sbcode. And a default bucket: I'll call that telegraf, because we'll install a Telegraf collector and all the information will be stored in that bucket called telegraf. Continue, and I'm going to press Quick Start. You'll see it's an application similar to Grafana: it's about reading time series data, and you can get that data from many places. Under Data there are a whole bunch of instructions on how to get data from various sources, but more on that later. Buckets: this is important, the telegraf bucket is the default bucket we just created, and we'll be reading from it in Grafana. If you click it, we can start to see some measurements on it; Submit, and you start to see some graphs over the last five minutes. What we're seeing here are just default properties about the telegraf bucket; in the next video we'll set up a Telegraf collector and have a lot more properties we can query. But before we add that functionality with the Telegraf collector, we'll set up the data source in Grafana, so we can at least start seeing some of this data in Grafana.
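To recap the install and startup, a rough sketch is below. The exact repository and key lines come from the InfluxData downloads page for your OS version, so copy those from there rather than from here; the apt package name for version 2 is influxdb2.

    # repository setup: use the current lines from the InfluxData downloads page, roughly:
    wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
    echo "deb https://repos.influxdata.com/ubuntu focal stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

    # install and start InfluxDB 2
    sudo apt-get update
    sudo apt-get install influxdb2
    sudo service influxdb start
    sudo service influxdb status

    # the UI and API then listen on port 8086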
So go into Data. First we need to make sure we have a scraper, and we do: one has been created for us by default. You can call it anything you like, I'll call it my scraper, it doesn't really matter; what's important is that there is one scraper there and it's pointing at the metrics URL. Under API Tokens we have one token, grafana's Token; we'll need that, so Copy to Clipboard. Your token will be different; that's mine. Now go into Grafana, go down to Configuration, Data sources, and we'll add a data source: InfluxDB, select that. For the query language, we're using InfluxDB 2, so we need to select Flux. For the address, I'll be using the VPC address of my InfluxDB server: if I go into DigitalOcean, Networking, VPC, and look at my Amsterdam members, the InfluxDB server's private IP address is that one. So I'll be connecting to my InfluxDB server using the internal IP. You could use the external IP, or if you had installed InfluxDB on the same server you'd be using 127.0.0.1 or localhost, but I'm using the internal 10.133 address on port 8086. Leave the server defaults, and we're not using Basic Auth. Organization: sbcode is the one I used. The token is what I just copied, so I paste that in; it's the token I got from the grafana's Token page, and it's essentially a password that will be used. Default bucket: it doesn't matter too much what you write here, but I'm going to write telegraf. Save and test: okay, we are reading InfluxDB, three buckets found (sometimes it doesn't work on the first go). Now we can go to Explore and do a simple test: select InfluxDB and run the sample query, show buckets, and there we go: telegraf, _tasks and _monitoring. Those two are system buckets we won't be using; we'll be using the telegraf bucket. If you can see the telegraf bucket when you use Explore with InfluxDB and run the sample query, we can continue.

Okay, so I have connected to the VPC IP address, the internal 10.133 address. Now I'm going to configure the firewall, because the server also has a public address, and I want to restrict access so that only the machine I'm using to make this video and Grafana's internal VPC address can reach it. If I look at my Grafana server, it has an internal IP address as well; I'm going to copy that and just put it in my diagram there. So I'll set up the firewall so that only my Grafana server can access the InfluxDB server on 8086, plus my own computer. Firewalls, and I'll create a new firewall, influxdb-firewall for example. For port 22, I'm going to remove the defaults and just use my own IP address, which happens to be that, so that I can SSH. Then a new rule for custom port 8086: I'll allow my own IP address, so that I can visit the UI in a browser just like I'm doing here, and also allow Grafana's VPC address, which was that one there. Enter. I'm not going to change any outbound rules, I'll apply it to my InfluxDB droplet, and create the firewall. If you're using AWS or some other cloud provider, you'll have a similar process for creating firewalls; on AWS they're called security groups. If you don't want to use a firewall like that, I've created some iptables rules in the documentation that you can adapt: you accept from the particular IP address or domain on port 8086 (you can have multiple accept statements), drop everything else, and then just verify that it all exists in the iptables list.
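If you do go the iptables route instead of a cloud firewall, the rules might look roughly like this on the InfluxDB server; the two source addresses are placeholders for your Grafana server's VPC IP and your own workstation IP.

    sudo iptables -A INPUT -p tcp -s <grafana-vpc-ip> --dport 8086 -j ACCEPT
    sudo iptables -A INPUT -p tcp -s <your-workstation-ip> --dport 8086 -j ACCEPT
    sudo iptables -A INPUT -p tcp --dport 8086 -j DROP

    # verify
    sudo iptables -L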
Okay, so going back to Grafana, we should verify that the firewall I set up hasn't stopped our ability to query: Grafana, Data sources, InfluxDB, and just do a quick test; there we go, three buckets found. Going back to the Explore tab, run that sample query again, and we can still see the buckets. And the InfluxDB pages should still work in my browser: scrapers, the telegraf bucket, sources, excellent. In the next video, we'll set up the Telegraf collector to start collecting statistics about the actual server that I've installed InfluxDB onto. Excellent.

32. Install Telegraf and configure for InfluxDB:
Okay, in this video we'll set up a Telegraf collector, which will collect stats about the operating system of the server I installed InfluxDB onto and store them in InfluxDB, in the bucket we've named telegraf. It's going to be installed locally on the same server, so it will push to 127.0.0.1:8086. Now, in InfluxDB right now, if I look at the buckets and click telegraf, I can explore it, and there's already a whole lot of information there: we can already read the internal statistics that the default scraper is writing into the telegraf bucket. The Telegraf agent that we install will push a whole lot of extra statistics on top of that. But just to show that we can already read data in Grafana from these existing statistics: take, for example, a memory-stats measurement such as allocated bytes; we can see there's a graph, and we can view that data in Grafana right now, without actually installing Telegraf, because this data is already present in InfluxDB. Click the Script Editor, and that Flux script we can just copy, go into Grafana, and in the Explore tab select InfluxDB and paste it there. Make the time frame a little smaller, for example, and you'll begin to see data. So we're reading that in Grafana now: everything we see in InfluxDB, we can see in Grafana. And mostly the reason for that is the API token I'm using, grafana's Token.

Now, something I should have done in a previous video I'll fix now, and that is to create a read-only token. If we look at grafana's Token, it has a lot of permissions: read and write on authorizations, buckets, dashboards, orgs, sources, many, many things. That's an admin token, and we shouldn't be using an admin token for Grafana to query InfluxDB. So what I'll do is create a new API token, a read/write API token, and in fact select only Read, and the description will be grafana read only; that's what I'll call it. Save that, grafana read only, copy it to the clipboard, go back into the Grafana data source for InfluxDB, and down here where it says Token, just reset that and replace it with the new token. Save and test. Now I can only see one bucket, and that is the telegraf bucket. Going back out into Explore, sample query, show buckets, I can only see the one telegraf bucket. So going forward I'm going to use this read-only token in Grafana; I'm no longer going to use the admin token, and that's better practice. Going back to Explore, if I copy that Script Editor query again, it should still work; there we go, there's some data. Just be sure that you can still do that with your read-only token. Anyway, if we look at the values we can query (switch back to the query builder), there's quite a lot in there, but normally you want more.
If you go to the Data tab and look at the Sources page, there's a whole lot of things in here that you can choose from. In this video I'll install the Telegraf collector process and set it up so that it collects data about cpu, disk, diskio, mem, net, processes and a few other things. There are millions of things to choose from, but I'm just going to show you a very quick way to get all of those in. The first thing, though, is to install the Telegraf process, and we can install that from the downloads page, so open that up. It's the same page as in the last video, just scrolling down further: Telegraf, the open-source data collector. I'm installing Telegraf version 1.20 for Ubuntu & Debian; there are a few choices there, and those are the instructions. I've copied them onto my documentation as well. First line: logged onto your InfluxDB server, paste that in; okay, that's downloaded. Now install it using the Debian package manager. Then sudo service telegraf status: it's active (running), but it has an error, 401 Unauthorized. We'll solve that problem now, so Ctrl+C to get out of that.

The Telegraf process needs permission to push into the InfluxDB database. We could create a specific token for that purpose, but instead I'll show you another way of doing it. If we go to Telegraf here and select Create Configuration, we get this wizard; I'm just going to click System there, continue, and call it system. It's going to create a dashboard in InfluxDB querying these properties: Create and Verify. It has just created a token for us, which we'll use in a moment, and we can ignore the rest of this page, so press Finish. Under API Tokens the new token is down here, with write access to the telegraf bucket, so it has the required permissions, and we'll use that token. But before that, let's modify the configuration for Telegraf. Back on the server, cd into the new Telegraf folder, /etc/telegraf, and if we do ls there are three files in there; telegraf.conf is the one we want. I'm going to create a backup of that file: cp telegraf.conf telegraf.conf.bak, and that's just a backup you can refer to later if you want; ls -lh now shows four files. Now delete telegraf.conf, and we'll create a new one: sudo nano telegraf.conf. On my documentation I have a default configuration setting up just the minimum we need: the InfluxDB v2 outputs plugin pushing to the local port 8086, the token (delete the placeholder there and put yours in), the organization, and the bucket. Copy that, and if you just right-click it will paste in; don't worry about the word wrapping, that's just what nano does when a line is too long to fit the width. For the organization: when I created my first login I created an organization as well, and I called it sbcode; you can verify yours in the UI under your user, and yours will be different. Bucket: telegraf. Then we have our inputs for cpu, disk, diskio, mem, net, processes, swap and system. Ctrl+X, yes. Now let's restart Telegraf, give it a moment, and then check its status. Okay, and if I scroll along, there are no errors. Go into InfluxDB again, go to Explore, and we start to see some extra measurements: mem, net, processes, swap, system.
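A minimal sketch of the telegraf.conf described above. The organization name (sbcode) is the value from this course's setup, so substitute your own, and the token should be the one the Create Configuration wizard generated, with write access to the telegraf bucket.

    [[outputs.influxdb_v2]]
      urls = ["http://127.0.0.1:8086"]
      token = "<YOUR_TELEGRAF_TOKEN>"
      organization = "sbcode"
      bucket = "telegraf"

    [[inputs.cpu]]
    [[inputs.disk]]
    [[inputs.diskio]]
    [[inputs.mem]]
    [[inputs.net]]
    [[inputs.processes]]
    [[inputs.swap]]
    [[inputs.system]]

    Then: sudo service telegraf restart, followed by sudo service telegraf status.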
So if I type in processes, for example, I start to see a whole lot of extra fields. Let's try another one — cpu. Under cpu, de-select everything and pick usage_user; that's a good one. Submit that and we can see the CPU usage_user. Now, this same query — open the Script Editor — we can copy and put into Grafana, click out of it, and now I'm seeing the same information in Grafana for the last five minutes. We can start creating dashboards with that. I'm not going to show you how to create a dashboard in this video; we'll do that in the next one. What I'll do in the next video is rebuild this System dashboard — it was created a moment ago when I went to Load Data, Telegraf, pressed Create Configuration and chose System, so it's under Boards, System. We'll recreate that in the next video. Okay, excellent. 33. Create A Dashboard For Linux System Metrics: Okay, so in this video we'll create a Linux system dashboard. Provided you have the System dashboard running in InfluxDB, we'll reproduce it in Grafana. This dashboard works because we set up Telegraf to collect information about those inputs, plus a few others, and it actually looks quite good in InfluxDB — so let's get it into Grafana. We can look at the query behind a cell, for example, by clicking the gear icon and pressing Configure. In InfluxDB that is a single stat, and this is the Flux query, so we'll copy it into Grafana. In Grafana: Dashboards, Manage, New dashboard, Add an empty panel. Select Stat, select InfluxDB, and paste the query in. Now, the difference here is that we need to replace this v.bucket with "telegraf", the name of the bucket we set up. The cell title here is System Uptime, so under panel options set the title to System Uptime. It also shows days, so under standard options I can set a custom unit of days — 10 days — then resize it to roughly the same size and close out of that. It's the same for all the cells going across here; it's a similar process. The next one we can recreate is the memory usage cell, so let's do that in Grafana. Configure — it's a graph plus single stat in InfluxDB — copy the Flux query, go into Grafana, add a panel. Once again it's just going to be a Stat, because a Stat in Grafana includes the graph anyway. Select InfluxDB, paste, and remember to change the bucket to telegraf. Here they're using a percent sign, so set the standard unit to percent, and title it Memory Usage under panel options. Apply, and move it to roughly the same position. Let's look at another one — System Load. These remaining ones are all simple graphs, so System Load is a good example: Configure, copy the Flux query, into Grafana, empty panel, leave it as Time series, InfluxDB, paste, change the bucket to telegraf, title it System Load and apply. If I created all of those panels they would sit down there, but I think you get the point, so you can continue and reproduce the rest yourself. What I've done is already completed that, and in the online documentation, in the InfluxDB section, I've included the full dashboard JSON, so you can just copy it. Either save or discard this draft, then Dashboards, Manage, Import, paste the JSON in and Load. I've called it InfluxDB System — you can call it anything you like — Import, and that's the dashboard there.
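If you'd rather build those panels by hand than import the JSON, the query behind the uptime Stat panel looks roughly like this. It's a sketch based on Telegraf's system input — the cell in InfluxDB may use slightly different functions, so treat the measurement and field names as assumptions:

  from(bucket: "telegraf")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime")
    |> last()
    // uptime is reported in seconds; divide by 86400.0 in a map() if you want days

Paste it into the panel's Flux editor with the InfluxDB data source selected, then set the unit and title as described above.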
A lot of panels created there — excellent. You can improve on it any way you like: reposition things, add even more queries, or delete some. Okay, excellent — that's a good introduction to getting InfluxDB data into Grafana. This is not a course on InfluxDB, but I've shown you enough to help you make your first few steps with it. If you want more examples of using InfluxDB, under Settings there are Templates, and there is the community templates GitHub repository with a lot of examples; you can follow the instructions for each of those, load them into InfluxDB, and use that information in Grafana too if you need it. The steps won't always be easy — you have to read the documentation for each template to make sense of it. But anyway, in the next few lessons we'll do some more InfluxDB: I'll set up SNMP daemons on different servers, and we'll create a dashboard in Grafana where we can analyse the status of those SNMP devices. Excellent. 34. Install SNMP Agent and Configure Telegraf: Okay, excellent. So far we have InfluxDB version 2 installed, we have a Telegraf collector collecting stats and sending them to InfluxDB, and we can visualise that through the InfluxDB data source in Grafana. Now, like the other data sources we've set up in the past, you can have one main service, such as Prometheus or Loki, and many collectors — Promtail, for example, for Loki; for Prometheus it's node exporters, and there are many other kinds of exporters for Prometheus, not just node exporters, plus the Grafana metrics endpoint or any other endpoint Prometheus can read. For InfluxDB, we can have multiple Telegrafs: I could set up Telegraf on my MySQL server and point it at either the VPC IP address of InfluxDB or the external address, or I could have Telegraf on every server I have access to, all sending to InfluxDB. I'm not going to show you how to do that in this video — if you ever want to, it's not hard: you just copy the instructions used for the first Telegraf we installed, set the URL you're sending data to in your telegraf configuration, and choose whichever inputs you want to collect. So there's no need for me to demonstrate that; you can do it in your own time. In this video I'll introduce you to SNMP. SNMP, the Simple Network Management Protocol, is a protocol that's been around since before the internet went public, so it's very old and a lot of devices still support it. It's useful in those cases where you can't install a service yourself — routers, switches or printers — though servers and workstations can support SNMP too. You'll find a lot of hardware in corporate environments where you can't actually install Telegraf or Promtail onto it, because it's not an operating system you control; those devices may still provide data through an SNMP interface. Also, since you probably don't have an SNMP device on your own network that you can use, I'll show you how to install an SNMP daemon on Ubuntu 20.04 so that we can at least get some experience with it. I'm going to install the SNMP daemon on the same server as my InfluxDB server, so log in to your InfluxDB server and we'll install the daemon on it.
So install snmp, snmpd and snmp-mibs-downloader, because we'll use all of those — right-click to paste and Enter, then yes. I'm doing this just so that we can demonstrate SNMP, because you're very unlikely to have an actual SNMP device on your network, and this way of doing it is more reproducible while you're learning. The daemon is normally started when installed, so check its status: active (running), excellent. Now we can do a simple test query, so copy that: snmpwalk, version 2c, community public, host 127.0.0.1 — that's this computer — and a dot, meaning give me everything. And we get a lot of data back. It's important to note that the data returned is showing OID numbers: one way it's shown is iso followed by dotted numbers, then a data type and a value; sometimes instead of iso it might start with a number — it doesn't really matter. What's important is that when we go on to set up the Telegraf collector for SNMP, it will be doing lookups on what are called MIBs. MIB stands for Management Information Base, and MIBs take the form of a name string like that — the Telegraf configuration will be looking up the string versions, such as the sysUpTime and sysName entries, and ifTable as well — so it's important to make sure MIBs work on your SNMP daemon. One way to do that is to edit a file called snmp.conf (note: snmp.conf, not snmpd.conf). So sudo nano /etc/snmp/snmp.conf, Enter, scroll down and comment out the "mibs :" line — it's now hash mibs, commented out — Ctrl-X, save, yes. Now if we do that same query again — version 2c, community public, 127.0.0.1, dot, meaning everything — each line shows the MIB name instead of the raw numbers. So when we set up the Telegraf configuration, we'll be searching by MIB name; you can see one down here. We can even query one of those names directly, so copy it and see if it works: run the query again, get rid of the dot at the end, put in the whole MIB name, press Enter, and it returns exactly what I'm looking for. You can also drop the RFC1213 prefix at the beginning and it still works. That's just a little background on SNMP for you — it's a very large subject, but I'm showing you the beginnings to help you get started. Okay, now let's open our Telegraf configuration again, and if we scroll down to the inputs section — we already have quite a few inputs from the last few videos — we'll add another one for SNMP. Copy what I've got in the documentation — use the icon to copy the whole lot — and right-click to paste it. It's just a minimal inputs.snmp configuration: that's the address we're checking, 127.0.0.1:161, and it uses the UDP protocol. That will do some simple queries of the SNMP daemon we just installed on this local server, the InfluxDB server, and we'll be able to see that information first in InfluxDB and then in Grafana. Ctrl-X to save, yes. Now restart the Telegraf service — sudo service telegraf restart — then check its status: it looks pretty good so far, and I'm not seeing any errors. Now let's go into InfluxDB and see what we've got: go to Explore, and if we look at the measurements section we should start to see some SNMP.
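Here's a condensed recap of this SNMP setup as shell commands, plus the shape of the inputs.snmp block. Treat it as a sketch: the field and table layout follows Telegraf's standard SNMP example, which is close to, but not necessarily identical to, the block in my documentation:

  # On the InfluxDB server: install the daemon, the client tools and the MIB downloader
  sudo apt install snmp snmpd snmp-mibs-downloader

  # Test the local daemon (raw OIDs at first; MIB names once snmp.conf is changed)
  snmpwalk -v2c -c public 127.0.0.1 .
  sudo sed -i 's/^mibs :/#mibs :/' /etc/snmp/snmp.conf   # comment out "mibs :" so names resolve
  snmpwalk -v2c -c public 127.0.0.1 system               # walk the system subtree by name

  # Shape of the extra input appended to /etc/telegraf/telegraf.conf
  [[inputs.snmp]]
    agents = ["udp://127.0.0.1:161"]
    version = 2
    community = "public"
    [[inputs.snmp.field]]
      oid = "RFC1213-MIB::sysUpTime.0"
      name = "uptime"
    [[inputs.snmp.field]]
      oid = "RFC1213-MIB::sysName.0"
      name = "source"
      is_tag = true
    [[inputs.snmp.table]]
      oid = "IF-MIB::ifTable"
      name = "interface"
      inherit_tags = ["source"]
      [[inputs.snmp.table.field]]
        oid = "IF-MIB::ifDescr"
        name = "ifDescr"
        is_tag = true

Restart Telegraf after editing the file, exactly as in the lesson.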
There isn't much there at the moment — it's showing up as just the uptime MIB. Submit that and it shows uptime; looking at the query, that's where the value comes from, and it's named uptime. Also in there we have agent_host, host and source. Source comes through as a tag, because we set is_tag = true, so we can use it much like the host — as the key for the table, if you like. But we're not seeing anything yet for ifTable or ifDescr, and the reason is that by default, when we install snmpd on Ubuntu, the information it returns is quite restricted: it's limited to OIDs under a couple of specific prefixes. What we need to do is change it so that it returns a wider range of OIDs. Very quickly, let's verify it's not returning the ifTable MIB by swapping the snmpwalk for the snmptable command: version 2c, community public, the local IP, IF-MIB::ifTable — and it says no entries. So let's widen the allowed OID prefix in the SNMP daemon's configuration — this time it's snmpd.conf: sudo nano /etc/snmp/snmpd.conf, Enter. Scroll down to where it defines the view: we can comment out that first line, because it won't be necessary, and just remove the last .1 from the other so the view covers a broader prefix. Ctrl-X, yes, Enter. Now restart the SNMP daemon — sudo service snmpd restart — and run the snmptable query again, and we get data. After a little moment Telegraf will have caught up, and we start to see more information under the interface measurement in InfluxDB: all of those values are being returned through ifTable. If I narrow it down, we've got ifDescr and ifIndex, and through ifDescr we can look at each Ethernet device the server has bound. So the table query is now working inside the Telegraf collector and the information is going into InfluxDB. If we copy that script, we can put it into Grafana: Explore, InfluxDB, paste, click out of it — an "unsupported input type" warning on one field doesn't matter, just remove that field for now — run the query, and there we go, the information comes through. We'll use this in the next few videos to create an SNMP dashboard in Grafana, so you should now be seeing that SNMP information inside Grafana through the InfluxDB data source. Just to recap how this is set up: snmpd is serving data that is requested by Telegraf — the Telegraf agent periodically polls snmpd. We can configure Telegraf to query multiple SNMP daemons on different servers, so in the next video I'll set up another snmpd on my MySQL and Grafana servers so that we can see SNMP data from multiple servers through the one Telegraf, and then we'll create a simple dashboard in Grafana where we can view all three servers. Excellent. 35. Add Multiple SNMP Devices to Telegraf: Okay, so in this video I'll add another SNMP agent, and this one will be external. When you connect to an external SNMP agent — whether from Telegraf or any other system — there are a few extra considerations about connectivity, and I'll solve those in this video. I'm going to log onto my MySQL server, and that's where I'll set up another SNMP daemon. So I'm on my MySQL server, and now I'm going to install snmpd — only snmpd, not the snmp or snmp-mibs-downloader packages from the other lesson.
That's because I don't also need to enable MIBs on this other SNMP daemon — the MIB translation from OIDs to names happens on the server that is actually making the request, the one running snmpwalk or snmptable or any of the other SNMP commands. So I only need snmpd, because it just returns the raw SNMP data, and the translation to MIB names happens on the Telegraf/InfluxDB server where the MIBs are already installed. I now need to edit the snmpd configuration — snmpd.conf. Go down until we find agentAddress, comment that out and replace it with udp:161. By default, the snmpd I installed on Ubuntu binds only to the local 127.0.0.1 address on IPv4 and IPv6 — I'm not using IPv6, so those settings don't matter here. With the agentAddress now just udp:161, it binds to 0.0.0.0, so any network device that can reach this server will be able to hit port 161 — but we'll manage that with the firewall. I'm also going to increase the range of OID prefixes the server will return: going down to the view lines, the server will now return anything under the broader prefix, which also happens to include the first one, so you can comment that line out. Remember that the ifTable information sits under its own OID, so to allow the server to return that OID plus everything else we need, we widen the view. Ctrl-X to save, yes, Enter, and restart the snmpd service. Also, if you don't have a managed firewall, there's some iptables information in my documentation you can refer to; I'm going to use a cloud firewall anyway. So: sudo service snmpd restart. Now we can go back onto our InfluxDB server and try a request against the SNMP daemon running on the MySQL server. I'm on my InfluxDB server, so I'll copy that snmpwalk command and replace the address with the VPC IP address of my MySQL server — the virtual private network address; in DigitalOcean that IP address is 10.133.0.4 — and paste it in. This is not going to work straight away, because that server, my MySQL server, also has a firewall set up. So in my firewall settings for MySQL I'll create a new rule: custom, UDP, port 161. I'll remove the default sources and add the IP address of my InfluxDB server, because that's where the request comes from — copy that, paste, Enter, OK, save. All right, UDP requests to port 161 from that server should now be allowed. Back on my InfluxDB server, run it again and I get a response. And notice that despite the fact that I didn't install snmp or the MIB downloader on my MySQL server, I'm still getting MIB names — because the translation happens on the requesting server. It's not critical if that detail doesn't stick; it's just that if you do a lot of work with SNMP, it's good to know what's necessary and what isn't. Now I'm going to go into the Telegraf configuration and add the agent entry for my MySQL snmpd: clear, then sudo nano /etc/telegraf/telegraf.conf, scroll down to the inputs.snmp section, and under agents add another agent to poll — udp:// followed by the IP address of my MySQL server. I'm using the internal private IP, so 10.133.0.4.
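As a rough sketch, the changes on the remote server and in telegraf.conf come down to something like the following. The exact default view lines in snmpd.conf vary between distributions, and the ufw command is just an example for anyone managing the firewall on the host itself rather than with a cloud firewall — the 10.133.x.x addresses are this course's example VPC addresses:

  # On the remote server (the MySQL box): /etc/snmp/snmpd.conf, the relevant lines
  agentAddress udp:161                       # listen on all interfaces instead of 127.0.0.1 only
  view systemonly included .1.3.6.1.2.1      # widened view so IF-MIB::ifTable OIDs are returned

  # Host-level firewall alternative to a cloud firewall (assumes ufw; substitute your Telegraf server's IP)
  sudo ufw allow from TELEGRAF_SERVER_VPC_IP to any port 161 proto udp
  sudo service snmpd restart

  # Back on the Telegraf/InfluxDB server: /etc/telegraf/telegraf.conf, inputs.snmp section
  [[inputs.snmp]]
    agents = ["udp://127.0.0.1:161", "udp://10.133.0.4:161"]   # the local daemon plus the MySQL server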
And that's done — the inputs.snmp section is now making requests to two different SNMP daemons on different servers, one local and one across the network. Ctrl-X, yes, then sudo service telegraf restart, Enter, and check the status for errors — I don't see any, Ctrl-C to get out of that. Now in InfluxDB, go to the Explore tab and we should start to see SNMP data for the other server. There we go — uptime, and it's showing both IP addresses: my MySQL server and the local one. If I submit, two lines are drawn. We can also see the source tag — mysql and influxdb — so the hostname is coming through in the source property. If I move that sideways a little and select source, we see the names of our servers, and I can filter on that source — MySQL, or both. There we go, excellent. That means we can read it in Grafana too: Script Editor, copy that, Explore tab, InfluxDB, paste — I'll just zoom into that section — and we can see the data for both servers coming through, and in the table view there's a new row. Anyway, in the next videos we'll create a dashboard to read this in Grafana. I'm going to repeat on my Grafana server exactly what I just did on the MySQL server, so that I'm getting data from three SNMP daemons — same process, different server, different IP address; that IP address will be my Grafana server's, 10.133.0.3. Excellent. 36. Import an SNMP Dashboard for InfluxDB and Telegraf: Excellent, so here's what we have: Telegraf running on the InfluxDB server, connecting to three different SNMP daemons — one local, one on my Grafana server at 10.133.0.3, and one on my MySQL server at 10.133.0.4. Telegraf requests data from each snmpd over UDP on port 161, and I've created firewall rules on each of those servers to allow the IP address of this InfluxDB server — where Telegraf is running — to connect to those SNMP daemons and make requests. So that works. Just to prove it, from my InfluxDB server I can run snmpwalk — version 2c, community public — against the local machine and get a response; I can also run it against the Grafana server, 10.133.0.3, and against 10.133.0.4, and both respond. Each of the SNMP daemons on those servers was configured to return OIDs under the wider prefix — much more data than the default two prefixes would allow, because it includes those plus a lot more — and the Telegraf configuration is polling all three SNMP daemons. You can verify that in InfluxDB by going to the Explore tab, selecting the interface measurement and searching on source — and I can see my three servers. Excellent. That means I can now import a pre-built dashboard. In my documentation, under importing a dashboard for InfluxDB and Telegraf, there's a full dashboard JSON — copy that to the clipboard, go into Grafana, Dashboards, Manage, Import, paste it in and press Load. It's called SNMP Interfaces. Import. Okay, now I'll set the time range down to the last five minutes. We have some things already set up for us: up here, influxdb, mysql, grafana — it found those automatically, and that's using a dashboard variable, which we'll look into in future videos. There is a table here — press E to edit it.
You can use these queries as reference if you want to create your own dashboards from other InfluxDB data. The important thing here is this map() line: if I delete it, the column headings don't look right, and if I put it back, they're much better. The panels I'm most interested in are the in-octets ones here, one per device. If I press E to edit one of those, it's using map() again down here: delete it and all the series names become long and hard to read, so the map() function is there to make the series names much shorter. It's also using what's called a non-negative derivative, which makes the graph show the difference between each timestamp — if I took that away and zoomed out to six or twelve hours, the line just increases forever, so that's what the non-negative derivative is doing: showing the change per step. Put it back, set the range to five minutes and apply. So I'm just looking at different interface properties on these graphs. For uptime, you can see each of my servers has a different uptime, and I'm doing a mathematical conversion on the result using the map() function — take it away and the value is no longer correct. So use these queries, and the queries from the other InfluxDB dashboard we imported earlier — InfluxDB System — as reference. Note, though, that InfluxDB 2, since it uses the Flux query language, is much more complicated than the old InfluxQL query language. If you look at the dashboards on grafana.com, you'll notice there aren't very many for InfluxDB 2 — I think a lot of people struggle with it — so be aware of that; it's not easy, and you might have to find a specialised InfluxDB 2 course if you want to continue with it. Another thing: I've found it can be quite fragile. If you run queries spanning six hours or more, the InfluxDB server can start to slow down very quickly — not always, but occasionally — and sometimes you have to go onto the server and restart the service. So keep your InfluxDB queries to shorter time spans while you're setting things up or experimenting. Also note that InfluxDB 1 wasn't such a fancy system — it didn't come with an inbuilt user interface like this, and its query language was much simpler — but as you can see, it's come a long way; under Load Data, for example, there are a lot of choices. Just be aware that InfluxDB is quite complicated in itself, so you may need a dedicated course on it. Anyway, there's plenty of information there for you to use as reference if you continue with InfluxDB. Excellent. 37. Create and Configure a Zabbix Data Source: Okay, so the next few videos will be about the Zabbix data source. They'll only be useful for you if you already have a Zabbix server. I'm not going to show you how to install a Zabbix server in this course because there are so many steps involved — Zabbix is a full monitoring system that is similar in many ways to the other systems I've shown, such as Prometheus and InfluxDB, but setting it up is quite a different process with a lot to think about, so I won't demonstrate that here. I do have a course that specialises in Zabbix if you're interested. Anyway, if you do have a Zabbix server and want to visualise its data in Grafana, there is a Zabbix data source plugin and it's very good. So first, we'll go back into Grafana and set up the Zabbix data source.
If you go to Data Sources and select Add data source, Zabbix doesn't appear in the list. What we can do is install a plugin that allows us to set up Zabbix data sources. So go to Plugins, start typing Zabbix, and you get the option there — Zabbix. There's a sample dashboard and some instructions; just press Install. Once installed, it's still not available in the data sources section just yet — we need to go to Config here and select Enable. That refreshes, and it's now enabled. Now if you go to Configuration, Data Sources, Add data source, it should be there, right at the bottom — there it is, Zabbix. Select it and we'll continue setting it up. I have Zabbix already installed, so I have a Zabbix URL; note that it needs to call api_jsonrpc.php — you want your Zabbix URL followed by that PHP script. I've already prepared mine: my Zabbix domain followed by /zabbix/api_jsonrpc.php. Access: Server (default) — that's best — and leave everything else as default. For the Zabbix API details we need to create a specific user in Zabbix that can read data through the API. So go into Zabbix — Administration, Users — and create a user. I'll call it grafana; for the group, No access to the frontend — it's just going to be an API user, it doesn't need to log into the frontend or use the UI. Put in your password — I'm keeping mine simple, you can make it complicated. Everything else is fine; go to Permissions. The role is User — it doesn't need super-admin permissions or anything like that, and it doesn't need access to all the different options in Zabbix. Here it says Access to API: enabled. I don't need to set up any media types. Add — user added. Now we can go back into Grafana and try it out: username grafana and my (very simple) password. It's advised to use trends — that makes the responses from the API smaller — and it will start using trends after seven days; you can change these defaults, but they're pretty good. Direct DB connection: I'm not going to use it, but if you use the Zabbix data source long-term and find performance is slow, you can speed things up with a direct database connection. What you do is create a MySQL data source pointing at the Zabbix database and then select it here — I don't have one for Zabbix in this case, but that's where it would go, and in my notes I have instructions on how to do it. It's very similar to setting up the original MySQL data source we did at the beginning of the course: you log onto the server where MySQL is running, create a specific user with read-only permissions that Grafana can use to connect to the database — there are some example scripts on that page — and you also have to allow external connections for it to work. But I won't be using the direct DB connection here; I'm keeping it simpler. Okay, Save and test — and I'm going to get a timeout: my Zabbix server has a firewall on it that blocks access from all IPs except a few, so I need to add a rule to allow my Grafana server to connect. After some time I get a 504 Gateway Timeout, so I'm going onto my firewall — the DigitalOcean Zabbix firewall. Note that the URL of my API is HTTPS on port 443; I'll show you why.
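While you're adjusting firewall rules, a quick way to confirm the Grafana server can actually reach the Zabbix API is to call it directly from the Grafana server's shell. The URL below is a placeholder for your own Zabbix address; apiinfo.version is a standard Zabbix API method that needs no authentication:

  # Run from the Grafana server; replace the host with your own Zabbix URL
  curl -s -X POST https://your-zabbix-server.example.com/zabbix/api_jsonrpc.php \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"apiinfo.version","params":{},"id":1}'
  # A reply like {"jsonrpc":"2.0","result":"5.2.x","id":1} means the API is reachable;
  # a timeout usually means a firewall is still blocking the Grafana server's IP.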
It's HTTPS because I've set up SSL and a domain name on my Zabbix server — you might not have done that — so I'll be adding the IP rule to my HTTPS rule. Edit rule, add the IP of my Grafana server, press Enter and save. Let's try that again in Grafana: Save and test. Okay — Zabbix API version 5.2; you might be using a different version of Zabbix, but anyway, that works, so that's good. Go to Explore, and up here you have a new option for Zabbix. The query editor for Zabbix, like every data source, is different — once again, this data source has its own way of being queried. You may see information in there depending on how many hosts or groups the API user can see; I don't see anything right now, but in the next few videos I'll demonstrate querying this and we'll set up some dashboards. Excellent. 38. Import Zabbix Dashboards: Okay, the Zabbix data source comes with some pre-configured dashboards, so we can import those. Before you start, make sure you have some hosts set up in your Zabbix server. I have several hosts set up and can see their data in Zabbix — I happen to have installed Zabbix agents on my Grafana server, my MySQL server and my InfluxDB server, so I have three Zabbix agents to query, and all my firewall rules are set up so my Zabbix server can reach each of my other servers and they can reach the Zabbix server on the appropriate ports — but that's a Zabbix detail. If I go to Monitoring, Hosts in Zabbix, I can see the three servers are set up correctly. Now, in Grafana's Explore, with the Zabbix data source selected, when you put your cursor into Group you should see a list of the groups your API user can see from Zabbix. Right now I can't see any groups, which indicates a permission issue in Zabbix for the grafana user I set up for the API. So going into Zabbix: Administration, Users, the grafana user, Permissions — all groups: none. I need to grant this user some groups. Over in User groups, "No access to the frontend" is the group my grafana user belongs to; under Permissions I'll select the host groups I want it to read. I want Linux servers — the group containing my hosts — and also Zabbix servers, because I always run the Zabbix agent on the Zabbix server itself, so I'll be able to see that data too. You can select all the groups if you want, or use custom groups — it depends on your Zabbix setup. Select, give Read access to those groups, press Add, and they're now in the list. I can change them if I want — read-write, read, deny or none — but I'm giving both Read. Update. So: No access to the frontend, and Permissions confirms Read on Linux servers and Zabbix servers, excellent. Back in Users, open the grafana user and check Permissions — it now shows them. Now, if you refresh you may need to wait a few minutes before this updates, even going out and coming back in; once the groups show up in that drop-down, you can continue. If you've worked with Zabbix, you'll know things don't happen instantly — everything you do goes into a queue and it can take several minutes before you see the change; that's just something you learn to live with when using Zabbix. If it's taking a long time, go into Data sources, Zabbix, Save and test, and then try again.
And there we go — I can see both groups now, so that's good. I can do a quick test: Linux servers, select hosts individually or all of them, application CPU, item CPU user time, and there we go, I can see some data. Excellent. Okay, now let's get some dashboards: go to Data sources, Zabbix, the Dashboards tab, and import each of the bundled dashboards. Let's look at them. Zabbix System Status — we can see system status information, so that one looks like it's working straight away. Let's look at the next one: Dashboards, Manage, the Zabbix Server Dashboard. Okay, some panels don't work. This will happen from time to time — it depends on the versions you're using. I'm using Grafana 8 with Zabbix 5.2 and the latest Zabbix plugin, so I'll go through each broken panel individually and correct it. The first one I'll try is the Zabbix server processes panel — press E to edit — it can't find that item, so let's try a different query: "busy" doesn't find anything, so let's try something else — number of processed values per second. I've got something, so apply that. Next, the Zabbix processes CPU panel: the item is CPU idle time, just written slightly differently in my version, so fix the item and apply. Another panel once again maps to number of processed values per second — okay, that's good. It comes back as a graph rather than a table this time; I'm going to say that's fine, though we could change how it's presented — it's using an older version of the graph panel, so I'll put the legend underneath, apply, and do the same with the one above it. Now the required performance panel, in vps — new values per second — which again maps to number of processed values per second, so apply. Uptime: it doesn't give me a clue where it's broken, so I'll just try uptime — no — make the group a wildcard and try again: system uptime, there we go, 35 weeks, apply. And this first one, host name: no clue either, so use the wildcard for everything and pick the system name item, click out of that and apply. I'm honestly not sure what the right mapping for that one is — host, Zabbix, general — and with different Zabbix versions yours may look different anyway. Let's save that and overwrite. Dashboards, Manage, Zabbix Template Linux Server. This one is showing mostly good data: Linux servers, Zabbix servers, hosts influxdb, grafana and mysql — the servers where I've installed Zabbix agents — network interfaces, and so on. Let's look at System Load and see whether we can fix it: host group, application CPU, item Processor load (1 min average), there we go, apply. Excellent — that's the last 15 minutes, very good. Save that and go back to Dashboards home. Okay, so we have some Zabbix dashboards we can now use. Excellent. 39. Elasticsearch Data Source: Okay, let's look at the Elasticsearch data source now. Elasticsearch is another monitoring solution that has become quite popular; for this I'll install version 7.16. It runs on the Java VM, so it needs a minimum of two gigabytes of RAM for the Elasticsearch service. So I'm going to get myself another server: in DigitalOcean, Create, Droplets, Ubuntu 20.04, the Basic $12-a-month option with two gigabytes of RAM, which is perfect, and I'll put it in Amsterdam.
I'm going to use the same VPC I've been using throughout the course and my SSH key, and I'll call it elasticsearch. Okay, create droplet. I've got my IP now — copy that — and open it in PuTTY: elasticsearch, save, SSH, open, accept, and I'm on my new Elasticsearch server with two gigs of RAM. Okay, I'm going to install the deb package from the address in my documentation, and I've already prepared the commands. Paste the first line — that adds the signing key. The next line installs the dependency (apt-transport-https), which is actually already there. Next, save the repository definition and press Enter, then run apt update and install elasticsearch, and check the status. Okay, it's loaded but not running — Ctrl-C — so start it; it usually takes about 30 seconds, then double-check the status. It's running, very good. If you get errors, you can run the journal command on that line to see what they are, but mine's working. We can see that a new user was created called elasticsearch, and we can see what processes it's running — it's running Java — Ctrl-C. And we can test that it's running locally by doing a curl request to port 9200 and seeing that we get a response. There we go: name, cluster name elasticsearch, and a few other things. Now we need to modify the Elasticsearch configuration, because my Grafana server will be connecting to this Elasticsearch server from a different machine, so at minimum I need to allow remote connections. Let's cd into the folder and see what we have: it was installed into /etc/elasticsearch, so ls -lh, and there are a few files. We need to edit elasticsearch.yml: sudo nano elasticsearch.yml. Scroll down and uncomment cluster.name — my-application, left as the default — and node.name, node-1, also left as default. Change network.host to 0.0.0.0 so it binds to all Ethernet interfaces, uncomment http.port 9200 (the default anyway), and down here set cluster.initial_master_nodes to just node-1. Ctrl-X, yes. Those changes are also written out in my documentation. Now restart it: sudo service elasticsearch restart. Okay, that's good — if there are errors, you can inspect them with the line further up. So that's it: a running Elasticsearch server. Now, we don't have any indexes in it yet, and it needs at least one, so let's create an index; I'll call it index1. Clear, run the PUT request — acknowledged: true, index1. Let's view the metadata — there we go, that's all about index1, and that's good enough. Now we can add some data to index1: I'm adding a document containing ABC123, a value and a timestamp. Good. We can view the contents of the index — there's the source: ABC123, the value and the timestamp — so we have some data in our index. We can also list the indices and see that index1 exists. And if you ever want to delete index1, you can run that delete line — I'm not going to. Now let's go into Grafana and create the Elasticsearch data source: Configuration, Data Sources, Add data source, scroll down to Elasticsearch and select it. The address of my Elasticsearch server is http:// followed by the VPC IP address — I'm using the VPC IP and not the external IP address.
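Before finishing the data source, here's a recap of those index and test-document calls as a reference. The index name and field values are just the throwaway examples used here, and the exact JSON body in my documentation may differ slightly:

  # Run on the Elasticsearch server itself
  curl -X PUT 'http://localhost:9200/index1'                      # create the index

  curl -X POST 'http://localhost:9200/index1/_doc' \
       -H 'Content-Type: application/json' \
       -d '{"name":"ABC123","value":"XYZ","@timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}'

  curl 'http://localhost:9200/index1/_search?pretty'              # view the documents
  curl 'http://localhost:9200/_cat/indices?v'                     # list indices
  curl -X DELETE 'http://localhost:9200/index1'                   # only if you want to remove it again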
I'm going to block the external address off with the firewall eventually. So, Networking, VPC, go down to Amsterdam, view members, my Elasticsearch server, and copy that address — 10.133.0.6 — then paste it in followed by :9200. My index name was index1, the time field name @timestamp is correct, and the version is 7.10+. Now Save and test — index OK, time field name OK, excellent. Let's go into Explore, and up here we should have Elasticsearch. I can see some data straight away, but for the metric select Raw Data instead, which shows the table. That first row is the document we added with the curl statement before — ABC123, XYZ and a timestamp. We can add another row: going back to my documentation, copy the add-some-data-to-the-index command, go back onto your Elasticsearch server — I'm on mine — and right-click to paste it. This time I'll put something different in, such as ABCDEF, 456, anything you like, and the current timestamp, the date in ISO seconds. Excellent, successful. Go back into Grafana, run the query again — okay, two rows. So you can see that I have an Elasticsearch server running, I can put data into it, and I can read it through the Explore tab. In the next videos, rather than putting data into Elasticsearch by hand like that, I'll install two different services — they can live on different computers, all over the place, very similar to Loki or Prometheus in this respect — that collect data and push it into the Elasticsearch server. That's the next video. Also note that you need to manage firewall rules when you put Elasticsearch on the internet, because as you've seen, anyone who can reach it can easily create indexes and add data. So I'll set up my firewall rules now. If you don't have a cloud firewall like DigitalOcean's, you can use iptables or whatever firewall service your provider gives you: you would allow localhost on port 9200, allow your Grafana server's IP on port 9200, and drop 9200 for everything else. I'm going to use the DigitalOcean firewall, so I'll allow my Grafana server, on its VPC IP, to access the Elasticsearch server. Firewalls, Create Firewall, name it elasticsearch, and create a new rule: custom, TCP, port 9200, remove the defaults, and paste in my Grafana server's IP address. I'll also edit the SSH rule: remove the defaults and use the external IP address I'm working from, so only I can SSH onto that server, and only my Grafana server can send TCP traffic to port 9200. Apply it to my Elasticsearch server — create firewall. So only my Grafana server can make requests to port 9200. Run the query again and there we go. Excellent. Okay, so in the next video we'll install Filebeat, which is good for reading system log files — similar to Loki's Promtail service, but for Elasticsearch. Excellent.
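By the way, if you're not on a cloud firewall, the policy from a moment ago can be sketched with ufw on the Elasticsearch host itself. The first address is this course's example Grafana VPC IP, and I'm assuming ufw is the firewall in use — with iptables the rules are equivalent but the syntax differs:

  # Allow only the Grafana server to reach Elasticsearch, and lock SSH down to your own IP
  sudo ufw allow from 10.133.0.3 to any port 9200 proto tcp   # the Grafana server's VPC address
  sudo ufw allow from YOUR_OWN_PUBLIC_IP to any port 22 proto tcp
  sudo ufw deny 9200/tcp                                      # everyone else
  sudo ufw enable
  sudo ufw status numbered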
40. Setup Elasticsearch Filebeat: Okay, so that we have something more interesting to query through the Elasticsearch data source, in the next two videos I'll install the Filebeat and Metricbeat collectors — the most common collectors you'll find for Elasticsearch — and quickly show you how to set them up. This is not a course on Elasticsearch, but I'm showing you enough to get started. We can install Filebeat on any server we like — Linux, Windows or Mac. I'm going to install it on one of my existing servers: I'll put it on my MySQL server, and it will collect logs from that machine. Filebeat is good for reading log files — consider Filebeat the Elasticsearch equivalent of Promtail for Loki — so we'll set it up to read the systemd logs of my MySQL server. I'm going to install Filebeat 7.16, so I'll go onto my MySQL server — I'm on it now, as root. You can get the download information for your operating system from the link here; I've already prepared the commands. The first one curls down the Debian package, then we use the package manager to install it. Once installed it shouldn't be running yet — sudo service filebeat status — okay, not running, and that doesn't matter, because we'll make some changes first. Now cd into /etc/filebeat, where it was installed, and ls -lh so we can see what's there. Next we need to enable a module for Filebeat to use. We can list the modules it knows about with filebeat modules list, and there's a whole bunch of configurations we can enable, such as system — which we'll use in a moment — redis, rabbitmq, postgresql, many, many things; if you scroll up, you'll see they're all under the Disabled heading. We'll enable system: filebeat modules enable system, Enter — enabled system. Run filebeat modules list again and scroll up: system now shows under Enabled. You can enable several modules, but I'm only using the system one. If we ls -lh again and go into the modules.d folder — cd modules.d, then ls — we can see a whole bunch of configuration files: YAML files with the word "disabled" after them, while system.yml doesn't have it. You can inspect any of those configuration files to see what they do — for example, system.yml, and that's what it contains — so if you want more detail on Elasticsearch Filebeat, that's somewhere to look. Go back up a level — cd .., ls — and now I'll change some settings in filebeat.yml to tell it to send data to our Elasticsearch server: sudo nano filebeat.yml. Scrolling down, you can see it will be reading from the /var/log folder. Next is Filebeat's modules section — it searches the modules.d folder for everything ending in .yml to find out what's enabled. I'm not going to use Kibana — Kibana is a user interface, a bit like Grafana but specifically for Elasticsearch. Under output.elasticsearch is where we set the address of our Elasticsearch server; I'm using the VPC IP, 10.133.0.6, so this Filebeat on my MySQL server will send to that address on port 9200. Now, I have a firewall on my Elasticsearch server, so these messages will be blocked until I allow them in a moment. I also don't need the other output extras right now, though you could leave them enabled to see what they do — they add a lot of extra fields to the result set which are unnecessary unless you're actually using them. Scroll to the end, Ctrl-X, save, yes, Enter. Now start Filebeat — sudo service filebeat start — and check the status. Very good; it's now running.
It's trying to send data to the Elasticsearch server, but the Elasticsearch server's firewall is blocking it. I can verify that by trying a curl request to it from the command line here — it just times out eventually, so Ctrl-C. So now I'll go into the firewall settings for my Elasticsearch server and allow my MySQL server to make requests on port 9200. My MySQL server's IP is 10.133.0.4, so copy that, go to Firewalls, the elasticsearch firewall, edit the port 9200 rule and allow that IP address as well. So now .3 — my Grafana server making queries to port 9200 — and .4 — my MySQL server sending data to 9200. Save. Excellent. Going back, run that curl request again — okay, I get a response: the default response from my Elasticsearch server, name node-1, cluster name my-application. Now, Filebeat has created its own index. In the previous video I created an index called index1 manually and added some data to it; Filebeat does essentially the same thing, but it creates its own index and adds its own data. We can find the index name using _cat/indices, and the new index is called filebeat-7.16.x plus today's date. So I'll set up a new data source in Grafana pointing at that index. Go to Data Sources — now, I could edit the existing Elasticsearch data source, since I don't really need index1 any more, but instead I'll create a new one: Add data source, Elasticsearch, and call it Elasticsearch Filebeat. The address: the VPC IP address, colon 9200, access Server. Very good. For the index name — _cat/indices showed filebeat- plus all those numbers — we can use filebeat and a wildcard, so I'll use filebeat-7.16 followed by a star; I could say filebeat-7* or filebeat-* or be more specific. If I set up Filebeat on other servers, they'll all have a very similar index name, so this one data source will read all the indexes matching that pattern. I'm also selecting version 7.10+ — that's important — and Save and test. Index OK, excellent. Go to the Explore tab, select Elasticsearch Filebeat, and we can see data — a lot of information coming through already. If we look at the Logs view and I make the query range smaller, five minutes, we can see plenty of log lines, and we can expand each row and see a whole lot of fields. The host name comes through as mysql; if I had lots of Filebeats running, all pushing to this Elasticsearch server with indexes prefixed filebeat-7.16, I'd see all of those too, and I could filter by host name — for example host.name: mysql, then Shift-Enter, to only get the results for MySQL. I've only installed it on MySQL, so I get the same results either way. So that's a quick introduction to Filebeat. We could start building dashboards from it, but not just yet — I'm just demonstrating how to get some useful Elasticsearch data into Grafana. So this is where we are: Filebeat installed on my MySQL server. I could install it on all my servers if I wanted to, and you can do the same, but I won't just yet. In the next video, I'll install Metricbeat and demonstrate querying that through an Elasticsearch data source as well. Excellent.
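Condensed, the Filebeat side of this lesson comes down to something like this. The download URL follows Elastic's usual artifact pattern rather than being copied from my documentation, and the output host is this course's example Elasticsearch VPC address:

  # On the server whose logs you want to ship (here, the MySQL server)
  curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.16.1-amd64.deb
  sudo dpkg -i filebeat-7.16.1-amd64.deb

  sudo filebeat modules enable system        # read the system/syslog logs
  sudo nano /etc/filebeat/filebeat.yml       # then, under output.elasticsearch:
  #   hosts: ["10.133.0.6:9200"]             # the Elasticsearch server's VPC IP

  sudo service filebeat start
  sudo service filebeat status
  curl http://10.133.0.6:9200                # connectivity check once the firewall rule is in place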
41. Setup Elasticsearch Metricbeat: Elasticsearch Metricbeat. This is pretty similar to Filebeat — the steps are almost identical — but Metricbeat is about metrics on your systems, such as CPU, disk IO and network, whereas Filebeat is about reading log files. To download the installer for your operating system — I'm using the deb for Ubuntu 20.04 — go to that link, select self-managed and choose your OS. I've already prepared my commands, so I'll copy them to the clipboard. I'm going to use the same server where I put Filebeat, my MySQL server. So clear, paste: it downloads 7.16.1 amd64; press Enter again and it uses the package manager to install it. It should be there now, and we can check that it's most likely not running — there we go, loaded but not active; Ctrl-C to get out of that. We can see what modules it has enabled and disabled — metricbeat modules list. It's very similar to Filebeat: lots of modules you can enable, most of them disabled, but on Metricbeat the system module is already enabled by default, so I don't have to enable it. What I will do is also enable the linux module, since I'm on Linux: metricbeat modules enable linux, and there we go. Check with modules list: enabled, linux and system. The linux one isn't essential; I'm just showing you it's possible. Very good. Now I'll edit metricbeat.yml to point it at my Elasticsearch server. That's in /etc/metricbeat — cd there, ls -lh — and like Filebeat it has a modules.d directory and its own YAML file, so edit that: sudo nano metricbeat.yml. Scroll down: I'm not using Kibana; the host of my Elasticsearch server is 10.133.0.6:9200; and I'm not using the other output extras. Very good, Ctrl-X, yes to save, Enter. Start Metricbeat and check its status — there we go, active (running), with some info lines if you press the right arrow. Very good, Ctrl-C to get out of that. Now, I already set up the appropriate firewall rules in the last video, but you should make sure your Metricbeat server can reach your Elasticsearch server by running a few commands — for example, curl the IP address of your Elasticsearch server, which for me is 10.133.0.6, Enter, and I get a response. Excellent. We also want to know what index was created when Metricbeat started up, so run _cat/indices — there it is, the new index: metricbeat-7.16.1 plus today's date. So I'll create a new data source that points at metricbeat-7.16. Go into Grafana, Data Sources, Add data source, Elasticsearch, and the URL: 10.133.0.6:9200. Further down, my index name is metricbeat-7.16 followed by a star, which means I can set up Metricbeat 7.16 on many servers, point them all at my Elasticsearch server, and read them all through this one data source in Grafana. Version 7.10+, then Save and test — very good. Also, before saving, I'll rename it to Elasticsearch Metricbeat. Save and test again — all good. Now go to Explore and select it from the drop-down. Excellent — lots of data coming through that we can query. Now let's look at the Logs view, and there we go, lots of entries. The information we find here is a bit different, though: it's not about log files, it's about the performance of the server itself.
You'll see things like CPU, memory, open file descriptors and various other metrics, and we can create an interesting dashboard from that, which we'll do in the next video. Excellent — that's Metricbeat. 42. Setup an Elasticsearch Dashboard: Okay, let's create a dashboard for Elasticsearch now, using both the Filebeat and Metricbeat data sources at the same time. The dashboard ID is 12626 — we get that from the Grafana dashboards page — and that's a sample of what we'll see. So you know where this comes from: go to the Grafana dashboards site, select Elasticsearch as the data source and Beats as the collector. There's a lot to choose from; scrolling down, I find the Linux host stats dashboard that uses both beats — that's the one I'm installing, ID 12626 — but there are plenty of others you can experiment with. Anyway, go into Grafana: Dashboards, Manage, Import, paste the ID, Load. Very good. Select the Metricbeat data source for metricbeat and the Filebeat data source for filebeat, press Import, and set the time range down to five minutes. Okay: Server Overview, CPU stats, memory stats, process stats — all the processes running on the server — user stats, file system stats, disk stats (no data yet — we'll come back to that), network stats, and logs for the last five minutes. Let's look at disk stats: if we press E and have a look, the query is looking for a metricset called diskio. Back on the Grafana dashboards page for this dashboard, if we scroll down a little, it says we need to configure our system module to include these metricsets. So go onto the server where you installed Metricbeat — for me, my MySQL server — and into the folder where it's stored: cd /etc/metricbeat, ls -lh, then into modules.d — cd modules.d, ls -lh — and there's system.yml. Edit that file: sudo nano system.yml. It says module: system, and these are the metricsets that are enabled. I also need to enable diskio, service and users — I'm just going by what's written on that dashboard page; it doesn't ask for socket_summary, but I'll leave that in anyway. Scroll down, Ctrl-X to save, yes. Restart Metricbeat — sudo service metricbeat restart — and check the status; looking okay, very good, Ctrl-C. Before going back to the dashboard, let's at least check it in the Explore tab: look at the last five minutes and query metricset.name: diskio — I'm starting to get the diskio data through now, which I wasn't before (over the last 30 minutes there was nothing, and now there is). So back into the dashboard, to the disk stats row — nothing yet; change the range to the last five minutes and set it to refresh every ten seconds, and at some point the data starts appearing. And now we're getting some information through — it takes a few minutes to catch up, and after a day or two it will be much more interesting to look at. Excellent. So that's quite an impressive Elasticsearch dashboard; have a good look through it. Anyway, excellent.
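The system.yml change described above looks roughly like this — the exact list of metricsets in the stock file varies between Metricbeat versions, so treat it as the shape of the edit rather than a verbatim copy:

  # /etc/metricbeat/modules.d/system.yml (sketch)
  - module: system
    period: 10s
    metricsets:
      - cpu
      - load
      - memory
      - network
      - process
      - process_summary
      - socket_summary
      - diskio      # added for the disk stats row
      - service     # added
      - users       # added
  - module: system
    period: 1m
    metricsets:
      - filesystem
      - fsstat

Restart Metricbeat after saving so the new metricsets start flowing.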
But we'll use all the data sources that we've created so far and create different dashboard variables from them. So you can see it's quite a few there. So what a dashboard verbal looks like if I look at the node exporter dashboard here, these drop-downs, they are created from dashboard variables. So here we have some values dynamically added. And if I change one of those, took a foreigner, then all the dashboard updates depending on that. So if I select influx DB, getting information about the influx DB, serpent and MySQL, for example. Now each of these visualizations here are reading the value behind this particular dashboard variable to know which host to get this data from. So if I look at this table here and just press E, we can see inside the queries is a node that is referring to the dashboard variable just here. So we'll create dashboard variables now, looking at all of the different data sources that we've created so far. So go into dashboards, manage, discard any changes you made in that dashboard, credit a new dashboard. And we'll go straight to dashboard settings. So I'm going to call it a dashboard variables. Okay, save that. Go back into Settings. Now, there's this option here called variables. Added variable. We have many types of variables. The first one will be a type of intervals, interval and it's precreated a list for us already. I'm going to call the variable interval, that 99 interval. That means within my visualizations, I'll refer to that as dollar interval. But I want US dollar in the name here because it's done all the correct, But I'll demonstrate using these in the next few videos. But down here you can see a preview of values. We can add more if we like. So I can say, give me 60 days or one year or ten years. For example, 60 days, one new 10 year, I can say give me 10 seconds and 10 seconds is listed. So if I press update, that is now showing up as an integral, if I press that back arrow, that drop-down now exists in my dashboard, I can use that dollar interval in my visualizations. Also. See here how got the word interval and it's a lowercase I. I can actually improve on that presentation slightly by going back into variable, clicking it and saying the display name can be interval. So I'm on display name, I'm using a capital I, that my variable name is a little bit. So if you ever want your variable names to be more programmatic looking, such as by interval, but you don't want to show that as you label, then you use a label here. But I'll keep that just a lowercase. I like that update. So that's how those dropdowns accredited. That's a very simple one. The next one, we'll create a custom. So a new custom, I'm going to call it a custom, that my label is going to be customer. A custom variable, smartest scription, Comma Separated Values. Covid, just copy that. A, B, C, D, E, F, and I go, that's the preview down their M&O 1, 2, 3. And using unlike in case there's the preview down there. Now, these multi-valued and include all option. Let's have a look what we have first. So update that, go into there. And that's the custom with all my custom comma separated values, go back into it. Variables. Customer, if I select multi-valued and press Update, go back, I can now select one or num or two, or all of that, or all except for one. Now, looking at variables given the include all option or that does, is shows and all option like that. So that means all. 
Anyway, moving on, the Data source type. So in Variables, the type will be Data source; call it datasource with the label Data source. This type refers to whichever data sources we have configured, and I have quite a few. I'm going to use Elasticsearch for this example, and down here it's showing me all my Elasticsearch data sources. If I update that and view the dashboard, I can select whichever data source I want to use for a particular use case. Going back into the variable, there's also this Instance name filter. It takes a regex, and most of the variable types have this option. So I can say give me everything with the word file in it; I might have many Elasticsearch data sources configured with something similar in the name, such as the word file, and that gives me Filebeat. Or give me everything containing the word metric. If you look at this leading dot here, it means there has to be a character before the word, so if I were to type elastic with the dot in front, nothing would show, because it's looking for something before the word elastic; take the dot away and it now shows everything starting with Elastic. Or give me everything ending with beat, and there we go, Filebeat and Metricbeat; or just give me everything. That's all that filter is doing. Excellent. So update that, and select Elasticsearch Filebeat.

Okay, the next one is a MySQL query. We can run a direct MySQL query against our data source. So Variables, New, this will be a Query type. I'm going to call it username, with an optional label of Username. I'll use my MySQL data source, and the query is select username as metric from the simple example table that we set up at the beginning of the course when we were looking at non time series data, ordered by id. If I click out, it shows me all the names from that flat, static table we created. Good, update that, and there's username. You can dynamically update your visualizations based on that variable, and $username is how you'd refer to it.

Anyway, let's create another one. Okay, we'll do some Loki queries. The first one is label names; I'll call it lokinames, the data source is Loki, and the query is label_names(). It's just giving me everything it can find under label names: filename, host and job. Update that. On the dashboard side, if we go to Explore, choose the Loki data source and open the log browser, those same names are there: filename, host and job. The variable picks up those three plus a system-level name; we can see the extra one down here in the preview. We can hide it with the instance name filter, which is a little trickier with this one: the regex uses the start-of-line anchor together with a negative match, which effectively says "not this label". You could do the same thing to exclude host, or job, if you don't want those to appear. Okay, so if you don't want a value in the list, that's how you'd remove it. Update, and let's have a look: Loki names, there we go.

Okay, another one. Dashboard settings, Variables: we can do one for host with a Loki query. Call it lokihost, data source Loki, and the query is label_values(host). I get grafana and mysql, because my Loki has two Promtails pushing data to it; the Promtails are on the Grafana and MySQL servers. Update that and save my progress. Let's look at it: Loki hosts, grafana and mysql. We could do the same with job. Next is a Prometheus query.
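For reference, the query-type variables so far look roughly like this. The MySQL table name below is a placeholder, since it depends on the example table you created earlier in the course; the Loki functions are the ones named above:

    -- MySQL (variable: username); table name is hypothetical
    SELECT username AS metric FROM exampledb.users ORDER BY id;

    # Loki (variables: lokinames, lokihost)
    label_names()
    label_values(host)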
The Prometheus queries are very similar. Dashboard variables, New; I'm going to call this one promnames, a query against Prometheus, and the query is label_names(). There we go, there are many of them. Update that, save the dashboard and have a look: Prometheus names, and there's the list. It's the same list you get if you go to Explore, pick the Prometheus data source and open the metric browser; those are all the names there. Going back into the dashboard variables and editing that variable, I can exclude the first one, __name__, with the instance name filter. There we go, like so, very good, Update.

Let's create another Prometheus one. This will be promjob, using label_values(job). So all the jobs that I've set up are listed: the node exporters' job, plus grafana and prometheus. Update. All right, good, let's have a look at the Prometheus jobs.

Now let's do another one that's a bit more sophisticated: promhost, where the variable dropdown will update depending on the current value of promjob, the one just there. So let's copy the example query and use it for the host. Variables, New, call it promhost, data source Prometheus, and paste the query. It uses promjob, the last variable I created, with a dollar sign in front of it to indicate that it's a variable, and it's showing me influxdb, mysql and grafana. So if I update that: okay, because node is currently selected, I see the servers that have node exporters, which are influxdb, mysql and grafana. If I select grafana instead, I don't have any other servers with a grafana job, and the same with prometheus; but if I select node, those are the three servers that have a node job. So that's a variable whose values change depending on the value of another variable. Okay, I'm going to save that.

Let's look at InfluxDB queries now. This is InfluxDB version 2, and if you're familiar with InfluxDB 1, the queries there were much simpler; InfluxDB 2 is quite a lot more complicated to use. Anyway, here are some examples that you can refer to. I'll copy the one called source. So let's create a new query variable, call it source, pointing at the InfluxDB data source that we created earlier in the course, and paste the sample query. Remember that I created a bucket called telegraf with some SNMP measurements in it. If I click out of the query, the preview of values shows grafana, mysql and influxdb; these are the servers where I installed the SNMP daemons and set up Telegraf to query them. It's a slightly more complicated query; Update. Let's have a look: source shows grafana, mysql and influxdb. We can change the display name to InfluxDB source to be a bit more informative.

Now another InfluxDB variable; this one will be interface, and if we look at the example, it's using the source variable that we just created. So I'll copy that. The variable is called interface, it's a query type against InfluxDB, and it lists the interfaces known to InfluxDB, filtered using the source from before. I click out, and most of the servers that I'm getting from DigitalOcean have two interfaces, lo and an Ethernet device. Update that, and looking at it, lo is there; it's also a dynamic variable, because it's using the source variable here.
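A sketch of what those two Flux variable queries typically look like, assuming the Telegraf SNMP data in the telegraf bucket is tagged with agent_host and interface; the exact tag names depend on your Telegraf configuration, so treat this as illustrative rather than the course's exact queries:

    import "influxdata/influxdb/schema"

    // source: list the SNMP hosts found in the telegraf bucket
    schema.tagValues(bucket: "telegraf", tag: "agent_host")

    // interface: list interfaces, chained to the current value of $source
    schema.tagValues(
      bucket: "telegraf",
      tag: "interface",
      predicate: (r) => r.agent_host == "${source}"
    )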
So if I change the source to mysql, I'm also seeing a Docker interface, because I have installed Docker on my MySQL server as well; and the same idea with influxdb. So that's another example of a dynamic variable based on the value of another variable. Let's save that and look at another one.

Okay, Zabbix. If you set up the Zabbix data source: Variables, New, and the data source is Zabbix. If I click out of the first query, it shows me the host groups, Linux servers and Zabbix servers. But I'm going to set the query type to Host, with the group filter and the host filter both set to match everything, and it now shows me all the hosts I have set up in Zabbix, down there. I'm going to name the variable; one called host already exists, so I'll call it zabbixHost, using camelCase. Save that, and let's look at it: all my Zabbix hosts. So I could have dynamic tables, graphs and stats based on whichever one I select.

And finally, we'll do one for Elasticsearch as well. We'll look at the Metricbeat data source and find everything to do with the systemd unit field. Variables: I'm going to call it unit, a query against the Elasticsearch Metricbeat data source, and the query finds terms with the field systemd.unit. We could have used many other fields, such as host.name, and there's only one of those because I've only installed Elasticsearch Metricbeat on one of my servers. But let's go back to systemd.unit, where there's a lot more information: we can look at information about the Docker service if we had it, cron, logrotate. Update that, and we're good. Let's look at the dashboard: we have a systemd unit dropdown, and that information is coming from Elasticsearch fields. Excellent.

So there are a whole lot of examples there for different data sources in the dashboard variable section. The best way to learn how to create these is to look at other examples of dashboards and reverse engineer them. There's no real easy way to know exactly what you should write here; it's a trial and error process, and I taught myself how to do all of these basically by looking at how other people have done it. So you've got a lot of examples. Anyway, in the next few videos we'll actually start using these; more specifically, I'll be focusing on the Prometheus dashboard variables. We'll be creating our own Prometheus dashboard that dynamically updates based on which dashboard variables we select. Excellent. 44. Dynamic Tables from Variables: Okay, let's start using these dashboard variables in dynamic visualizations. The first visualization we'll look at is the table. So let's create a new dashboard, add an empty panel, make it a table, and select Prometheus. I'm going to use the metric called node_uname_info, and that's the query down there. Make it a little larger. I'm going to set the format to Table and the type to Instant, so it shows one row for each. Okay, so I'm seeing the instance address, the job, which is node, and the nodename: influxdb, mysql, grafana. I will create a dashboard variable that lets me choose one or more of those, and there's more information in the other columns too. Okay, apply that, and that's our table; it looks really good. Now let's add the dynamic variable so we can make this table more dynamic. Dashboard settings, Variables, Add variable; I'm going to call it host, the data source is Prometheus, and the query is label_values over node_uname_info, filtered to job node and returning the nodename label. There are my three servers that have node exporters running, and they all have their job named node. So Update, and go back to the dashboard.
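Spelled out, the two Prometheus queries in play here are roughly as follows; the metric and label names come straight from the node exporter:

    # panel query (Format: Table, Type: Instant)
    node_uname_info

    # dashboard variable "host"
    label_values(node_uname_info{job="node"}, nodename)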
We now have the dropdown, but if I change it, it doesn't actually update anything; it hasn't changed this panel here. Press E and modify the query to be explicitly job="node", and also add nodename equals one of the values, say influxdb, and it shows me just that one. But I want to use the variable up here, so instead I can put $host, and now it uses the dropdown. If I change it to mysql, the information changes; grafana, the information changes. So grafana, mysql, influxdb.

We can also make this a multi-select option. Apply that, go into Dashboard settings, Variables, open the variable, select Multi-value and Update. Go back out; it's now multi-select, but if you use it, it shows No data. Edit the table: what we need to do is convert the match to a regex match, so the equals sign becomes =~ with the tilde, just there. And now it works: I can have all three selected, or none, or just one. Very good. So that's the query that creates the table. Apply that, and Select All. Done. Now, this table is quite wide with a lot of information, so I'm going to edit it and use a transform to get rid of the unnecessary columns. Go back in, press E, press Transform here, scroll down to Organize fields. I can hide __name__, domainname, and also hide job and the Value column at the end. Apply that, and the table now fits the screen nicely. Excellent.

Now we can use this dynamic table to draw the tables slightly differently as well, by drawing a new table on the screen for every host that is selected. So let's do that. Press E; down at the bottom right here, under Panel options, we have one called Repeat options. Select host; it's finding the dashboard variable up there and offering it as an option. The repeat direction is either horizontal or vertical, so I'll select vertical, up and down. Press Apply. Now, if we deselect everything, then select a single one, then select a second one, it creates a new table down there for each, so we don't actually need them to be so big anymore. Add grafana and we've got a third row now. We can also dynamically update the panel title. If I press E on this first one, I can delete the title and type $host, and it now shows the name of the one that is selected; here I have three selected, but it shows the first. If I apply that, deselect everything and then select one, it says mysql; select a second, it says grafana; and influxdb. That's using dashboard variables, and I've shown you two ways of doing it; it's up to you which you prefer. Anyway, excellent. 45. Dynamic Timeseries Graphs from Variables: Okay, dynamic time series graphs now. Let's create a new dashboard. We'll start by creating a graph, a time series, with Prometheus. Let's create a graph using node_network_receive_bytes_total with job="node". And since I want to look at the changes over time, not just the running total, I'll use a rate, with a one-minute range. Make it bigger; the graph is quite busy. What I'm seeing is the bytes received for every Ethernet device on my three servers through the node job. Okay, so let's apply that. It's quite a busy graph, and I can make it less busy by first creating a dashboard variable. So Dashboard settings, Variables, Add variable; call it instance, data source Prometheus, and the query is again label_values over node_uname_info with job node, but this time returning the instance label, because I want the instance.
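Written out, the graph query and the new variable query are roughly:

    # panel query
    rate(node_network_receive_bytes_total{job="node"}[1m])

    # dashboard variable "instance"
    label_values(node_uname_info{job="node"}, instance)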
There we go, the instances are listed there. These are the addresses of my Prometheus node exporters, from the perspective of the Prometheus service, which happens to be running on my Grafana server. So let's see what we have. Excellent, we have a dropdown. Now let's make it work. Go into the panel and filter further by instance; I have to use the tilde, so a regex match against $instance, and that's referring to the dropdown there. Apply and have a look. Okay, so it's less busy now, focusing on one server at a time. Better. I also want to see bytes transmitted, so let's add that. I'll create a new query very similar to the first, except using transmit. Apply; now we're seeing receives and transmits, ins and outs, for each Ethernet device, which is quite busy, so I can improve that as well. Press E; for the legend of the first query, note that we have a label called device, so I'll use that label instead and make the legend {{device}} In, so each series shows the device name followed by In. And for the other one I'll do {{device}} Out. It's less busy now, and I'm also going to put the legend on the right, like so. Okay, that's looking pretty good.

Now let's make it multi-select: edit the variable, instance, tick Multi-value and Include All, Update. Go back; now I can select All, and zoom in perhaps, but it's still very busy. So I will now use the repeat options and have one graph for every instance. Go back into the panel, and under Panel options, Repeat options, choose instance and vertical, then apply. Now, if we deselect everything and then select All again, we get a graph for each instance. I'm just going to make these smaller, and there we go, one at a time. I guess the last thing is to update the panel title: I'm going to call it Network $instance. Make a change so everything refreshes, and there we go. So that's the dynamic graph, one graph per instance that is selected; I can have all selected or just two. Very good. So that's a dynamic time series graph using a dashboard variable. Excellent. 46. Create an Email Alert Notification Channel: Okay, so most data sources that you see in Grafana will have their own alerting solution, where you can receive email, SMS or other push notifications when something happens. Loki and Prometheus can both be set up to use the Alertmanager that comes with Prometheus; InfluxDB has its own alerting solution; MySQL can send emails; Zabbix has a very sophisticated alerting system, and so does Elasticsearch through the Kibana UI. But so does Grafana: Grafana can also send alerts depending on what it sees in the data it's receiving. Before we can set up alerts in Grafana, we need to set up an alert notification channel, and once I've got that set up, I'll send an email.

Okay, so go into Grafana, and down here we have this bell; that's Alerting, and that's where we set up a notification channel. We don't have one, so we need to set one up. For the first one, I'm just going to call it email, with the type Email. There are many types: DingDing, Discord, Google Hangouts Chat, HipChat, Kafka and more, but I'm using Email first. When Grafana decides it should send an alert, I want it sent to my email address, admin@sbcode.net for example; I could have multiple recipients separated by semicolons. I can do a test, and it tells me, correctly, that SMTP is not configured. I'll need to set that up, but I can still save the channel.
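That warning comes from Grafana's mail settings, which ship disabled. In a stock grafana.ini the [smtp] block looks roughly like this, with every line commented out by the leading semicolon (values shown are the usual defaults, for orientation only):

    [smtp]
    ;enabled = false
    ;host = localhost:25
    ;from_address = admin@grafana.localhost
    ;from_name = Grafana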
Now, Grafana doesn't come with an SMTP server built in, and an SMTP server is what actually sends the emails. So we can install an SMTP server on our Grafana server; since it's on Ubuntu, we can use a package called mailutils, so we'll install that. Okay, I'm on my Grafana server now. Just know that when you're sending emails from your own server, the email provider who receives them is likely to reject them or put them into the spam folder. To counteract those problems it's best to be using a domain name, and I am: I set one up earlier, grafana.sbcode.net. If you haven't got a domain name, there's a good chance the email alerts aren't going to work for you; you can try anyway, but they're likely to be either rejected or put into the spam folder wherever you're sending them. Gmail, for example, is likely to reject them if you don't have a domain name. Anyway, let's install mailutils: sudo apt install mailutils. That installs a program called Postfix, which will be the service sending emails in the background. Continue, yes. In the wizard I'll be choosing Internet Site, so mail is sent and received directly using SMTP, but I'll then configure it as a send-only email server so that it can't be used as a relay. Press Enter for OK, choose Internet Site, OK. My system mail name is grafana.sbcode.net, because I've already set my hostname. Press down so that OK highlights in red, and then press Enter.

Okay, now to edit the Postfix configuration file. Scroll down and change myhostname to your domain name if it's not already set. For inet_interfaces, we want this to be a send-only server so that it isn't used as a relay, because otherwise other people could use your server to send emails; so set it to loopback-only, meaning only this server will be able to send emails. And for inet_protocols, ipv4. I personally do this because some email providers will reject emails sent over IPv6; this just ensures my server uses IPv4 when it sends emails. Control X to save, yes, and now restart: sudo service postfix restart, and we can check its status. It's active; Control C to get out of that. You can try sending an email using the command shown down there, but I'm actually going to go straight to setting this up in Grafana now.

So I need to edit the grafana.ini configuration file: sudo nano /etc/grafana/grafana.ini. We want the section on SMTP, and this is quite a large file, so the quickest way to find it is Control W, which is like find; type smtp and it jumps to the SMTP settings. The semicolon acts as a comment in grafana.ini, so I'm going to delete it and write enabled = true. The host is localhost:25, my local server. Now, if you're in a corporate environment you probably have a corporate email server, so you won't have to set up an SMTP server locally on your Grafana server; you'll have to get the host address from your email administrator wherever you work. I don't have that, so I'm just using the local SMTP server on my Grafana server, which I just installed. user I can leave commented out, and the same with everything else, although skip_verify = true is useful if you're not using an SSL certificate; you can use SSL certificates when sending emails, but I'm not doing that. My from_address is admin@grafana.sbcode.net.
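Put together, the two configuration changes from this lesson end up looking roughly like this; swap in your own domain and addresses, and note that /etc/postfix/main.cf is the standard Postfix location rather than a path named explicitly in the video:

    # /etc/postfix/main.cf (relevant lines)
    myhostname = grafana.sbcode.net
    inet_interfaces = loopback-only
    inet_protocols = ipv4

    # /etc/grafana/grafana.ini
    [smtp]
    enabled = true
    host = localhost:25
    skip_verify = true
    from_address = admin@grafana.sbcode.net
    from_name = Grafana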
Now, this is the reason why you needed a domain name: when your email provider receives the email, it's going to look up the IP address of grafana.sbcode.net and either reject that email or accept it. It still might put it in the spam folder, but that depends on your email provider. I'm also going to set the from_name to Grafana, and the EHLO identity for Grafana as well. Control X, yes, Enter. Now restart Grafana and check its status: very good. Now, going back into the alert notification channel, open it and we'll send another test: test notification sent. I'll go to my email provider, and I have received my email from Grafana, from admin@grafana.sbcode.net. This went straight into my inbox, not my spam folder, but do check your spam folder. "Alerting Test notification. Someone is testing the alert notification within Grafana, just some test information." Okay.

So note that that went straight into my inbox, so I didn't have mail problems; the way you saw me set it up worked. One of the most significant things you can do to make sure that emails don't go into the spam folder is to set up what's called a pointer record (PTR) for your server. If I look at the Networking tab of my DigitalOcean setup and look at the PTR records in here, I don't have a pointer record for my grafana.sbcode.net server set up, but it still worked for me anyway. If you want to avoid the spam folder, one thing to try is to set the pointer record, and to update your pointer record it says: update your droplet's hostname from the control panel. So if I go to Droplets and then to my Grafana server just here, I can click the name and replace it with grafana.sbcode.net.

Before I enter that, I'll just show what it looks like if I don't do it. I'm going to log on to a different server other than my Grafana server, for example my MySQL server. If I type host grafana.sbcode.net, it tells me the IP address; that's correct. But if I type host and then the IP address, the reverse lookup shows not found: there is no domain name associated with that IP address. Many email providers will look at the sender's address and do a reverse lookup like that, and if the IP address of the sender doesn't resolve back to the domain name in the sender's address, they're going to reject that email or put it into the spam folder. So that's why you would set up a pointer record.

Okay, I'll commit this just by pressing the tick. The droplet has been renamed, and that has actually updated the pointer record on my server to grafana.sbcode.net. If I go into Networking, PTR records, I have a new one: that IP address now maps to grafana.sbcode.net. So after some time I can re-run that reverse DNS lookup; it still says not found, but if I give it an hour or two it will come back as grafana.sbcode.net, so I'm going to pause the video and wait for that to happen. Okay, I've let that sit for about half an hour, so I'll try again: host, and it's now pointing to grafana.sbcode.net. So do try updating your pointer record if you're having problems avoiding the spam folder, or if your emails are just being outright rejected, which will happen with Gmail if you don't do it. Anyway, let's test again, not that I had a problem anyway. Test. Okay, that's good: refresh, there's a new one in the inbox, and that's my new test; it works for me every time.
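Incidentally, those DNS checks are just the host command run from any other machine; the IP address below is a placeholder for whatever your server's address actually is:

    host grafana.sbcode.net
    # grafana.sbcode.net has address 203.0.113.10      (example output)

    host 203.0.113.10
    # ... domain name pointer grafana.sbcode.net.      (once the PTR record has propagated)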
But there's one more thing down here: this alert rule URL is pointing to the localhost address. We can update that to point to our domain name's URL if we want. To do that, go back into grafana.ini, so sudo nano /etc/grafana/grafana.ini, and if we scroll down a little, we're going to change this root_url here. It's currently commented out, but I'm just going to rewrite it completely: root_url = https://grafana.sbcode.net. Enter, Control X to save, yes, restart Grafana, and check its status to confirm we haven't broken anything; that looks good. Now, if I send a test again and check my email provider, the message says view your alert rule at that address, and it now goes to the alerts page, /alerting. That was a fake alert, so it's not showing up there. But anyway, I now have a notification channel set up, an email gets sent, and the test works. Excellent. 47. Create Alerts for SNMP No Data: Okay, now we'll use that alert notification channel: we'll simulate some errors and get an alert for them. I'll use the SNMP devices that were set up in the InfluxDB section, so when one of those devices stops working, or at least its SNMP daemon stops, we'll get an alert. Before we start doing that, we should make sure that we've got Grafana version 8.3 or higher, since 8.3 introduced the new unified alerting interface. If you are using the same version of Grafana that I've been using, which was version 8.2.something, we'll upgrade Grafana now to the latest version. Going to my upgrade/downgrade page, we can see what version is available. I installed versions 8.2.3 and 8.2.4 at the beginning of the course, and the latest version now is 8.3.3, so we'll install that. As you can see, there have been a lot of releases since then; I've been recreating this course over the last month and a half, and in that time there have been many Grafana versions released. That's what it's like working with Grafana: there are updates all the time. Okay, so I'm going to install the 8.3.3 OSS open-source version. There are the instructions; I've alread