
Micro Services and the World of Spring Boot: From Scratch to Hatch.

Dr. Rehman Arshad, Senior Research Associate, PhD


Lessons in This Class

12 Lessons (2h 36m)
    • 1. Introduction (5:54)
    • 2. MS Architecture (15:09)
    • 3. Spring initialiser (12:37)
    • 4. Databases (14:55)
    • 5. Layers of micro service (12:39)
    • 6. Models and transformer (15:03)
    • 7. REST and End Points (15:02)
    • 8. Integration Testing (12:37)
    • 9. Test coverage (12:14)
    • 10. Logging (13:44)
    • 11. Docker (11:40)
    • 12. Monitoring and Conclusion (14:32)


28 Students

About This Class

Description:

In this class, we will learn microservice architecture from scratch and use the popular Spring Boot framework to build our microservice. Spring Boot is one of the most widely used microservice frameworks, especially in e-commerce.

Because it covers databases, REST, deployment and the life cycle of a microservice, this course will not end up like many others that only deal with controllers and service layers. It is all you need to learn how to:

  • use and configure a Postgres database with your service
  • use Maven to generate an executable microservice JAR
  • use Docker to dockerise your microservice
  • use docker-compose to spin up your service and database together
  • use monitoring tools like Prometheus and Grafana to monitor your microservice
  • do integration testing in Spring Boot via WebTestClient and other important testing frameworks

Overall, this course is all you need to understand and work with Spring Boot and all of its associated pieces, so you can build a powerful microservice tech stack.

Course Outline:

1. Introduction
2. MS Architecture and Spring VS Spring Boot
3. Spring initialiser and basic units of semantics
4. Connecting a postgres database with your micro service
5. Layers of a Spring boot app and annotations
6. Model and Transformer
7. REST and Open API Specification (OAS)
8. Testing
9. Code Coverage
10. Logging your micro service
11. Dockerise your micro service
12. Monitoring via Prometheus and Grafana

Note: This course assumes that you have basic to intermediate knowledge of Java.

Meet Your Teacher


Dr. Rehman Arshad

Senior Research Associate, PhD


Hello, I'm Rehman, also formally known as Dr. Khan. I am a Research Associate, PhD holder and a tech geek. I hold a PhD in Software Systems and an MS in Advanced Software Engineering, and I love research and teaching. Feel free to say hello on Skillshare or LinkedIn.




Transcripts

1. Introduction: Hello and welcome to the introduction for the course Microservices and the World of Spring Boot: From Scratch to Hatch. I am Dr. Rehman Arshad. I have a PhD in software systems and an MS in advanced software engineering, and I have been working in industry and academia for more than seven years. So let's get right into it.

First, the things that we will not do in this course, because I have seen many courses doing them and I don't believe they should. We do not refer to any external links or external learning: everything we need to do with respect to a microservice, we try to do inside this course, rather than navigating you to other courses or YouTube tutorials. We do not follow a slides-only format, and we will not merely talk about specific concepts: we will implement each and every one of them, and we try to cover everything from an industrial perspective rather than a purely theoretical one. This is absolutely an implementation course rather than a theoretical one.

The things that we will do: code, code and code, because in my opinion the best way of learning is by doing. We will construct a microservice from scratch, and everything around it — testing, monitoring, dockerisation, deployment, logging — will be done with the industrial frameworks that actual companies use in everyday microservice development. By the end of this course you will hopefully know them very well.

So let's get into the course outline. In the first lecture we will discuss the microservice architecture, its pros and cons versus monolithic systems, and the difference between Spring and Spring Boot, because we hear these terms a lot; we will see how one differs from the other and which one to use. Then we will discuss Spring Initializr and how we can use it to start constructing our first microservice. In the next talk we will connect our Spring Initializr-based microservice with a Postgres database. We will not use any external database; we will use Docker to spin up a container and connect Postgres with our running service. After the database talk, we will discuss the layers and the different semantic parts of a Spring Boot microservice: what the different annotations are, what they mean and how we should use them — for example, what is a component, what is a service, what is an @Autowired annotation. We will look into all these different chunks of Spring to see how the magic actually works in the back end. After that, we will discuss what a model and a transformer are, why we need a transformer in industry, why it is important, and what the disadvantages are for the many microservices that do not have transformers. After models and transformers, we will discuss REST endpoints, and we will implement several of them to show how to build our REST controller so our service can start talking with the browser. After REST, we will move to testing. We will discuss two very important tools for testing in Spring Boot: one is RestTemplate and the other is WebTestClient. The emphasis will be on integration tests rather than basic unit testing. And of course we will use JaCoCo to measure our code coverage.
We will see how to configure it, how to use it, and how industry actually uses tools like JaCoCo to keep an eye on code coverage. After that we will discuss another very important topic: logging. We will see what logging is, how to do it properly, why we need it, and how we can do it by decoupling it from the code. Along with logging we will look into a bit of exception handling as well. After logging, we will see how to dockerise your microservice to make it consistently runnable in any isolated environment — it doesn't matter whether your VM is CentOS, Red Hat or some other flavour. Once you dockerise it for deployment, you have the guarantee and the satisfaction that it will work everywhere, so at that point we are moving from development towards deployment. After dockerising the application, we will discuss a few very important industrial tools that we can use to monitor our microservice. In this course we will cover Prometheus, which is the data scraper, and Grafana, which manages the data visualisation, so your managers and analysts can keep an eye on the performance of your microservice. Overall, we are starting from the very base of constructing a microservice, but we will not leave it there: after completing it, we will add all the parts of deployment and the industrial stack — dockerisation, monitoring and logging — to see how services are actually constructed in real companies, why they are constructed this way, and why it is important to learn the best practices. Welcome aboard, I hope you will enjoy it, and let's dive into it.

2. MS Architecture: Hello everyone and welcome to our first talk on Spring Boot and microservice architecture. I'm Rehman Arshad, and let's dive straight into it. In this video we are going to look into the different types of architecture that we have, and then we are going to discuss Spring versus Spring Boot — what the difference is between them, because you may hear these terms all the time. We will see how they are different and which one we should use in 2021.

Starting from architecture: the architecture of the late 80s and the 90s is also known as spaghetti-oriented. It means systems were big monolithic chunks, written in a way where most of the code was common, but in those early days there was no way to reuse that code across more than one small monolithic system inside a big overall system. The most famous architecture of the next decade — and many companies are still using it — is the monolithic system. If you join a software company, you will almost certainly see a lot of legacy code still in monolithic form. In a monolithic architecture you have one huge codebase, and because there is only one huge codebase, there are some cons that the monolithic architecture causes. For example, with time the codebase keeps on growing, and there will come a time when it is very difficult to maintain because of the overhead. Also, if there are a lot of services working under one monolithic system and one of them produces an issue, it can affect the whole system — in other words, one feature can damage or cripple the entire feature set of a big system. One of the benefits of monolithic systems is that you don't need a lot of manpower to maintain them; usually a set of developers who are familiar with the system is more than enough.
But the biggest con of monolithic systems is maintenance: the overhead on performance keeps growing with the passing of time, and eventually there comes a point where the system can no longer be upgraded to modern software versions. It may work very well for a decade or a few years, but there always comes a point when it is so difficult to maintain that it becomes more cost-effective to write a new system than to keep patching a decade-old monolith. Nowadays we are living in the world of what we call microservice architecture. The biggest benefit of this ravioli-oriented, or microservice, architecture is that if one service goes down, the other services keep working. In other words, if you plan or architect your tech stack so that services depend on each other as little as possible, and there is a degree of redundancy between them, then there is a good chance that most of the features of your system will keep working even if some single point of it goes down. Microservice architecture has drawbacks as well: as the services keep growing within a company, there will come a time when you need more and more resources to maintain them. In my experience, one developer usually deals with ten to twenty microservices at a time — and believe me, that's a lot — so in a big or medium-sized company, as the microservices keep increasing, you need to hire more and more people to maintain them. But especially in today's world, where we are moving towards online services, and especially in the world of e-commerce, they bring a lot of advantages. The biggest one is that you do not need to know about all the microservices spun up in a company; you only need to learn about the set of microservices in the area you are dealing with. Whereas in a monolithic system, as the system keeps growing, it becomes more and more difficult for every developer to understand what is going on in a huge codebase. So: costly maintenance, but at least you are sure the system will not go out of scope like a monolith does after a few years, and if one service goes down, the rest of the system will probably keep working if you architect it in the right manner. That is a very big, crucial advantage, especially in the world of e-commerce.

Now, we hear Spring and Spring Boot all the time, right? So before we move towards implementing our microservice, before we move to our code editor, let's first try to understand what Spring is and then what Spring Boot is. Spring was originally developed as a dependency injection framework. So what is dependency injection? If you have intermediate knowledge of Java or some other object-oriented language, dependency injection is the concept that the dependencies of an entity A should be available before the execution of that entity. For example, if you have two classes, A and B, and A is calling a function of class B, then you usually declare an object of B inside A and then call the function of B. That means you are providing the dependency yourself in the form of an object. In a huge codebase, if we keep instantiating these objects — and most of the time in a very inefficient way — it can increase the heap usage, and many developers end up with out-of-memory exceptions.
You can also face churn issues, in which your garbage collector is not collecting objects at the pace at which you are creating them. Dependency injection in Spring was the idea that provided an automated way of supplying the objects — the dependencies — that are needed to trigger specific features of a Spring microservice. In Spring there are a few important terms that are still valid in Spring Boot, and you need to understand what they mean. First of all, the term bean: a bean can be of type component, service, repository, et cetera. Whenever you see a class with the annotation @Component, @Service or @Repository, it is a bean. And why do we say that? Because beans are the classes which are managed by what we call the application context in Spring. Consider the application context like a container: whatever class you register as a bean by using one of these annotations, its lifespan, its execution and its dependency injection will be managed by the Spring container, which is called the application context. You don't need to worry about initialising and creating those objects. We will look into all these annotations in much more detail.
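To make the bean and dependency-injection idea above concrete, here is a minimal hypothetical sketch (the class names A and B are just the ones from the example, not part of the course code): both classes are registered as beans, and the application context injects B into A, so A never calls new B() itself.

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Component;

    // B is a bean managed by the application context.
    @Component
    class B {
        String greet() {
            return "hello from B";
        }
    }

    // A declares its dependency on B; Spring supplies the instance.
    @Component
    class A {
        @Autowired
        private B b;   // injected by the container, never instantiated manually

        String callB() {
            return b.greet();
        }
    }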
So Spring was originally proposed as a DI framework, and with the passage of time more and more was built on top of it. Modern Spring has a JDBC component for dealing with databases, it has the transactional part, it has WebSockets, it has servlets. By the way, there is a difference between a server and a servlet container: a servlet container such as Tomcat is something that can handle dynamic requests, whereas a server such as Apache — and there are many others — serves static content and cannot handle dynamic requests itself. So there is a difference. Anyway: the application context manages all the beans, and the beans can be of different types. In the old days of Spring, you needed to register all these beans in an XML file. XML is not very likeable in 2021 anymore, and believe me, it was very hard to define the dependencies of these beans; we usually ended up with a lot of errors. In simple terms, Spring was difficult in the sense that you needed to wire these dependencies manually yourself. That is not the case anymore. If we look at Spring versus Spring Boot: in Spring you need to take all the ingredients and compose everything yourself, whereas Spring Boot gives you everything, and all you need to do is use annotations in your Java microservice and it will do the rest of the job for you. No more XML, no more manual control of the beans, no more manual tweaking — so you end up avoiding all those horrible errors that used to be there in the Spring days. In industry you will still see a lot of Spring codebases, and that is not because Spring is preferred over Spring Boot; it is because many companies still have legacy codebases that, in my opinion, should be upgraded to Spring Boot by now. So it is not uncommon, but more than 80 percent of the new Java microservices you will see, especially in e-commerce, will be in Spring Boot.

So, now that we know the difference between Spring and Spring Boot: in Spring Boot we don't need to manage the DI and the bean resources manually; Spring Boot will do it for us. But what is Spring Boot? Many people say Spring Boot is a framework. Just remember: Spring Boot is not a framework, it is actually a library, and this is a very common confusion in industry. A framework is an entity that calls your code, whereas a library is an entity that your code calls — so there is a difference. Spring Boot is also known as a Spring enhancer, and it has an embedded Tomcat, which is a servlet container. It means that when you run a Spring Boot application, by default it will run on Tomcat, on the default port 8080 — and we will look into that in the next lecture when we actually move towards the coding bits. You don't need to handle anything manually and you don't need to use XML. There was no auto-configuration in Spring, but in Spring Boot everything is automatically configured for you: all you need to do is place the specific annotation and the rest will be handled for you. That is why Spring Boot is such a relief compared to Spring.

And the biggest benefit with Spring Boot is something that we call starters. What are starters? We will look into the code as well, but starters are essentially the dependencies that you need, wired to a specific version of Spring Boot, so you don't need to worry about version management. Back in the day, another horrific thing to deal with was that different versions of different dependencies did not work with each other. With Spring Boot starters, all we have to do is define which version of Spring Boot we want to use, and then declare whatever dependencies we need — for example, if we need Postgres, we import a Postgres dependency; you may need a library for monitoring, or a library for code coverage — and for most purposes, like testing and web, there are Spring Boot starters. We can import them in one or two lines in our build file, whether we use Maven or Gradle, which are the two best options at the moment. By the way, we will go with Maven for this course and we will explain how it works. Whichever option you go with, you declare the dependencies in your build file without defining their versions; the versions are configured automatically based on which version of Spring Boot you are using. You don't need to declare any beans manually, there is no XML file, there is no such configuration file. All you need to do is use annotations at the class level or at the method level — different annotations mean different things, which we will explore — and all of the configuration will be done for you automatically. It's like magic: you don't need to worry about how these things are wired together. For now, just keep in mind: Spring Boot is a library, an enhancer built on top of Spring, and it handles auto-configuration. You can define a class as a bean, and a bean can be of type component, service or repository — we will see what these annotations mean — and once you register a class as a bean, the dependency injection and the rest of the work will be handled by the container in Spring Boot called the application context. So we don't need to worry about any configuration. Right, that is all for this talk.
3. Spring initialiser: Hello everyone and welcome to the second talk of the microservices and Spring sessions. In this talk we will discuss how we can start creating a Spring Boot microservice using Spring Initializr, and what some of the basic bits are that we need to understand once the project has been created. You don't need to create a Spring Boot project from your terminal anymore; the Spring team provides Spring Initializr for you. If you go to Spring Initializr you will see an online portal where you can choose what kind of project you want. For example, I want to work with Java, so I select Java, and for this project I want to use Maven. For those of you who know Java but don't know about build tools: Maven and Gradle are the two most important modern build tools (there used to be another one called Ant), and nowadays around 62 percent of Spring projects are built with Maven and the rest with Gradle. So if some of you are already familiar with Gradle rather than Maven, please go for it; for this course I'm using Maven. You don't need very comprehensive knowledge of Maven for this series; we just need a few commands, which we will keep exploring, and we will see how Maven works with Spring and what the different entities mean. Next we need to select a Spring Boot version. Remember, in the previous talk we said that the starter projects are configured based on which Spring Boot version you select — we don't need to define the versions of the starters; they are derived from the Spring Boot version you are working with. The group is what you want to call your package in your Java code; for example, I can say com.skillshare.springdemo. The artifact will be the name of your project, and the convention is to start with capital letters, so I can say SpringDemo. You can enter a description if you want. For now we are going with jar packaging: it means that when we want this microservice as an executable, we want a JAR file, not a WAR file, because we are not dealing with the front end. You can go for a WAR file if you want a microservice that serves the front end as well, but for now keep it simple: we want a simple microservice in Java, and you can call your group and artifact whatever you want. The interesting bit is on the right side, where we can add the dependencies we need — and that's where the starter projects make it a breeze. If I search for "web", you will see the option Spring Web, and it says that if you add this dependency, by default your microservice will have Apache Tomcat, and this is the only dependency you need to build a web service. It means you only need to import starter web, and after that all the required bits that you need to spin up a web service will automatically be added to the Maven file, because we are using Maven. I can also say I want Postgres, and it shows me the PostgreSQL driver — Postgres is the database we are going to use, so I select it. Similarly I can search for testing, but we do not need any of those for now, because we want to start simple, and by default the project will come with starter test anyway. In other words, if you want to spin up a simple microservice that is connected to a Postgres database, you can select Spring Web and PostgreSQL, and that's all you need.
Then you can generate it, and you will see it gives you a zip file. Once you open this zip file, we can see what is going on inside this spring demo. You will see that we have a POM file — this is the default Maven file in which all the dependencies will be added (if you had used Gradle, you would have a build.gradle file). Then we have a src directory. Inside src it generates two packages, one for main and one for test. Let's go into main for now. Inside main we again have two directories, java and resources. Let's go into java: this is the group that we selected, com.skillshare.springdemo, this is the artifact name, and you can see that by default it only gives you one default class. What you can do, if you are using IntelliJ like me, is open a project, navigate to this POM file, select it, and then choose "Open as Project". That way it will be opened as a Maven project, and IntelliJ will import and set up everything you need for a Maven project. I have already imported the project into my IDE, so let's go to the IDE. This is the situation here: this is our pom.xml — don't worry about these Docker files yet, we will add them later and you will see what they are supposed to do; they are not doing anything for now. We have this simple microservice, and again we have a src folder with a main and a test folder. In the test folder we are supposed to write our test cases; in main we are supposed to write our code. This was our group ID and this is the artifact ID that we gave, and this is the default class it generates for us. You see that by default it gives you a class with the annotation @SpringBootApplication. You can only have one launcher class in a Spring Boot project, and the launcher class always has this annotation. A lot of complex things are going on behind @SpringBootApplication — for example, it initialises the application context, it looks for the beans and registers them — but you don't need to worry about any of that because we are using Spring Boot. All you need to do is use this class to start running your project, and by default you now have a template for your Spring Boot microservice that we can build on. I have also registered a bean of type RestTemplate, and we will see why we need that when we get to lecture number four. For now, all we have is this launcher class and nothing else, plus the test and main folders.
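For reference, the generated launcher class looks roughly like this — a sketch of the Initializr output using the group and artifact chosen above, so the exact class name in your project may differ:

    package com.skillshare.springdemo;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class SpringDemoApplication {

        public static void main(String[] args) {
            // Boots the embedded Tomcat, creates the application context
            // and registers the beans found on the classpath.
            SpringApplication.run(SpringDemoApplication.class, args);
        }
    }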
The important thing we need to look at now is what we have in our pom.xml, because that is where all the dependencies live. If I open my newly generated template in a text editor, you will see that at the top of the file we have a parent: it means this Maven project is of type spring-boot-starter-parent, and because of that, if I use starter projects I do not need to give any version for them. I have used Spring Boot version 2.4.5. Again, this is my group ID, artifact ID and the description I gave, and I'm using Java 11 — when I picked Java, it gave me 11 by default. The important bits are the starter dependencies: we selected Postgres and starter web, so it gives me those two dependencies, and it also gives me starter test automatically — I did not add anything, but it is there as well. I did not define any version for these; I just defined their names. Then it gives me the Maven plugin, because I'm using Maven for building the microservice, and that is what lets me run the different commands that execute the Spring processes. So all we have in this POM file are starter web, starter test and the Postgres dependency, and that is all I need for building a basic web server. That is why it is so useful and easy to maintain: you don't need to worry about version management. All you did was enter some fields; it gives you the whole template and you are good to go to write your first microservice. For your visibility, in the GitHub repository I have actually added all the dependencies and the starters that we are going to need in the upcoming talks. We will not discuss them in this lecture; we will keep discussing them when we reach those topics, so you will see exactly what each entity is doing in this POM file. The POM file consists of nothing but dependencies and plugins that we use for different things. For example, a dependency can be starter web, which gives you Tomcat and a web service; a dependency can be starter test, which gives you JUnit and the different testing frameworks; it could be MySQL if you were going to use that. There is a huge world of dependencies out there, but in order to start, all you need are starter test, starter web and Postgres, and we will build on top of that. So now we know that we can generate a project from Spring Initializr and we know what is going on in the POM. In the next lecture we will see how we can connect a database to our microservice — we already have the Postgres dependency, so we can connect a Postgres database with our microservice, and we will verify it. Once we are sure the microservice is connected with the database, we will build on top of that to add more features and REST calls to see how they work and interact with the DB. But the next step will be to ensure that the database is there and working fine with our application. One more thing: once you open this project in your IDE, you can run something called Maven install. Maven install goes through the whole POM file, sees what dependencies are there, and if you are doing it for the first time and these dependencies are not available in your local Maven repository, it will install them. So whenever you open a new project, it's a good idea to do a Maven install. We also have a command called Maven clean: when you run it, you will see that it clears the target folder, so that folder will not be there anymore. When I then do Maven install, it will compile the code, get the dependencies, package them, and generate the target folder, which is the executable form of my Java code. Many of you who have used Java know what is in that executable form — the .class files and so on — but now, because we are using Maven, we have simple commands to do all of this. So for now, all you need to know is that you can do Maven install and Maven clean, and that is how you will build.
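A trimmed-down sketch of the relevant parts of the generated pom.xml discussed above (the full file also contains the project's own coordinates and properties; check your generated file rather than copying this verbatim):

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.4.5</version>
    </parent>

    <dependencies>
        <!-- Embedded Tomcat plus everything needed for a web service -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- PostgreSQL JDBC driver -->
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <scope>runtime</scope>
        </dependency>
        <!-- JUnit and the other testing frameworks, added by default -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

Note that no versions appear on the starters: they are inherited from the spring-boot-starter-parent, which is exactly the version-management relief described above.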
We will look into more Maven commands related to Spring Boot, such as how to run the Spring Boot application. We can also run it from the IDE, but we will prefer to run it from our terminal — we prefer to do everything from the terminal. This is the template that we have right now, and we will build on top of it in the upcoming lectures. So that is all for today. Go to Spring Initializr, try to generate a template with all the fields that I have used, open it in the IDE, run a Maven install and see if you can get all the dependencies. You can also go to the GitHub repository, where the whole project has already been set up for you, so you can play with that if you want to. The next talk will be about the Postgres database: how to connect it with the new microservice and how to verify that you have connected it. Thank you very much and I'll see you guys in the next talk.

4. Databases: Hello guys and welcome to the third talk of our microservices and Spring series. In the last talk we discussed what Spring Initializr is and how we can generate a template for a Spring microservice, and we also explored some of the chunks of the POM file, which has some starters and the Spring Boot parent. In this lecture we want to connect our microservice — which is a practically empty microservice for now — with a database. Before we proceed towards writing some endpoints or some real functionality into our service, we want to make sure that there is a database already running in the background, and that our microservice will connect to it automatically once we spin it up. The database we are going to use is Postgres, because in the last lecture we imported the Postgres dependency into our POM file. From now on it will be better if you download Docker for Mac (or the equivalent for your operating system), and please don't forget that you need to install Maven as well — Maven you can install from the terminal. Docker for Mac is just a DMG that you install and it starts running. The easiest way to spin up a database for local development of a microservice is by using Docker, and I'm not going into the detail of what Docker is and how it works here. For now I'm only using Docker, so just try to use it rather than tweak it; we will discuss more about Docker in the Docker lecture. First of all, at the root of the project you need to create a new file called docker-compose.yml. This file is used by what we call Docker Compose, and whatever services you define in it will be spun up on your host machine as an isolated environment. Docker Hub has images for a lot of things — for Postgres, for MySQL, for different versions of Linux — so you can spin up an OS within your OS by just using one command, docker-compose up. This file is also available on GitHub, so you can just copy and paste it into the root of your project. We have a services section; I'm calling my service db; and I'm saying the image I want to use is postgres. When I say image, it means that on Docker Hub there is some flavour of Linux with Postgres installed in it. And I want to run it on 5432.
That is the default port of Postgres. By the way, in the port mapping the port on the left-hand side of the colon is the host port, the one the external world connects to, and the port on the right-hand side is the port inside the container, the one other containers would use to talk to it. We do not have any other containers here, so we simply map the default Postgres port on both sides. In the environment variables, the username is postgres and the default password is password; I want to call my database demo_task; and then I'm enabling the Postgres host auth method so we don't need to type the password. The important bit is under the volumes section. The first line says, on the right side of the colon, docker-entrypoint-initdb.d — the Docker entry point defines the path whose files will be executed once you spin up this container. It means that when I do docker-compose up, whatever I have in my src/main/resources/sql folder will be executed. In my resources I have an sql folder, but there is nothing in there, so I need something in that folder that can be executed — for example some SQL to create a table, or anything else we want that script to do. So first of all we need to put something in that sql folder. Let's create a new file and call it schema.sql. I'll comment out the first line, the CREATE DATABASE line, because my Docker setup already creates this database — I called it demo_task in the docker-compose environment — so it should stay commented out (that is what the two dashes are for). Then I'm saying connect to my demo_task database. And I need a table, of course, so I'm saying create table, let's say my_tasks. Inside this table I'd like to have a task_id of type UUID — you can read about UUIDs; in industry they are usually used for a primary key rather than a string or some less secure option. Then I want a field called owner, a varchar, let's give it length 50, and we need this field, so I'm saying it should not be null. Let's add another field, description, again a varchar of length 50, and again it should not be null. Now, in order to demonstrate that our database actually works, let's do a dummy entry: insert into my_tasks (task_id, owner, description) values, with a UUID that I have validated — by the way, not every string of characters is a valid UUID, so you cannot just type anything; please read about what a UUID is in Java, it is very useful, especially in a microservice — and a description, let's call it "spring lecture". So we are doing one dummy entry, which means we now have a SQL file that this Docker Compose is looking for. The second line under volumes defines where the persistent directory of the database should be: the path on the right is the default Docker path for the persistent data, and I am mapping it to a postgres-data directory in my current directory. So when I run this, a postgres-data folder will appear at the root of my project that holds the actual data I am saving and using for my application. This is a perfect setup for local development. I'm already at the root of my project, and you can see that I have the Docker Compose file here. Now, if I want to spin up my Postgres database, the command is docker-compose up, and once I do that it will try to spin up my database.
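A minimal sketch of the two files described above. The service name, ports and mounted paths follow what is said in the talk, but the database name, credentials and the UUID in the insert are placeholders I have assumed; the exact files in the course's GitHub repository may differ.

    # docker-compose.yml (at the project root)
    version: "3"
    services:
      db:
        image: postgres
        ports:
          - "5432:5432"              # host port : container port
        environment:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_DB: demo_task
          POSTGRES_HOST_AUTH_METHOD: trust
        volumes:
          # every .sql file in this folder runs when the container first starts
          - ./src/main/resources/sql:/docker-entrypoint-initdb.d
          # persistent data mapped to a folder at the project root
          - ./postgres-data:/var/lib/postgresql/data

    -- src/main/resources/sql/schema.sql
    -- CREATE DATABASE demo_task;   -- kept commented out: compose already creates it
    \connect demo_task
    CREATE TABLE my_tasks (
        task_id     uuid        NOT NULL,
        owner       varchar(50) NOT NULL,
        description varchar(50) NOT NULL
    );
    -- dummy entry so we can verify the setup; the UUID is a placeholder
    INSERT INTO my_tasks (task_id, owner, description)
    VALUES ('466d7f8e-0f27-4941-9cde-5fc39b2a1c11', 'demo', 'spring lecture');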
You can also watch the terminal to see what is going on: it says that it is now ready to accept connections. And can you see that the postgres-data folder now appears here as well? So this is all we need to spin up an isolated database, and you can connect any application running on your computer to it. In order to check whether the database is there and set up correctly, you can use any database visualisation tool. I'm using a tool called DBeaver — you can download it for free; the Community Edition is free. In DBeaver I say my database type is Postgres (I need to install the driver the first time), the host is localhost, the database is demo_task, and the password is password — the same one that is in the docker-compose file. I test the connection and it says all good, so I finish. Now this is my demo_task database, and if I go into tables, this is our table my_tasks that we created in our schema.sql. By the way, you can have more SQL files in this directory as well, and on starting the container it will execute all of them, but keep in mind that you cannot guarantee the order in which they will be executed. That is why we use this kind of setup for development, not for production — we cannot guarantee the order of execution; there are advanced tools like Flyway for that, but they are not in the scope of this series. So for now stay content with one file, in case the order would matter for your test cases later on. If I go back to DBeaver, into my_tasks, you will see in the data tab the one dummy entry that we did, and you can see our task is there. Good — it means our database is up and running fine. By the way, in your terminal you can also do docker container ls, because this database is running as a container, and you will see it telling you that the postgres container is running on port 5432 for the outside world and 5432 for internal containers, if we wanted other containers to interact with it. Now there are a few things you need to understand. Let's say I stop it with Ctrl-C and then do docker-compose down. Once I have done docker-compose down, if I again do docker container ls, it doesn't show me anything. In order to free the resources and keep everything neat and clean, it is important to do compose down, because if you don't, the next time you may not be able to compose up. If you forgot and the next time you cannot compose up, you can again do docker container ls, and then docker container rm with the ID that appears there, to remove it forcefully. Right now I do not have a running container, but this is how you remove one if you forgot to do docker-compose down. Now let's say I did compose down and I added a few changes to my schema file. That script will not be executed next time, because the database thinks it has already been initialised, so it will not initialise again. If you want to initialise it again with a new script, with the new changes, then you need to delete this postgres-data folder. Delete that folder, and after that you can just do compose up again.
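For reference, the day-to-day commands used in this lecture, as a shell sketch (run from the project root; the container ID placeholder is whatever docker container ls reports):

    # start the database container defined in docker-compose.yml
    docker-compose up

    # in another terminal: list running containers and their ports
    docker container ls

    # stop and remove the container when you are done
    docker-compose down

    # if you forgot compose down, remove the container by its ID
    docker container rm -f <container-id>

    # force the init scripts to run again on the next compose up
    rm -rf postgres-data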
Because there is then no record of that database in our persistent storage, it will initialise again from whatever you have in the sql folder and run the script once more. So if you make a change to your schema but do not delete the postgres-data folder, there is a high probability that your Postgres will not be re-initialised and you will still have the old instance there. You can always verify this by looking into a tool like DBeaver, which is free to use. So now we have a running database. How do we connect this database to our application? I'm using the default Spring way. You know we have the resources folder, and this file comes with the Spring template: application.properties, the default file in which we keep all this configuration. In this file I'm saying that my server port is 8080, so by default the application should run on 8080, and then there are the lines that define which database it should connect to. I'm saying it is running on localhost, on the port that I defined in my docker-compose, and then I'm giving the name of the DB that I defined there, which is demo_task. It means that when we run the application in the next lecture, it will look at port 5432 for the database demo_task; if that database is there, it will connect to it, and if it is not spun up — not available — then the application will be terminated, because you are trying to connect to a running process that is not there. So make sure that from now on, in your local dev setup, you do docker-compose up. You can leave it running for days and just keep running your application, and it will always connect straight away — you don't need to compose up again and again. Set your schema, compose up, and forget about it. This is a perfect local setup for our database. Now we have our database, and in the next lecture we will look at how we can start building the layers of our microservice and connect with the database we have just set up, and we will keep working on our code to customise our microservice. That is all for this talk. Thank you very much and see you in the next one.
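A sketch of the application.properties described above. The property keys are the standard Spring Boot datasource keys; the database name and credentials must match whatever you put in docker-compose.yml (here they are the values assumed in the compose sketch earlier):

    # src/main/resources/application.properties
    server.port=8080
    spring.datasource.url=jdbc:postgresql://localhost:5432/demo_task
    spring.datasource.username=postgres
    spring.datasource.password=password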
5. Layers of micro service: Hello everyone, welcome to the fourth lecture of our microservices and Spring Boot series. In this lecture we will try to structure our application from the very basics, and we will see what different layers we need, why, and how we can use them. In the previous lecture we spun up our Postgres database, and my Docker is still running, so I can leave it as it is and start structuring my application. First of all, in my base package I can create more packages, which is a good practice because it helps separate the different concerns of the application. You will see I have created controller, event, model, repository, service and transformer packages inside the main package. For this lecture we only need to consider controller, service and repository; we don't need to worry about model, transformer and event, because we will tackle those in the next talk.

The story begins with a controller, so let's create a class called TaskController. The controller is the uppermost layer in your microservice, and this class has the annotation @RestController. It means that this class has the responsibility to serve the content that you can view on the web — if there is another consumer or some front end, this is the layer it is supposed to consume. The service layer lies between the repository and the controller; let's call it TaskService, and this class has the annotation @Service. And then, of course, the repository has the annotation @Repository. So now we have a three-tier application, which is the standard way of doing it in Spring Boot: the uppermost layer is the controller, then there is the service in between, and the repository is at the lowest level. Only the repository should be able to talk with the database; no other layer should access the DB directly. Now the question is how we connect these layers. There is an annotation in Spring Boot called @Autowired, and the rule is that the lower layer is always autowired into the layer above it. For example, the repository is the lowest layer — there is nothing below it, so there is nothing to autowire into it. If we go to our service, what we can do is autowire our repository: once you have autowired a TaskRepository field, you have dependency injection that is auto-configured by Spring. You don't need to initialise the TaskRepository object with the keyword new in Java; once you have autowired it, you can use it as it is and call any method of the repository (we don't have any yet, but you could call any method without instantiating anything). Similarly, in the TaskController the lower layer is the TaskService, so we autowire it there. So the overall picture is: our controller will call some service method, the service will call some repository method, the repository will run a query against our Postgres database, the results will be returned, and those results will be passed from the service back to the controller. The controller should never deal directly with the repository.

Many people argue that most of the time the service is just a placeholder that does nothing but forward a call from the controller to the repo — so why do we need the service layer? It is always good practice to have it, for two reasons. Number one: in complex e-commerce services, it is highly unlikely that a service deals with only one database method. It is not uncommon that a service needs to call multiple repository methods, or that multiple repositories each return a sub-part of the result, which is then aggregated in the service. All such aggregation and filtering should not happen in the repository layer — we should not pollute it; the repository should do nothing but interact with the DB and retrieve whatever the query asks for. The second reason we need the service is caching. On big e-commerce websites, if you filter some results, those results are often cached in the service layer, so if a similar query is executed again soon afterwards, the cached results from the service are used to serve the content rather than wasting more time and computation in the DB. That is why I said the service layer is very important and you should always have a service between the repo and the controller. Now, we said in our introductory talk that we have annotations in Spring Boot so we don't have to configure anything ourselves, and this is the first example of us using them. For example, we used the annotation @RestController.
It means that this class is supposed to talk with the consumer of the service. We have the annotation @Autowired, which means that through that field we can call any method of the service. We have the annotation @Service, which is a type of bean. And there is a third one, @Component: if a class is not a service and not a repository, but it is supposed to be a bean because you are doing something in that class that you want to be handled automatically by the application, then you use the annotation @Component. All of these annotations — @Repository, @Service and @Component — make these classes beans, so you never need to instantiate their objects or do anything manually, because Spring Boot will take care of it. The flow is: you write a method in the controller that is supposed to run when you hit some URL in the web browser; when you hit that URL, the controller sends a call to a service method, the service sends a call to a repo method, the repo fetches the result and gives it to the service, the service gives it to the controller, and then the browser serves the result. We are not writing any proper REST endpoints in this lecture — that will be in the REST talk — but we can demonstrate a simple case of how a controller works. In REST we will see that there are three main types of requests: GET, POST and PUT — PUT for creating or editing, POST for creating a new resource, and GET for getting a resource. For now, let's just use a @GetMapping, a GET request, because we just want to demonstrate how the controller works, and it needs some URL that should be hit so that this method is executed. So let's say my method is public String getGreeting(), and I'm just returning "Hello World". It means that whenever I hit /hello, if my service is running fine, this URL should give me Hello World. And if I go into my config file, which is application.properties in the resources, the default port I'm running my application on is 8080, so if I hit localhost:8080/hello I should see Hello World. Later, when we write proper methods in the controller, hitting a URL like /hello will show the data that my service brings all the way from the database to the browser. Now, our database is already running, so let's go to our launcher class and run it. When we run it, you will see the Spring Boot banner appear on the screen and no errors, because our database is already running. If you have wired this database configuration here but have not actually spun up the Postgres container, your application will probably exit with code 1 or 0, because it is looking for a connection to make but cannot find one. But it is all fine now, because our container is running and the database connection has been made. So if I go to my browser, to localhost:8080/hello, you will see Hello World there. Let's get back to our application and our controller and change it to "Hello World to the Spring series class". We have made a change, so let's rerun it — you can also stop and run it again, which is essentially the same thing. It is loading again; go back to the browser, still loading, and now you will see "Hello World to the Spring series class".
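Putting the layering described in this lecture into a minimal sketch — three classes that would normally live in separate files under the controller, service and repository packages; the /hello endpoint is the one demonstrated above, and the empty bodies are placeholders until the later lectures fill them in:

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Repository;
    import org.springframework.stereotype.Service;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @Repository
    class TaskRepository {
        // Only this layer talks to the database (added in a later lecture).
    }

    @Service
    class TaskService {
        @Autowired
        private TaskRepository taskRepository;   // lower layer autowired into the layer above
    }

    @RestController
    class TaskController {
        @Autowired
        private TaskService taskService;

        // Hitting http://localhost:8080/hello returns this string.
        @GetMapping("/hello")
        public String getGreeting() {
            return "Hello World";
        }
    }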
So this three-tier structure — controller, service and repository — is a must for your Spring Boot microservice. You can bypass the service, but don't do that, for the reasons we discussed. And that is how we will keep structuring our microservice: we have some data in our database, and we will get that data mapped into an appropriate representation that we want to present in the web browser, and then our controller will call the respective methods to create or get the different data that we need, and then we will have a fully working microservice. In the next lecture we will discuss what the transformer, event and model packages are supposed to do, and once we have the idea of what all these packages are for, we will start writing our REST service, so we can hit different endpoints in our browser and tweak our data. After that we will write some test cases, do code coverage, look at logging, discuss how Docker works in the broader context, and add some monitoring to our microservice, and then you will have a fully working microservice with everything that is needed with respect to testing, monitoring, dockerisation and isolation. So that is all for this lecture. You can get the code from the GitHub repository, or just use Spring Initializr to generate an application, create these packages, write a simple call like this, autowire every layer into the layer above it, and then try to run your application while your database is running. Thank you very much and I will see you guys in the next talk.

6. Models and transformer: Hello everyone and welcome to talk number five of our microservices and Spring series. In the previous talk we looked into the three-tier application to start off our Spring Boot microservice: we had repository, service and controller, and we autowired every layer into the layer above it. But out of our packages we still have some unexplored ones: we have a model package, an event package, and a transformer package. In today's talk we will look into these and see why it is important to have these different packages and what exact purpose they serve. First of all, when we spun up our database, you remember that we initialised our schema.sql from Docker, so our database has one table with task ID, owner and description fields. If we have to fetch something from the database and present that information to the browser, or to some other microservice that is a consumer of this microservice — because microservices can be consumers or producers for each other; that is the whole idea of this architecture — then we need some POJO that can represent and translate this database table into the model object that we communicate with. So the first thing we will do is create a new class in our model package, and we will call it Task, because in this series we are dealing with a task management system. It is a simple Java class, no annotation is needed; all we need are the fields that we defined in our table. This class, this model, will represent our task table from our demo_task database. Our task ID was a UUID, and we have a description, and we have an owner — these are the only fields we have for that table. Let's generate a parameterised constructor, and we are also going to need getters. We don't need setters: setters are not good for encapsulation; they are actually considered a code smell.
We also need equals and hashCode. Right — so far so good; everything can be generated for us by the IDE. So now this is our model class. If my database had n tables, then in the most common case I would have n model classes — one class can map to multiple tables as well, but usually that is how it is done in microservices. Now, this model — whatever classes or POJOs I define in my model package — these are my internal models. Internal model means that only this microservice should be concerned with them. But in a microservice architecture we have other services that may want to retrieve this data by hitting an endpoint: our service can be a producer for many consumers, and it can also be a consumer that hits some other service's endpoint and asks for the data it needs. And it is not a good practice to expose your internal model; there are reasons behind that. First of all, in most cases you only give a consumer service what it needs. If my internal model has some fields that hold private data, or data that is not needed by the external services, I should not send that data along with the data that is needed. The key here is: only send what the consumer needs. Secondly, the interaction between services all happens in JSON, and this is a POJO, so we should have some intermediate layer between the consumers and this model that provides some sort of sanity check on whether my internal model has been transformed successfully into JSON, and vice versa. The process of converting your internal model into the JSON class that interacts with other services is called marshalling, and the other way around is called unmarshalling. Now, in a modern tech stack, if you don't have marshalling and unmarshalling classes explicitly, Spring Boot will do it for you, so you may not see a reason to do it explicitly — but there are cases that can ruin this. For example, maybe your internal model handles a date and time field in a different way than the consumer services expect; other services may require you to give them an attribute in a very specific, structured way, which is not how you structure your internal model. You can have multiple consumers, and you cannot keep changing your internal model based on each request, because when you change your model it can break your test cases, your calls, and other parts of your microservice. You are not supposed to change your internal model — it stays as it is — and you accommodate the consumers at the JSON level in the event package. That is what the event package is for. So in the event package I will define a class and call it TaskJson. Now, because our microservice is very basic, try to understand the rationale behind this rather than looking for prominent differences which you cannot see yet. Again a UUID id and a String description — these will be the fields that appear to the consumer; my internal model fields should not appear on the browser. And similarly, let's generate our constructor, our getters, and our equals and hashCode.
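As a sketch of the two classes described above — each would live in its own file, and the field names and constructor order are assumptions based on the table definition, not the exact course code (equals and hashCode are shown only on the internal model for brevity):

    import java.util.Objects;
    import java.util.UUID;

    // Internal model (model package): mirrors the my_tasks table.
    public class Task {
        private final UUID taskId;
        private final String owner;
        private final String description;

        public Task(UUID taskId, String owner, String description) {
            this.taskId = taskId;
            this.owner = owner;
            this.description = description;
        }

        public UUID getTaskId() { return taskId; }
        public String getOwner() { return owner; }
        public String getDescription() { return description; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Task)) return false;
            Task task = (Task) o;
            return Objects.equals(taskId, task.taskId)
                    && Objects.equals(owner, task.owner)
                    && Objects.equals(description, task.description);
        }

        @Override
        public int hashCode() {
            return Objects.hash(taskId, owner, description);
        }
    }

    // External / event model (event package): only what consumers need.
    public class TaskJson {
        private final UUID id;
        private final String description;

        public TaskJson(UUID id, String description) {
            this.id = id;
            this.description = description;
        }

        public UUID getId() { return id; }
        public String getDescription() { return description; }
    }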
Let's say that there is a consumer service that is only interested in the task descriptions created in the last 24 hours. That means you don't need to send the task IDs and the task owners; you can do a translation between the internal and the external model in which you exclude the information that is not needed. Secondly, explicit marshalling makes sure that each field of our POJO is mapped to the respective field of the JSON, so if the scenario changes and some field of the internal model has to be mapped to a different field of my JSON model, I can do that easily in my transformer layer. So by now you pretty much know why we need the transformer. I'll create a class and call it TaskTransformer, and I use the annotation @Component because I want to use it as a bean. The first method in this class will be public, returning TaskJson, and it will be called toJson. This method is supposed to convert the internal model to the JSON model, so we return a new TaskJson built from the task's fields. (There was a small typo there; let's fix it.) So this function toJson takes an object of my internal model and gives me back an object of my external model. Similarly, I can have a method that returns a list of TaskJson, called toJsonList, which will be used when I need to return all the tasks from my database. It takes a List of Task, let's call it modelList, and we import the List. Inside, we create a new ArrayList called jsonList and go for a traditional for loop up to modelList.size(); inside the loop we do jsonList.add with a new TaskJson built from modelList.get(i)'s task ID, owner and description, and at the end we return jsonList. It is the same thing we did in toJson, just for the whole list, because when we retrieve all the tasks there will not be only one task but multiple tasks. The last thing we need is an unmarshalling method, which is called fromJson. This time we don't take an object of Task but a TaskJson, let's call it task, and we return a new Task built from its task ID, owner and description. So you see, this transformer works between the internal model and the external model. In the next talk, when we start writing our REST endpoints, always remember: in your controller layer you should only deal with objects of TaskJson, the event model, whereas in the service and repository layers you always deal with objects of the internal Task model, never the TaskJson model. So in my controller, whenever I need to send something down to the service, first I will change my TaskJson object into a Task via fromJson and then pass that as a parameter. Similarly, the other way round: if something is coming from the service to the controller, first I will transform it via toJson into a TaskJson object and only then return it from the controller. TaskJson should not go below the controller, and Task, which is our internal model, should not come above the service. And that is how we will keep using model, transformer and event, and we can do it for as many classes as we have: every internal model class should have an equivalent event class.
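Here is a compact sketch of the event class and the transformer just described. The TaskJson shown simply mirrors Task with the same three fields, and all class and method names (TaskJson, TaskTransformer, toJson, toJsonList, fromJson) are reconstructed from the narration, so the real course code may differ in detail. The loop in toJsonList is written with an enhanced for loop, where the lecture uses an index-based one; both do the same job. In the real project each class would live in its own file with public accessors.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import org.springframework.stereotype.Component;

// External (event) model: only what consumers should see. Shown here with the
// same three fields as Task for simplicity; it is free to diverge from the internal model.
class TaskJson {

    private final UUID taskId;
    private final String description;
    private final String owner;

    TaskJson(UUID taskId, String description, String owner) {
        this.taskId = taskId;
        this.description = description;
        this.owner = owner;
    }

    UUID getTaskId() { return taskId; }
    String getDescription() { return description; }
    String getOwner() { return owner; }
}

// Transformer bean that marshals between the internal model (Task) and the event model (TaskJson).
@Component
public class TaskTransformer {

    // internal model -> external model (marshalling)
    public TaskJson toJson(Task task) {
        return new TaskJson(task.getTaskId(), task.getDescription(), task.getOwner());
    }

    // marshal a whole list, e.g. when returning all tasks
    public List<TaskJson> toJsonList(List<Task> modelList) {
        List<TaskJson> jsonList = new ArrayList<>();
        for (Task task : modelList) {
            jsonList.add(toJson(task));
        }
        return jsonList;
    }

    // external model -> internal model (unmarshalling)
    public Task fromJson(TaskJson json) {
        return new Task(json.getTaskId(), json.getDescription(), json.getOwner());
    }
}
```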
So that was the event and model packages and the transformer. In the next lecture we'll start building our REST API: we'll define endpoints and see how we use this transformer to communicate between the controller and the layers below. That is all for this talk. Thank you very much and I'll see you in the next one. 7. REST and End Points: Hi everyone, welcome to the next talk in our microservices and Spring Boot series. In this talk we're going to cover our first proper endpoint, and we're going to involve all three layers: controller, service and repository. We will see how these layers should interact with each other and how we can implement a proper microservice. Before that, let's add another sub-package that we have not added before, and let's call it exception. Those of you who are familiar with exceptions in Java know it is always a good practice to define our own exceptions, and we should do it because this way we can use different exception classes for different layers. So I will create a class and call it RepositoryException, and RepositoryException of course extends Exception. Let's have a constructor for RepositoryException; in this constructor let's pass a String message and just invoke the constructor of the Exception class. Now let's have another one; it is quite normal to have two constructors, one with just the message and the other with a message and a Throwable. Throwable is the parent class of Exception, so it carries the underlying cause. So I pass a message plus another parameter that holds the cause, and again I call the constructor of the Exception class with both. That is all we need. Now, in different places in the code we can throw our RepositoryException. Let's create another one and call it ServiceException. The benefit of having two different classes is that we throw the appropriate one in different places, so we know straight away which layer is responsible for an exception. Let's just use the same constructors in this class and only change the class name, calling super with the message; of course it needs to extend Exception so that it has the Exception constructors available. So now we have two exception classes that we will use again and again. Let's now get back to our three layers, starting from the repository. I have defined a method called getAllTasksFromRepository, and you will see that I have autowired a JdbcTemplate near the top of the class. By the way, now we also know why I declared a RestTemplate as a bean earlier: later on we would like to use a RestTemplate for our testing, and we don't want to instantiate it again and again for every call. But we do not need to create a bean of type JdbcTemplate, because it comes with the starter JDBC dependency that we have in our pom; all you need to do is autowire it like this, and then I can use the JdbcTemplate in my repository layer. So first of all, let's have a try block. In this try block, let's write my query, which is very simple in this case: select everything from my table, and its name was my_tasks. Then of course we need a list of the internal model, because we are only talking to the internal model here at this lower layer; let's call it tasks.
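Before filling in that query, here is a sketch of the two exception classes just created; in the real project each would live in its own file under the exception package, both public.

```java
// Layer-specific exception thrown from the repository layer.
public class RepositoryException extends Exception {

    public RepositoryException(String message) {
        super(message);
    }

    public RepositoryException(String message, Throwable cause) {
        super(message, cause);
    }
}

// Layer-specific exception thrown from the service layer (same shape, different name).
class ServiceException extends Exception {

    public ServiceException(String message) {
        super(message);
    }

    public ServiceException(String message, Throwable cause) {
        super(message, cause);
    }
}
```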
To fill that list, we use jdbcTemplate.query. If you expect multiple results you use query; if you are writing a query that returns a single result you use queryForObject. The first parameter is the SQL. For the second parameter we use a lambda over the result set, taking the result set and the row index. Inside it we say that the UUID taskId is equal to resultSet.getObject, where the column is the task ID from the database and the type of that object is java.util.UUID.class. Secondly we need the description, which is resultSet.getString with the column name description, and then the owner, again resultSet.getString with the owner column name from the DB. What do we want to return? We want to return the internal model, so we build a Task from the task ID, the description and the owner. Where the try block ends, we return the list. Now this try block needs an equivalent catch block in which we catch DataAccessException, and in that catch block we use our own RepositoryException, because then, when something fails and we look at the exception, we know for sure which layer it came from: we give it a message along the lines of "failed to fetch tasks" and also pass the original exception along, using the second constructor. That looks good. So now we have our first method at the DB level that retrieves everything from the database. This method should be called from the service layer, so it's time to get back to the service layer. In the service layer we have already autowired our repository, so let's define the method. It will be public and return a List of Task; this is the service layer, so again we play with the internal model, not the TaskJson model. Let's import the list and call the method getTasks, and declare that it throws ServiceException if some exception happens here. In this method we try to return taskRepository.getAllTasksFromRepository(), which is what we want, and there should be a catch clause for RepositoryException, let's call it e, in which we throw a new ServiceException, give it a message like "could not get tasks" and pass the exception e as well. So you see that at the method level I declare throws ServiceException, and explicitly in the catch block I say that I cannot get the tasks. That is how the separation of exceptions per layer is useful. Now we get to our controller, and let's use the full mapping, so the annotation will be @RequestMapping. In the request mapping, the value will be something like /users/tasks, and we need to define the method: we are playing with REST here, and I'm saying that my request method is GET. The return type of this public method will be ResponseEntity, and the ResponseEntity should wrap a list of, since this is the controller, TaskJson. Let's call this method getTasks and define it, importing our List. Now, this is the place where we're going to use our transformer, because the return type of this method is TaskJson, but the methods underneath it in the other layers use the internal model. So inside a try block we declare a List of Task, call it tasks, and assign it taskService.getTasks(). And next I need a List of TaskJson, let's call it taskJson.
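Before finishing the controller, here is a sketch of the repository and service methods described so far. The table and column names (my_tasks, task_id, description, owner), the class names and the exception messages are reconstructions from the narration; the course's GitHub repository is the authoritative version, and in the real project each class lives in its own file as a public class.

```java
import java.util.List;
import java.util.UUID;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

@Repository
class TaskRepository {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // Fetch every row from the task table and map each row to the internal model.
    public List<Task> getAllTasksFromRepository() throws RepositoryException {
        try {
            String sql = "SELECT * FROM my_tasks";
            return jdbcTemplate.query(sql, (resultSet, rowNum) -> {
                UUID taskId = resultSet.getObject("task_id", UUID.class);
                String description = resultSet.getString("description");
                String owner = resultSet.getString("owner");
                return new Task(taskId, description, owner);
            });
        } catch (DataAccessException e) {
            // wrap the Spring exception so the failing layer is obvious later
            throw new RepositoryException("Failed to fetch tasks", e);
        }
    }
}

@Service
class TaskService {

    @Autowired
    private TaskRepository taskRepository;

    // The service works only with the internal model and translates repository failures.
    public List<Task> getTasks() throws ServiceException {
        try {
            return taskRepository.getAllTasksFromRepository();
        } catch (RepositoryException e) {
            throw new ServiceException("Could not get tasks", e);
        }
    }
}
```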
For that I need to autowire my transformer as well, so I autowire TaskTransformer, and now I can call transformer.toJsonList, you remember our method, and this method takes the list of tasks. So what we're doing is transforming one kind of list into another kind. Then of course we return a new ResponseEntity, and the new ResponseEntity carries the taskJson list and HttpStatus.OK, which is 200. By the way, you should know which HTTP status to return in different cases, and you will know that if you look into the HttpStatus values and what each status actually means; each one has a different meaning, and for a successful GET request we want 200. Now we need a catch. In this catch we handle ServiceException, call it e, and return a new ResponseEntity with HttpStatus.INTERNAL_SERVER_ERROR. So if I end up in the catch clause I return a 500, and you will probably see that many times; otherwise I get 200. Now we have all the layers, so let's run our application. It will run on 8080, so let's go to 8080/users/tasks. When you hit that, you will see the one task we added when we initialised our schema. I will add all the REST requests for creating, updating and getting an individual task by ID, and you can get them from the GitHub repository. This is all for this talk. Next time we will move towards testing. This is the way you will always use these three layers, interacting between the internal and the external model, and this is best practice for using REST in Spring Boot. I'll see you guys in the next topic. Thank you very much. 8. Integration Testing: Hello everyone and welcome to the next talk in our microservices and Spring series. This talk will be about testing. In the previous lecture we implemented our REST endpoints, and you can get the code for all the endpoints from GitHub. In this lecture we're going to see one of the several options that we have for testing. Spring comes with a lot of powerful testing tools, including MockMvc, WebTestClient and RestTemplate. For this talk we're going to cover WebTestClient, and of course you can read about RestTemplate and the other frameworks as well if you need them. RestTemplate and WebTestClient are the two most popular modern Spring Boot approaches we use for integration testing. When we talk about microservices, integration tests are very important. I'm not saying that unit tests are useless and you shouldn't write them, but the real value in a multi-layered application like a microservice lies in the integration tests. An integration test is any test that covers more than one layer, and WebTestClient and RestTemplate let you test all three layers along with the DB itself if you want to go for such an integration test. There is a problem with integration testing, though. For example, if I want to test an endpoint that creates a new resource, in our application a new task, and I use the same database connection as my primary application, then the test will pass, but afterwards we will have a real entry in the database; the same thing happens for delete. This is because POST, PUT and DELETE in REST APIs are not idempotent: executing them changes the resource.
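As a quick recap before moving on to testing, the controller just finished might look roughly like this. The request path /users/tasks and the method name are taken from the narration, so the exact signatures in the course repository may differ.

```java
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TaskController {

    @Autowired
    private TaskService taskService;

    @Autowired
    private TaskTransformer transformer;

    // The controller only ever exposes the event model (TaskJson), never the internal Task.
    @RequestMapping(value = "/users/tasks", method = RequestMethod.GET)
    public ResponseEntity<List<TaskJson>> getTasks() {
        try {
            List<Task> tasks = taskService.getTasks();
            List<TaskJson> taskJson = transformer.toJsonList(tasks);
            return new ResponseEntity<>(taskJson, HttpStatus.OK);          // 200 on success
        } catch (ServiceException e) {
            return new ResponseEntity<>(HttpStatus.INTERNAL_SERVER_ERROR); // 500 when the layers below fail
        }
    }
}
```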
A GET request, on the other hand, is idempotent: it doesn't matter how many times you execute it, you will not actually change anything in your database. Because we cannot sensibly test POST, PUT and DELETE requests against the real database, what we like to do is spin up a different database, preferably an in-memory one; we do not want to use our Postgres when we do testing in our microservice. You remember that in the resources of our main source set we have application.properties, where we define the connection to the Postgres DB running on a specific port on localhost. Spring gives you an option: you can define an application-test.properties in the resources of test, and in that config file I am saying that my database should be of type H2 and it should be an in-memory database. That means whenever my test cases start executing, this database is connected, and after all the test cases have run, this database is destroyed and closed down. These are the default values we use for H2. If we go into our pom.xml and I search for H2, you will see I have imported a dependency for H2, and I also have another dependency, spring-boot-starter-data-jpa; that is the dependency you use to handle the H2 database operations. I added these two dependencies, did a Maven install, and now I am able to use them in my test cases. Now, just like we have schema.sql in the main resources, which populates the DB when we do docker-compose, I have another file called data.sql. It is very similar to what we have in schema.sql, but this one is only executed against the H2 database; it is the placement of these files that makes that possible. So what happens whenever you run any test case: the DB from application-test.properties is spun up, whatever you define in data.sql is executed, so your database has this one entry in the task table plus whatever tables you like, and then the test cases run against that database. Once the test cases are finished, the database is closed and destroyed, because it is in-memory. This is a very good setup for local development, because we test against the same kind of schema we use in production without introducing any noise into our real DB; we do not want to populate our main database with useless data used only for testing. Also, in your pom.xml we have what we call the WebFlux dependencies: those two dependencies are what we need for WebTestClient. Whenever you want to use some modern library with Spring Boot, always start from the documentation, where it is written precisely which dependencies or plugins you need to add to your Maven or Gradle build in order to use it. You add them to your build file, do a Maven install, and after that you are able to use those dependencies. A very common mistake for beginners is to keep trying to use something that is not even in the dependency tree. So always start from the Maven side and then come to the code.
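For orientation, a typical H2 test configuration along these lines might look like the following; the exact keys and values in the course repository may differ, so treat these as illustrative defaults. The matching data.sql with the seed row sits next to it in src/test/resources.

```properties
# src/test/resources/application-test.properties -- in-memory H2 used only while tests run
spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
```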
So, I have created this test class in my test folder, not in my main folder, and it is important that these classes have the word Test in their name, because when you run the test cases by typing mvn test on the terminal, that is how Maven knows which classes to run. Now let's see what we can do with WebTestClient. We need the @SpringBootTest annotation, and in it we say that we want a webEnvironment with a random port. Random port means that whenever our database is spun up and populated, our application can run on any port, not necessarily 8080; it can be 1990, 1992, whatever is selected by Spring. After that, let's write our first test case. First of all, always remember that because we are using an external library we need to autowire it; the class is WebTestClient, and now we can use it in all our test cases. Just like in a unit test, we use the @Test annotation to mark a test case, and let's call this test case taskGet. We can say webTestClient.get().uri(...), and in the URI you are supposed to give a real endpoint, as you defined it in your controller: the tasks path followed by some ID. That row is inserted into our database via data.sql when we run the test case, so let's test against that ID. And I am saying that for this resource the expected status of the returned result is OK, which is 200. If you remember from the previous lecture, whenever we get all the tasks or get a task by ID we return 200, and if we cannot find it we return 500, internal server error. We can also check another thing: after the exchange we can expect a body, so we are asserting that the 200 does not come back empty but carries a body. That is another simple check that verifies whether the returned result has a body along with the HTTP status. And if you read through the documentation of WebTestClient, there is a whole list of such assertions you can make. Now, this will be part of your project: you have to write a WebTestClient case for a POST request rather than a GET. In that case you are also supposed to give the body of the task object, and you need to figure out how to write that test case; the POST endpoint is already there, so you just need to write a test in this class that validates that your POST endpoint is working. We have this simple test case for the GET request, and when I run it, one test case runs, taskGet: it spins up the in-memory DB, runs the test case, and the test passes. If I change this to some other UUID that does not exist in data.sql and therefore does not exist in our H2 database, you will see that the test case no longer passes. That validates that it is actually running against the H2 database.
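A sketch of such a test might look like the following. The path and UUID are placeholders (the real test uses the id seeded by data.sql), the class name is invented for illustration, and the @ActiveProfiles("test") annotation discussed in the next lecture is included so the H2 profile is picked up.

```java
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.web.reactive.server.WebTestClient;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test")
class TaskGetWebTestClientTest {

    @Autowired
    private WebTestClient webTestClient;

    @Test
    void taskGet() {
        webTestClient.get()
                // placeholder id -- the real test uses the UUID inserted by data.sql
                .uri("/users/tasks/123e4567-e89b-12d3-a456-426614174000")
                .exchange()
                .expectStatus().isOk()          // controller returns 200 when the task exists
                .expectBody()
                .consumeWith(result -> Assertions.assertNotNull(result.getResponseBody()));
    }
}
```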
One more thing before we conclude this WebTestClient session: you can also run all the test cases with a simple command, mvn test; you don't have to use the IDE every single time. Similarly, you can use a Maven command to run your Spring Boot application instead of the launcher button in the IDE. We actually prefer to do it from the terminal, because when we deal with servers and pipelines we do not have any IDE available, so we always like to run and validate these things from our terminals. So you see, mvn test detects the tests, runs them, and they pass. If you want to run your application via the Maven plugin, the command is mvn spring-boot:run; this is equivalent to launching your main class with the Play button. It is up to you which way you prefer; I would recommend going via the terminal, because that is the best way to get comfortable with these things. Also, once you add some dependency, don't forget you need to do a Maven install so it fetches those dependencies into your local Maven repository. After that, there is another command, mvn dependency:resolve-plugins: if you have introduced new plugins in your pom.xml, this command resolves them against the current version of your Spring Boot setup. So: do a Maven install, resolve your plugins, and then run mvn spring-boot:run or mvn test from the terminal. That is how you will use your WebTestClient. That is all for today; in the next talk we will cover code coverage, which is very important for measuring how many test cases you have against the actual code, and we will see what a good coverage ratio is and what the best way of measuring coverage is. Thank you very much and see you in the next talk. 9. Test coverage: Hello everyone and welcome to the next lecture in our microservices and Spring Boot series. In the previous lecture we covered an integration testing framework called WebTestClient, and we saw how, if we define an H2 database to avoid polluting our Postgres, we can run our test cases against an in-memory database that is destroyed once the test cases are completed. And in order to make sure that the test cases actually use application-test.properties from the test resources rather than the main application.properties, always remember you need the @ActiveProfiles annotation at the top: whatever profile name you give there, Spring picks the application properties file with that profile. For example, if I give it the test profile, it follows application-test.properties rather than application.properties. We also saw with WebTestClient that we had to add two dependencies, the WebFlux ones, because that is what WebTestClient is based on. Now there is another very popular approach for integration testing in Spring Boot that you will see a lot, which is the integration test based on RestTemplate. Unlike WebTestClient, when you use RestTemplate you don't need to import any dependency in your pom or your Gradle file, because RestTemplate comes by default with spring-boot-starter-test, which we already have in our pom from when we initialised the project with Spring Initializr; we do not need any other dependencies. Similar to the previous test case, we again define that our Spring Boot test can run on any port and that we want to use the test profile in this class. What I'm doing here is using an annotation called @LocalServerPort and giving the field the name randomServerPort: when you start executing this test case, it can again run on any local server port that is available.
It doesn't have to be 8080; it is selected by the framework itself. Secondly, I'm again autowiring the RestTemplate, just like I autowired WebTestClient in the previous example; whatever framework you need to use, you have to autowire it to get the dependency provided. In this test case I want to test the get-task-by-ID endpoint that we have in our controller. So I have created a TaskJson object here, not the Task object, which is the internal model, but the TaskJson object, the external or event model. Next I define a base URL: localhost, whatever port gets selected, slash the request mapping for my get-by-ID method. If I go back to my controller, you will see that I have a method with this mapping: it gives me get task by ID, and it returns 200 if a task exists with that ID; if not, it gives me a bad request or a 500 internal server error. Back in the test case, I create a URI from that URL, and I use getForEntity because I want to test a GET call. I could also test other calls, for example POST or PUT; all verbs are available with RestTemplate, but here I'm using getForEntity. A good practice exercise for your project is to write more integration tests based on RestTemplate for the POST and PUT endpoints that are already defined in the controller. The first parameter of the GET request is the URI, and the second is the class, the model class. I'm saying here that what you get back when you hit this URI should have a body shaped like TaskJson; we are giving our JSON model class as what we expect from that endpoint, and we store the returned result in this result variable. Again, like WebTestClient, there are many ways to write your assertions. The simplest one is to assert that the returned status code value should be 200, because I am using a UUID that is available in data.sql, so it should give me 200; if that UUID did not exist in my H2 database, it would give me 500. Then I assert that the returned result not only has an HTTP code but also has a body, and the body type is the TaskJson we defined. Then I assert that the value of the task owner is Ray: in data.sql it is also Ray, so if I compare what we get from the DB with what we define in the test, the owners should match. Overall, what we are doing is hitting a URL; behind that URL the controller calls the service, the service calls the repository, the repository gets the data, and that data ends up being checked by my assertions. So do not forget that this is not just a test case for the controller: it is actually testing your whole application, all layers together. When I run this test case, similar to the WebTestClient one, it again spins up the H2 database, executes data.sql and then runs this test case, which is the only one in the class, and the test passes, because this entry exists in the database and the assertions hold.
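A sketch of this RestTemplate-based test might look as follows. The path, UUID and owner value are placeholders standing in for what data.sql seeds, and the test assumes a RestTemplate has been declared as a bean in the application, as mentioned earlier in the course; depending on your Spring Boot version the @LocalServerPort import may live in a different package.

```java
import java.net.URI;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.web.server.LocalServerPort; // org.springframework.boot.test.web.server in newer Boot versions
import org.springframework.http.ResponseEntity;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.web.client.RestTemplate;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test")
class TaskGetByIdRestTemplateTest {

    @LocalServerPort
    private int randomServerPort;

    @Autowired
    private RestTemplate restTemplate; // declared as a @Bean in the application, per the lecture

    @Test
    void getTaskById() throws Exception {
        // placeholder id and owner -- the real values come from data.sql
        URI uri = new URI("http://localhost:" + randomServerPort
                + "/users/tasks/123e4567-e89b-12d3-a456-426614174000");

        ResponseEntity<TaskJson> result = restTemplate.getForEntity(uri, TaskJson.class);

        Assertions.assertEquals(200, result.getStatusCodeValue()); // 200 because the id exists in the test DB
        Assertions.assertNotNull(result.getBody());                // the body is unmarshalled into TaskJson
        Assertions.assertEquals("Ray", result.getBody().getOwner());
    }
}
```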
Again, if I run my test cases now with mvn test, Maven runs two tests, because we have one test in the RestTemplate class and one test in the WebTestClient class, and mvn test picks up all the test classes and all the methods annotated with @Test. These are the two frameworks you will see in almost every Spring Boot code base, and they are the most common ones used for integration testing. Now, when we do testing, many companies and many projects require your testing to reach an adequate percentage. For example, if I have written 100 lines of code in my project and the test coverage comes out as 50 percent, it means that 50 percent of the lines of code I have written have been exercised by tests. I do not believe there is anything meaningful about 100 percent code coverage, because I can write plenty of useless, redundant test cases that do not prove anything but still increase the coverage, so you should never aim for 100 percent. With a combination of good coverage and good, logical test cases, a good percentage is considered to be between 65 and 72 percent, and higher than that is desirable; around 70 percent is more than enough to show that you have thoroughly tested your code base, provided you are not adding superfluous test cases just for the sake of inflating the number. In any case, the percentage is an important thing to measure, and when you work on these projects it is crucial to measure your tests and their coverage. You should also be able to see which chunks of the code have been neglected by the testing effort, so you can put more work into writing test cases for those specific areas. If I go into the pom.xml, there is a very important and well-known library that we use for code coverage, which is called JaCoCo. You will see that I have a dependency for the jacoco-maven-plugin and also a plugin entry for JaCoCo; you need to add these two things to your pom, do a Maven install, do mvn dependency:resolve-plugins, and once you refresh your pom you should be good to go with JaCoCo. How does it work? If I go to my terminal, I can type mvn clean to clean the target folder, then jacoco:prepare-agent, then install to build the package into target again, and then jacoco:report. So I'm cleaning my repo, preparing the JaCoCo agent, installing to get the new executable files, and generating the report; that is the full command we can use for JaCoCo. Once I hit enter, you will see that Maven cleans the project, and the install also runs all your test cases, to make sure they pass; otherwise it will not generate your report. That is why we use install here: it cleans, compiles, tests, creates the executable and then generates the report, so it is a comprehensive command to make sure your test cases pass and you get your coverage report. Now, once that has run and our build is successful, we can go into our target folder. The target now has a folder called site; if I open it there is a jacoco folder, and inside there is a file index.html that contains the whole summary. If I open that file in the browser, you will see the bird's eye view of my code coverage.
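For reference, the JaCoCo plugin section of a pom.xml usually looks something like this (the version shown is only an example); the one-line terminal equivalent of the workflow above is mvn clean jacoco:prepare-agent install jacoco:report.

```xml
<!-- pom.xml: typical jacoco-maven-plugin setup (version shown is only an example) -->
<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.7</version>
    <executions>
        <!-- attaches the coverage agent to the test JVM -->
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <!-- writes the HTML report to target/site/jacoco after the tests have run -->
        <execution>
            <id>report</id>
            <phase>test</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```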
Looking at the report: because we only have two test cases, one using RestTemplate and one using WebTestClient, our coverage is very poor; overall coverage is just 24 percent, and only the packages with some green in them have any of their code covered. There are others for which we have not written any tests at all; for example, we have not written any test for our exceptions. We can also go into these packages and the report gives the individual percentages, so it is a very precise overview of your coverage. Once you have written your test cases and made sure you have covered all the unit tests and all the endpoints in your integration tests, you can always generate the report and see how things stand with respect to testing. So that was our integration testing via two frameworks and our code coverage. In the next topic we will cover logging: what logging is, why we need it, and how we do logging in industrial projects. That is all for this lecture. Thank you very much and I will see you in the next one. 10. Logging: Hello guys and welcome to the next lecture in our microservices and Spring Boot series. In this video we will talk about logging. First of all, what is logging? When you do local development in any language and you get some exception, your program crashes, or some unexpected behaviour throws a specific exception, like the RepositoryException and ServiceException we defined two lectures ago, it appears on the console, in the IDE or in the terminal, wherever you are coding. You can see the stack trace on the screen, find out what is wrong and localise the issue in your project. But once I deploy my microservice onto some VM, I do not have access to the console of that VM, and even if I do, it is a tiring process to SSH into the VM just to check whether there are any exceptions or to see what is going on with my service. Especially if you are dealing with a lot of microservices deployed on different VMs, you cannot keep SSHing into all of them to watch their output. That is why we need logging. Logging is a practice in which you define a file, a place or a console where all the logs keep accumulating, and you define your logging policies: for example, logs can be cleared after a specific number of days, or after they have reached a specific size. We cannot give infinite memory to our logs, so we usually clear them after a specific time or once they reach a specific size, because memory is expensive on VMs. In Spring Boot there are two popular frameworks used for logging: one is called Log4j and the other is called Logback. And there is a very popular facade called SLF4J. SLF4J is just a wrapper that can sit on top of any logging framework underneath. I could do my logging using the syntax and semantics of Log4j or Logback directly, but if I then had to switch to some other logging framework, it would break my code and I would have to refactor every place where I log something.
The benefit of the SLF4J library is that it gives you a uniform syntax and way of logging, whatever framework you use underneath in Java, so I strongly suggest using SLF4J. In the example I'm going to show you, I am using SLF4J and Logback. First of all, if I go into my pom and search for log, I have a logback-classic dependency, and if I search for slf4j, I have the SLF4J dependency; you need these two dependencies to start logging with Logback and SLF4J. So again, logging is a process that keeps accumulating information about your application at a specific location that you can check with ease at regular intervals, rather than SSHing into different places. Now, you know we have our resources folder in main, which contains application.properties. If you are using Logback, you create a file called logback.xml in your main resources; this is the default location that Logback uses, and this file will always live there. I don't have to use an XML file; I could also configure my loggers in code. But it is highly recommended not to couple your logging configuration with your application code, because it will be very difficult to decouple it later or to switch frameworks. So I would never suggest defining your loggers in the code: keep them separate in an XML file so you can easily change, enable or disable them. In this file we have two main entities, and it will always be the case: one entity is called an appender, and the other is a logger that refers to that appender. There should be at least one root logger in your Logback file; you can define multiple loggers associated with multiple appenders, but at least one root should be there. Let's look at the appender first. I am saying that my appender should roll by size: I want to define a limit on logging based on size, not on date, and I'm using the framework class called RollingFileAppender. That means that once the memory limit is reached, some rules kick in, and we'll see how I define those. My logs should appear in my target/slf4j/roll-by-size folder; you can give any path here, and usually when we do a deployment we have a separate logging server and give the path to that server, so the logs from all services accumulate in a single place. Then I'm saying I want a roll-by-size policy: the maximum index means the maximum number of archived files I can have is three, not more, and each file cannot be greater than 50 KB. So when we start logging and the first file reaches 50 KB, it rolls based on size, which means it gets zipped, and the next logs go into a fresh file; once that again reaches 50 KB, it is zipped too. I can have at most three zip files, so when the limit is hit again, the fourth zip replaces the oldest of the three. That is how I make sure I am zipping old logs and saving memory. Of course, this 50 KB is not realistic; it is just for the sake of demonstration. We would usually give ten to 20 MB in a small microservice.
That way we have a considerable amount of logs if we need to track down an error or some unexpected behaviour. As for the format and the other appender settings, you can easily go through the Logback documentation to see how to define more types of appenders. The other part of this file is the root-level logger, and it refers to the roll-by-size appender; a logger is connected to an appender like this. One interesting thing here is that I'm saying the root level should be debug. Now, what is the benefit of doing this in XML? There is another one. When we do logging we have these levels: trace, debug, info, warn and error (in different languages and frameworks you may see one or two more, such as fatal, but these are the most common). We defined this logger at the debug level, and once you set the level of a logger, everything at that level or more severe is logged. So with the root level at debug, statements written as logger.debug or logger.error will appear; but if I raise the level to error, the debug statements will no longer appear. It is a very powerful thing to set your logging level like this, because I can leave logging statements at different levels all over the code, and when I'm doing my development I can enable the debug level, but when I go for deployment I can switch to the error level, because I'm not interested in debug output any more. What happens then is that it simply drops the debug statements from the output and only logs the error statements. So without touching any code I can change the level here, and only the statements at that level or above will be logged. For example, if I go to my controller, just a simple example: in whatever class you want to use the logger, the SLF4J declaration always looks the same. I declare a static final Logger object and say that it belongs to this class, and then I can use it anywhere. Here I'm saying that if I hit this endpoint and get a 200, I do logger.debug, but if I get a 500, I do logger.error. It means that if I set my level to error, the first statement will not be logged, whereas if I set my level to debug, both logging statements will appear, for the 200 and the 500 respectively. This is why configuring it like this decouples the level logic from the code, and I can easily change it without worrying about the code. And if I change my logging framework, I don't need to touch this code at all, because the next framework will also sit behind SLF4J, which keeps the same syntax; the only difference is that the new framework will probably have a YAML file or a different XML syntax for setting the level. But I do not have to refactor my Java code, which is a very powerful thing.
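A logback.xml along the lines described in this lecture might look roughly like this; the appender name, file paths and pattern are illustrative, and the 50 KB / three-archive limits are the demo values mentioned above.

```xml
<!-- src/main/resources/logback.xml: roll-by-size appender plus a root logger at debug level -->
<configuration>
    <appender name="rollBySize" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>target/slf4j/roll-by-size/app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <!-- at most three zipped archives; the oldest is replaced when a fourth is created -->
            <fileNamePattern>target/slf4j/roll-by-size/app.%i.log.zip</fileNamePattern>
            <minIndex>1</minIndex>
            <maxIndex>3</maxIndex>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <maxFileSize>50KB</maxFileSize>   <!-- demo value; 10-20 MB is more typical -->
        </triggeringPolicy>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- switch "debug" to "error" before deploying to silence development-level statements -->
    <root level="debug">
        <appender-ref ref="rollBySize"/>
    </root>
</configuration>
```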
Now let's run this application. After running it, the logs land under target/slf4j, as I configured, and you can see them here. You will see that I have already run it a few times, so I have three zips, and this is the file in which the fresh logs are accumulating. At the moment my log level is debug, as defined in logback.xml. Let's say I clear this file, because I want to see fresh logs for the sake of demonstration, and I hit the URL again, the one where we do logger.debug in the code. If I go back, you will see you now have fresh information, and if I search for my statement, here I'm logging that the request was successful with 200, which is exactly what I log in the code. If I want to deploy, all I do is change my level here to error, or whatever has been advised for your project, and then this file will only show me the statements at that level; this debug statement will not appear here any more. It is such a powerful way of handling logging. And of course, in a deployment I can set my level with a runtime environment variable, and I can give the path to wherever I want the logs to go, in the form of an app.log file. That is all you need to know about Logback and SLF4J: simply add the dependencies, define your logback.xml, and all you need is one line to start using the logger; then logger.debug, logger.error, whatever your logic is, and you can log statements across your whole code base at different levels depending on whether they are meant for development or production. You just set the level before you go for deployment, and everything works without tweaking or changing anything in your code base. That's all for this lecture. Thank you very much, and I will see you in the next one. 11. Docker: Hello everyone and welcome to the next lecture of our Spring Boot and microservice series. In this lecture I will show you how you can containerise your executable microservice, so it works in a consistent way no matter what deployment environment is required. What does that mean? We know that in Java I can generate an executable jar file, and that jar file can simply be uploaded to any VM. That VM can be CentOS, it can run Red Hat, it can run Ubuntu, it can run on an ARM processor with some Linux flavour, and it can have Java 8, Java 11 or Java 15; you never know. So let's say I generated my executable file with Java 8, and the VM has some conflicts with regard to the version of Java and other packages; then I cannot guarantee that my service will work as expected. It is not environment agnostic. To make it environment agnostic, we use Docker. When we use Docker, we define our operating system, we create an image based on that operating system, we keep the executable version of our file in that image, and whenever that image is run with the docker run command, our service starts running straight away, automatically. So it does not matter whether you run that image on CentOS, Red Hat or macOS: the image packages the whole OS environment the service needs, so my application will always work in a consistent way. The present and the future definitely belong to containers, and this is an important skill to learn. First of all, we need to create a file without any extension at the root of our project, and this file is called Dockerfile. In this Dockerfile I'm saying that I want to use CentOS 7; because I have a Java application, I want to install a JDK in that operating system; after installing the JDK I define my working directory; I copy my jar file from my target folder to /app/; and I want this java -jar command, which simply runs that file, to start automatically whenever the image is spun up as a container.
And because our service is running on port 8080, I want to expose that port. Once I have this file, I can simply run a docker build and a docker run command. But every single time I make a change in my code, I need to generate a new jar that gets copied in with the new changes, type the build command again, type the run command again, and repeat these steps over and over. So, to save some time on unnecessary commands, we can use docker-compose. Docker-compose is one layer above Docker. If you remember from our lecture on Postgres, we already had a db service in our docker-compose, and all we had to do was docker-compose up and our database started spinning up. Another beauty of docker-compose is that you can define multiple services that are spun up as multiple containers, and those containers can communicate with each other. So now, just like the db, I have defined another service, and I'm calling it my-service. Build is followed by a dot, which means build from whatever Dockerfile is available at this path; this build line is what executes all the Dockerfile steps. I'm saying that this service depends on db, and when I say depends_on db, it means docker-compose will make sure the db service is spun up before my-service. This is important because, if you remember, if we run our application and the database is not there, our application crashes, so we need to spin up the db first. I'm calling my container spring-demo, and of course I'm defining the port mapping 8080:8080. The port on the left-hand side is published on the host, so that is what the external world, and localhost, connects to; the port on the right-hand side is the container's internal port, and that is what other containers talk to. Here it seems harmless because both are the same, but in a real code base these ports may well differ, so always remember: localhost connects to the host-side port, and containers communicate with each other over the container's internal port. Now look at this environment entry: it is similar to what we define in application.properties, that we want to make a connection to this database, but there we said it was running on localhost. Here I'm saying it is reachable via db, because now my application is not running on localhost, it is running as a container, and one container can reach another container by the name of that container, which is db, together with the database's internal container port. So I'm not using localhost here. If I comment this out, I can still run my service externally, just like we have done so far, without changing anything; and if instead I changed localhost to db directly in application.properties, I would not need this line, but then every time I wanted to run the service outside of a container for some reason, I would have to change it back again. This way, I am simply overriding that localhost with db here and I don't need to change anything.
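A Dockerfile along the lines just described might look like this; the JDK package and the jar file name are assumptions (they depend on your project's artifact name and the Java version you target), so adjust them to match your build.

```dockerfile
# Dockerfile at the project root: CentOS 7 base, JDK installed, executable jar copied in and run on start
FROM centos:7

# install a JDK inside the image (package name depends on the Java version you target)
RUN yum install -y java-11-openjdk-devel

WORKDIR /app

# copy the repackaged executable jar from the Maven target folder (jar name is an assumption)
COPY target/spring-demo-0.0.1-SNAPSHOT.jar /app/app.jar

# the service listens on 8080 inside the container
EXPOSE 8080

# run the jar automatically whenever the container starts
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```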
Now, the first thing we need to do is generate the jar file itself, and you might think it is generated just like when you do a Maven install, but we need a different command: mvn package spring-boot:repackage. So what is the difference between a simple package and repackage? When you do mvn package, it generates a jar file, but that jar cannot run on its own in a container, because it does not know which main class to start executing. When I use the repackage goal, it sets my launcher class as the starting point of the application, so whenever you need to dockerise your application, you need to run this Maven repackage command. In our pom file we actually have the spring-boot-maven-plugin defining a repackage execution, and it says the main class should be this one, so the jar knows what to run when it starts. So what happens when I type docker-compose up: because my-service depends on db, the db is initialised and started first; then this service starts, it builds from the Dockerfile, copies the jar file, exposes the port, and we run two services now instead of one. Now if I do docker-compose up, it creates those two things, and you will see that I have two running services, one for Postgres and one for the spring demo. This is the URL; if I refresh it I can still reach it, and I can still get a task by ID. My service is the same, but now it is running inside an image based on CentOS 7, and whenever I run it I can run the DB and the service together; with one docker-compose I can trigger as many services as I want. If I use the command docker image ls, you will see my system now has this image for spring-demo, the service that I defined, plus one for Postgres and one for CentOS. That CentOS is the base image for my Spring service, so my system now has a base CentOS image and a spring-demo my-service image. What I can do now is take this image and deploy it on any VM wherever I want to run my application, and it does not matter what version of Java or what flavour of OS that VM runs, because everything is packed inside the same image, which is based on CentOS, and port 8080 is exposed to the external world so everyone can still use the service. We have dockerised everything, and it is a very powerful thing. Also, if I run the command docker container ls, it shows that two containers are running, one based on the postgres image and one based on the spring-demo my-service image. It also shows that one container runs with the java -jar command and the other with the docker-entrypoint command, and it shows the ports they are running on. I would highly suggest going through these common Docker commands, because they are very useful and most companies are moving towards container orchestration, so Docker or some other container tool is very important to learn alongside your microservice architecture, or in any other domain nowadays. The only way to get to Kubernetes and the tooling at the level above it is by going through containerisation, and this is one of the easiest ways of doing it. So now, for example, I have this image, spring-demo my-service, and I can just upload it anywhere; companies usually use a centralised repository such as an artifactory. I can upload this image there, anyone can pull it from there, just like we pull Postgres in our docker-compose, and any person can then use that image on any VM, or use this service as a producer or consumer to work with their own.
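Putting the pieces together, a docker-compose.yml along the lines described in these two lectures might look roughly like this; the image names, credentials, database name and datasource URL are assumptions, so treat it as a template rather than the exact course file.

```yaml
# docker-compose.yml at the project root: database plus the dockerised microservice
version: "3"
services:
  db:
    image: postgres
    container_name: db
    ports:
      - "5432:5432"                     # host port : container port
    environment:
      - POSTGRES_DB=demo_task           # database name is an assumption
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      # schema.sql is executed when the database container is first initialised
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql

  my-service:
    build: .                            # build from the Dockerfile at this path
    container_name: spring-demo
    depends_on:
      - db                              # make sure the database is started first
    ports:
      - "8080:8080"
    environment:
      # override the localhost datasource from application.properties:
      # containers reach each other by service name and internal port
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/demo_task
```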
This consistency is a gift of Docker, and I highly recommend going through the official Docker documentation to develop more understanding. But in the context of this course, this is all you need to dockerise your Spring Boot microservice, and now your service is consistent on every VM, whatever flavour of OS you are running. Right, that is all for the lecture, and I'll see you guys in the next one. 12. Monitoring and Conclusion: Hello guys, and welcome to the last lecture of our microservice and Spring Boot series. This lecture is about monitoring. When we spin up our microservice, a successful GET on an endpoint returns 200, and an unsuccessful one returns 500, internal server error, or sometimes 404 if the resource is not found. First of all, you should go through the list of HTTP status codes so you are sure which one to use in which scenario: you should be very clear about when to use 200, 201, 400, 404, 500 and so on, and what they mean. Secondly, in a medium-sized company, when a lot of microservices are spun up, we usually need some sort of monitoring. We need to know the ratio of successful to unsuccessful calls on an endpoint, the ratio of successful to failed consumption between services, the uptime of the services, and all that kind of information: how much heap the services are using, how much CPU in the data centre they are consuming, whether there is something very inefficient in them. There is a huge range of metrics we want to monitor, and there are plenty of fancy tools. Two such tools that you will see a lot in industry, especially with Spring Boot, are Prometheus and Grafana. Prometheus is something we use as a data collector: it is like a source which gathers data from your Spring Boot application, and Prometheus can display some very basic graphs as well. Grafana is what displays very comprehensive and detailed graphs about a microservice. So we will use Prometheus for data gathering, and we will connect Grafana to Prometheus for visualisation. First of all, the question is: where does Prometheus get its data from? If we go back to our docker-compose, we had two services in it, one for our my-service and one for the db, and now I have added two more. One is called grafana, which is running on port 3000, and it depends on prometheus, because the data collector is supposed to be up before the data visualiser; Prometheus is running on port 9090. If you look at this line, I am mounting prometheus.yml from the current path, a file I created at the root of my project; it is the config file for Prometheus. In this file I am saying that Prometheus should scrape data from my service every five seconds, that it should read the data at this path, and the target here is the IP of my machine where it can reach the service. It is a very basic config, a very basic template file. Now the first question is: what is this actuator? First of all, let's run docker-compose up again, and you will see it now spins up four services instead of the two we were spinning up before. That is why Docker is so powerful: you can spin up as many services as you want.
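The prometheus.yml just described might look roughly like this; the job name and the target address are placeholders (the lecture uses the host machine's IP so the Prometheus container can reach the service), and the five-second scrape interval is the value mentioned above.

```yaml
# prometheus.yml at the project root: scrape the Spring Boot actuator every five seconds
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: "spring-demo"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["192.168.1.10:8080"]   # placeholder: IP (or hostname) where my-service is reachable
```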
If I do docker container ls, I have Grafana, Prometheus, Postgres and my-service: four services on these ports, just beautiful, and you don't need to put any effort into wiring these ports up yourself. Now, if I go to the browser and hit 8080/actuator, you will see all this information. The actuator is something you enable from your Spring application: if I go back to application.properties, this management.endpoints.web.exposure line is what tells Spring that I am exposing my actuator endpoints; that is where it comes from. Now, for Prometheus to get its data, I need to add a few more lines to application.properties: I am saying that the Prometheus endpoint should be enabled, so Prometheus can scrape from the actuator. The actuator endpoints are what Spring uses to expose metrics and the health-related checks of a microservice. I will do a docker-compose down, and now I need to repackage again, because I have changed something in my configuration, and it is important to repackage so the new jar includes these Prometheus settings. These are all the settings you use in your Spring microservice configuration, and Prometheus itself is running in a Docker container. Once I have generated the new jar, I do docker-compose up and then check the actuator endpoint to see whether the prometheus endpoint is there or not. So, docker-compose up: four services running again. Let's refresh, and here is my prometheus endpoint under the actuator; this is the path I defined in prometheus.yml. If I go to this endpoint, you will see it gathers all the information, all the way from CPU usage to heap to which endpoints are being hit. For example, if I hit an endpoint, the actuator makes that visible and Prometheus, every five seconds, records that the endpoint has been hit, and it appears here. This data source gathers everything: for example, this is all the history of hits to /actuator that were successful. So you will see the /users/tasks request I have just made; it all keeps flowing into this data source at regular intervals, and that is how it works. Now, because we are running Prometheus on port 9090, if I go to my browser and hit 9090, this is the Prometheus UI. You can search for metrics here, and it shows you whatever metrics are available if you have configured them; by the way, you do need to go through these metric names to learn what they actually mean. If you read the documentation, you can keep adding different metrics here and they begin to appear with a graph. But this is not a very nice graph, and it is difficult to search for different parameters if you don't know the exact name, so we do not use Prometheus for visualisation; we use Grafana. In our setup Grafana is running on port 3000, so I hit port 3000 and it takes me to the Grafana login with the default credentials. The first thing I need to do here is connect my data source, where my data is coming from.
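Before wiring up Grafana, it is worth recapping the properties mentioned above. A typical set of entries for a Spring Boot 2.x application, assuming the micrometer-registry-prometheus dependency is on the classpath, looks something like this; the exact lines used in the course may differ.

```properties
# application.properties: expose the actuator and enable the Prometheus scrape endpoint
management.endpoints.web.exposure.include=*
management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true
```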
I go to Data Sources and add a data source. It's Prometheus, and it is running on http://prometheus:9090. Let's see if it can reach it. So it gets it. Now, observe one thing: I did not put localhost here, because Grafana and Prometheus are both running as containers, so neither of them can reach the other through localhost. Instead I use the name of the service, just like I did between my service and the Postgres. Save and test: all good. Now what I can do is import a dashboard, so you don't actually need to create a dashboard from scratch; the Grafana community has plenty of ready-made ones. For example, I can search for a JVM Grafana dashboard and it will show you so many options. You can just copy its number, paste this number here, load it, and select Prometheus as the data source where the data is coming from. I can import it and, look at this, how beautiful is this? Now, every five seconds, Prometheus will keep on scraping the data from your Spring Boot actuator, and your Grafana will keep on showing it. It will take ten to fifteen minutes to actually start populating the fields, but if you try it on your machine, you will see that after ten to fifteen minutes you will have amazing detail about JVM memory, about I/O, about memory pools, how much heap you are using, what the efficiency of your garbage collection is and how your buffers are working. Because this is a JVM dashboard, all this information is related to the JVM. If you look for other dashboards that are related specifically to REST services, to see how many 200s and 500s you have, you can just search the Grafana community, and there is a dashboard available for every possible thing you can think of. So you can import them and, after importing, you can also customise them. Now I can hit some endpoints: for example, I can hit my GET endpoint, and I can hit my select-by-ID endpoint. I think it still needs more time to scrape some data, but once you try it on your machine, once you hit this URL of Prometheus, make sure that everything is going according to plan and it will keep on collecting them. So for example, you see again these two 200 endpoints are here. It takes some time for Grafana to start showing your JVM and CPU processing, because it processes the heap and JVM based on some expensive operations. After that, your dashboard populates and you can just leave it running. In the real world, we actually deploy our dashboards on a different VM that we just call the monitoring VM, and we run the data sources, like Prometheus, there; another very popular one is called Graphite. So Graphite and Prometheus usually run on some VM that grabs data from all the microservices, and then a different VM, which is called the visualisation VM, runs Grafana or something similar that gets data from multiple data sources and shows different graphs. So rather than investigating services by SSH-ing into VMs and looking into different places, you can have one very concise and precise dashboard that you can use to monitor all this information, right? And it is even more beneficial if you use a dashboard which is related to HTTP statuses or the service status. By the way, if I hit 8080/actuator, there is also a health endpoint. If I hit that, you'll see we get a status. This is a very simple endpoint that is used to tell the monitoring people whether the service is up or not. Similar to this, you can explore all these endpoints and their purpose.
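To give a flavour of what this looks like in practice: the health endpoint returns the standard Spring Boot actuator payload, and a typical HTTP-status panel in Grafana is built on the http_server_requests metric that Micrometer publishes. The query below is only an illustrative example; the exact labels and panel names depend on the dashboard you import.

    # Health check exposed by the actuator
    GET http://localhost:8080/actuator/health
    {"status":"UP"}

    # Example PromQL: request rate per HTTP status over the last five minutes
    sum by (status) (rate(http_server_requests_seconds_count[5m]))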
And you can decide which of these are useful, as per what you need from a service. So that is all for this lecture, and that is all for this course. Hopefully it was useful. I will add a project that you can attempt yourself by taking a template from GitHub. To recap, we started from the very basics of Spring Boot, and we learned how to write a three-layered application, what a transformer is, how to do integration testing, how to do logging, how to do code coverage, how to Dockerise your application and how to monitor your application. Hopefully it was useful and you learned something. Spring is amazing and its employment opportunities are huge, at least for Java developers. So good luck to all of you, and I will speak with you guys in some other course. Thank you very much and goodbye.