Architecting an ASP.NET Core Application | Trevoir Williams | Skillshare


Architecting an ASP.NET Core Application

Trevoir Williams, Jamaican Software Engineer


Lessons in This Class

41 Lessons (8h 34m)
  • 1. Introduction (1:30)
  • 2. Understanding Clean Architecture (7:03)
  • 3. What We Will Be Building (4:22)
  • 4. Setting Up Solution (2:55)
  • 5. Creating The Domain Project (6:08)
  • 6. Creating the Application Project (7:14)
  • 7. Implementing AutoMapper (12:19)
  • 8. Create Queries with MediatR (16:18)
  • 9. Finishing up Queries for MediatR (7:42)
  • 10. Create Commands with MediatR (12:35)
  • 11. Finishing up Commands with MediatR (21:39)
  • 12. Adding Validation (27:27)
  • 13. Adding Custom Exceptions and Response Objects (13:14)
  • 14. Additional Refactoring and Considerations (9:39)
  • 15. Section Overview (1:46)
  • 16. Adding Entity Framework Core (5:37)
  • 17. Implementing Persistence Layer (15:27)
  • 18. Add Infrastructure Project (Email Service) (15:59)
  • 19. Create and Configure Application API (11:44)
  • 20. Implement Thin API Controllers (14:56)
  • 21. Finishing up Thin API Controllers (4:29)
  • 22. Seed Data In Tables (4:47)
  • 23. Review Swagger API Support (8:26)
  • 24. Unit Testing - Section Overview (3:23)
  • 25. Write Unit Tests for Application Code (28:25)
  • 26. Setup ASP.NET MVC Project (2:14)
  • 27. Use NSwag for API Client Code (8:35)
  • 28. Setup Custom API Clients and Base Code (14:18)
  • 29. Setup Leave Type Management Service (12:11)
  • 30. Setup Leave Type Management UI (27:55)
  • 31. Add JSON Web Token (JWT) Authentication to API (36:55)
  • 32. Add Authentication to Web Project (30:35)
  • 33. Setup Leave Allocation Management (18:50)
  • 34. Setup Leave Request Management - Part 1 - Employee (20:53)
  • 35. Setup Leave Request Management - Part 2 - Admin (24:29)
  • 36. Unit Of Work For Batch Operations (8:40)
  • 37. API Exception Handling (13:36)
  • 38. Handling Token Expiration (8:40)
  • 39. Handling Token Expiration (13:51)
  • 40. Improve Data Auditing (5:54)
  • 41. Conclusion (1:03)


36 Students
-- Projects

About This Class

Overview

Creating a modular, testable and maintainable application in .NET Core requires a solid foundation. Setting up an application architecture requires foresight and much consideration as early decisions will impact how easily the application is extended and maintained.

In the long run, applications need to be maintained and, in this case, extended. Between the original design and the way the code was written, neither is easy here, so the application needs to be redesigned and future-proofed.

Why SOLID Architecture?

Architecting an application around SOLID principles isn't a straightforward task. Decisions made early in the process have a large impact later on, and maintainability and testability play an important role. Adopting these practices also helps you avoid code smells, simplifies refactoring, and facilitates more efficient agile development.

SOLID stands for:

  • S - Single-Responsibility Principle

  • O - Open-closed Principle

  • L - Liskov Substitution Principle

  • I - Interface Segregation Principle

  • D - Dependency Inversion Principle
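As a tiny illustration of the last principle (and of single responsibility), a high-level class can depend on an abstraction rather than a concrete implementation. This is a sketch with invented names, not code from the course:

```csharp
using System;

// Abstraction the high-level code depends on (Dependency Inversion).
public interface IMessageSender
{
    void Send(string to, string message);
}

// One concrete low-level implementation; could be swapped for SMS, etc.
public class ConsoleEmailSender : IMessageSender
{
    public void Send(string to, string message) =>
        Console.WriteLine($"To {to}: {message}");
}

// High-level policy class: knows nothing about how sending actually works.
public class LeaveRequestNotifier
{
    private readonly IMessageSender _sender;

    public LeaveRequestNotifier(IMessageSender sender) => _sender = sender;

    public void NotifyApproved(string employee) =>
        _sender.Send(employee, "Your leave request was approved.");
}
```

Because `LeaveRequestNotifier` only knows `IMessageSender`, swapping in a different sender later requires no change to the notifier, which is the point of the principle.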

In this course, you explore foundational architectural principles that help with the creation of maintainable code. Next, you discover how to set up a real-world application architecture with ASP.NET Core. Then, you'll learn how to plug in common building blocks such as email and authentication, and build a foundation for plugging in other third-party services as needed.

When you're finished with this course, you'll have the skills and knowledge needed to architect real-world, enterprise .NET Core applications that are testable and maintainable.

Build A Strong Foundation in .NET 5 Clean Architecture:

  • Learn Clean or Onion Architecture and Best Practices

  • Learn Command Query Responsibility Segregation (CQRS)

  • Implement the MediatR Pattern

  • Add Email Service using SendGrid

  • Domain-Driven Design approach to software architecture

  • Efficient Exception Handling and Routing

  • Implementing Unit Testing

  • Global Error Handling with Custom Middleware and Exceptions

  • Adding Validation Using FluentValidation

  • Build a .NET Core API and MVC UI Application

  • Implement JWT (JSON Web Token) Authentication
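To give a flavor of the CQRS and MediatR items above: a query is a plain request object paired with a single handler. The sketch below hand-rolls the two interfaces that the MediatR library provides (`IRequest<T>` / `IRequestHandler<TRequest, TResponse>`) so it runs without the package; the query and handler names are illustrative, not the course's exact code:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Stand-ins for MediatR's IRequest<T> and IRequestHandler<TRequest, TResponse>.
public interface IRequest<TResponse> { }

public interface IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    Task<TResponse> Handle(TRequest request);
}

// Query: a message describing what the caller wants; it carries no logic.
public class GetLeaveTypesQuery : IRequest<List<string>> { }

// Handler: the single place where that query is satisfied.
public class GetLeaveTypesQueryHandler
    : IRequestHandler<GetLeaveTypesQuery, List<string>>
{
    public Task<List<string>> Handle(GetLeaveTypesQuery request) =>
        Task.FromResult(new List<string> { "Vacation", "Sick" });
}
```

Commands follow the same shape, just for writes instead of reads; that one-request-one-handler pairing is what keeps the code segregated.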

Content and Overview

To take this course, you will need to have some knowledge of .NET Core development and C#.

This is a huge course. Over 10 hours of premium content, but smartly broken up to highlight a set of related activities based on each module in the application that is being built. We will also look at troubleshooting and debugging errors as we go along; implementing best practices; writing efficient logic and understanding why developers do things the way they do. Your knowledge will grow, step by step, throughout the course and you will be challenged to be the best you can be.

We don't do things the perfect way the first time; that is not the reality of writing code. We make mistakes, point them out, and fix them as we go. By doing this, we develop proficiency in using debugging tools and techniques. By the time you have finished the course, you will have moved around in Visual Studio and examined logic and syntax errors so much that it will be second nature for you when working in the .NET environment. This will put your newly learned skills into practical use and impress your boss and coworkers.

The course is complete with working files hosted on GitHub, with the inclusion of some files to make it easier for you to replicate the code being demonstrated. You will be able to work alongside the author as you work through each lecture and will receive a verifiable certificate of completion upon finishing the course.

Meet Your Teacher


Trevoir Williams

Jamaican Software Engineer




Transcripts

1. Introduction: Hey guys, welcome to my brand new course, ASP.NET Core SOLID and Clean Architecture. I'm your instructor, Trevoir Williams. I have over 10 years of experience as a software engineer and a lecturer. In this course, we're going to be looking at quite a few things. We're going to look at implementing SOLID principles and clean architecture in an ASP.NET Core application. Along the way, we're going to work with advanced tools like MediatR, AutoMapper, and Fluent Validation, and we'll be building global exception handling and logging. We will be looking at the CQRS pattern, which is a wonderful pattern that keeps our code segregated and in bite-sized pieces for maximum integration and extensibility. At the end of this course, we're going to understand how to do unit testing, how to integrate third-party services into an application, and look at how we can deploy it for production purposes. Now, the requirements for this course include Visual Studio 2019 and .NET 5, or the latest version at the time you are doing this course. Everything we're going to be doing is future-proof, and these principles can be transferred to the latest versions with no problem. To get the most out of this course, I do recommend that you have some amount of C# and .NET programming knowledge, as well as database development knowledge. Either way, I will make the content very beginner friendly, and you should have absolutely no problem following along, knowing that you have all the information you need. I'll see you soon in the course. 2. Understanding Clean Architecture: All right, so we've looked at the principles that govern clean application architecture. Now let us look at what exactly we mean by clean architecture, because clean architecture doesn't necessarily mean good software. Good is very relative to who is viewing it.
For instance, if you were tasked with developing a leave management system for the HR department and you did it, and HR is happy, then it's good software. However, if you didn't do it following these principles, then it could be seen as bad software, or as bad software development practice in general, by your team or whoever your successor is when they're trying to maintain this application. So clean architecture is not necessarily directly proportional to good software; it depends on who the recipient is and at what point. So let us just discuss everything that goes into software development and the different types of architecture, so that we can fully appreciate when we would need to step up our design considerations a bit more, and what the pros and cons of this are. The first one that we want to look at would be all-in-one architecture. An all-in-one architecture is easier to deliver; it can be stable and seen as a long-term solution. An all-in-one architecture could easily be what we get when we scaffold a brand new ASP.NET Core application. Or, if this is your first time doing ASP.NET Core, you're probably more familiar with Ruby on Rails or Laravel or Django. Once you have that general project layout, that could easily be seen as an all-in-one architecture. You have all the folders, and all you have to do is create your files in the right folders; that's the separation of concerns right there, and everything is good; the software will work. However, it is definitely difficult to enforce SOLID principles. The more you have to pack into this all-in-one architecture, the more code smells and bad practices might have to be compromised on. So as it grows, it becomes harder to maintain, and its testability suffers. When we say testability, we mean with debugging and, by extension, unit testing: the ability to test the units of this all-in-one architecture. That ability diminishes as the application grows.
Next up, we want to look at layered architecture. A layered architecture is a step above all-in-one, where instead of separating our modules, or our chunks of code and files, by folders, we start looking at creating projects and referencing these projects. In ASP.NET Core, and in .NET applications in general, you can develop class libraries where you put all of your classes, and you just make reference to them as you need to. And then you end up with different layers: you have a layer that deals with the UI stuff, you have a layer that deals with the business logic, and you have a layer that taxis the information between the database and the business logic. And then you end up with a layered application. Now, it's definitely easier to enforce SOLID principles, and it's easier to maintain larger code bases. However, this still acts as one application, because all of these layers are still somewhat dependent on each other. So we're still not fully taking advantage of the whole dependency inversion principle, as well as the whole loose coupling principle. And now, finally, we're looking at the onion architecture, which is largely touted as clean architecture. When we talk about onion architecture, we're talking about a different way of seeing layers. At this stage, we want to support any kind of application. We want to have everything very modular so we can put in a new module without disrupting the rest of the code base. We can take out a module, and we can make changes and test more easily without disrupting existing parts of the application. So when we talk about the onion architecture, we're talking about maybe having some client application for the user interface; this could be an ASP.NET web application, this could be a mobile application. We're talking about also having API services that will sit on top of the application core. And these API services, once again, would be open to any kind of client.
It doesn't really care what the client is, and REST is what we'll be using. It doesn't care if it's a mobile client, it doesn't care if it's a web client, a Blazor client, an Angular client; it's just taxiing data from the database to whoever is listening, and back. Now, testing becomes much easier to do, because we can test the different layers independently, and then we can test how the layers interact with each other. So if anything is introduced that breaks any of these existing interactions, then we can detect it more easily, and from early on. Now, for the infrastructure, we're talking about the modules. So we're talking about logging, data access. If we have more than one database that we have to be dealing with, we should be able to put in a brand new layer to talk to another database without disrupting anything that the existing architecture was doing with the old database. We can put in logging, we can put in mapping, validations; everything can go in and out of our code without disrupting our application. Cons, however: there is a learning curve, because you need to know what goes where, and there are different interpretations of this kind of architecture. So you may see different flavors; if you read two or three different sources, you're going to see two or three different implementations of it, and you may wonder which one is the one you should use. Of course, context determines a lot of what you and other developers do and the decisions you make in your architecture. So you just have to get the knowledge and use your context as your guide. Another con to this kind of architecture is that it can be very time-consuming, because there are a lot more files involved; where we could do something with two files, now it will probably require five. However, once again, the benefits would be that we have loose coupling and more testability. So with all of that said, be careful.
Not every application needs, quote unquote, clean architecture. Good software meets the business needs, and maintainable software increases the lifespan of the software. So you have to strike a balance at all times. Don't assume that every project you have to do is going to need clean architecture; start small and extend as needed. 3. What We Will Be Building: Welcome back. In this video, we'll be taking a look at the application that we want to build, or rebuild. On my screen I have a leave management system that was built with ASP.NET Core 3.1, and you see all of the things that were implemented inside of this solution. Now, it looks good and it does what it was designed to do: it delivered leave management capabilities to the HR department. You can find the source code for this application on my GitHub account; look for the repository leave-management-dotnet-core. In fact, this is the application that was built in my course, Complete ASP.NET Core and Entity Framework Development. While it was perfect for beginners, it's strewn with a lot of code smells and inefficient ways of writing the code, which a beginner can pick up on. But then as you get more intermediate to advanced, you'll start realizing that there are better ways to do these things, and that's exactly what we'll be discussing in this course. Now, let's just take a look at the project structure. Once again, it is completely optional to download it; you can just follow along, but I will be showing you some of the weak points of the architecture and some of the decisions that were made during the development of this application. One, you will see that this only has one project. If you remember earlier, when we looked at the different kinds of project architectures, this would fall right into that all-in-one architecture, where everything is just in one project, easy to access.
So once again, for a beginner that's fine, because if you want views, you go to the views; if you want data, you go to data. It's all there. Separation of concerns at this point is limited and restricted to just folders. However, in the bigger picture, when you want to add more things, this project can get very fat as you try to extend it, so it will be good to break it out. Another thing, or code smell, that we would want to address is repetition of code. Even in building our data models, you'd see that we kind of repeat some of the properties: we have Id there, we have Id here again, those little things. In some of the controllers, we repeat code when we don't necessarily need to. And while we're on the topic of the controllers, there are what we'll call fat controllers here. We're doing a lot of business logic, a lot of processing, right here in the controller, which makes it very heavy. So you want to abstract those business logic processes away from your controllers. Now, outside of the development reasons for redesigning and re-architecting this application, from a business perspective: one, HR wants to extend this application, and the way it's built right now it is fairly difficult to do that without upsetting existing code. Two, it's not very testable in its current state. So even if we try extending it, we are going to have more rounds of testing, because we have to do more regression testing and manual testing on it. And three, when we do it in a clean architecture way, we actually have the option to extend it to different kinds of client applications, and potentially deploy it as software as a service, where the API is carrying out all of the business logic and interaction with the database, and the client application is completely anonymous to it. This client application could have been an MVC application, it could have been a Blazor application, it could have been a mobile app.
So that is one of the value proposition reasons for why clean architecture may be implemented. Going back to what we said earlier, good software is software that does its job. Right now, this is good software: the HR department is quite happy with it and how it does what it needs to do. However, when they start asking for more things, we realize on our technical side that it becomes a bit more difficult to extend it, to modify it, to test it. So we are re-architecting it so that we can still deliver all the functionality that the HR department wants, while retaining the ability to maintain it in a very efficient manner. In our next lesson, we'll start setting up the solution for our new architecture. 4. Setting Up Solution: Welcome back. In our first practical lesson, we're going to be setting up our solution. I have Visual Studio 2019 open. If you don't have Visual Studio 2019 already installed, you can easily go to visualstudio.microsoft.com/vs and download the Community edition. The Community edition is free for educational and individual usage. Of course, if you're a commercial user, then you're expected to get the Professional or the Enterprise version. Once you have downloaded it, you will be presented with an installer, and what you really need from this installer, at the very least, is the ASP.NET and web development workload. By ticking it, you're indicating that you want that workload. You'll see all the workloads available; you can tick them if you wish. Of course, the more you tick, the more you need to download. But for this course you really just need this one, which will give you all of the tools needed for any ASP.NET web application, as well as .NET 5. Once you've completed that setup, or if you already had it installed, continue along with me, and we'll go to Create a new project. What we're going to do is create a blank solution. I have it in my recent project templates.
But if you don't, you can always search for Blank Solution in this area, and then you'll get an empty solution containing no projects. That's what we want, and we're going to be calling this project HR.LeaveManagement. Now, there's no particular reason for me to call it this; it's a leave management application being built for the HR department. If it was being built for a company, then you probably would want to say the company name, dot, the type of system. So if you want, you can go ahead and use my name or change that name accordingly. But once you have put in a name that is valid, you can go ahead and click Create. Now, we won't be writing any code in this lesson, but we will be putting in the project structure as we need it for the duration of this course, or this whole project. The first thing I'm going to do is add a folder, and I'm going to call this one src, short for source, and another folder that we're going to call tests. Now, under src, I'm going to add a few more folders. I'm going to call this one API, because that's what will interact with our client application. I'm going to have another one called Core, one more called Infrastructure, and finally, one for the UI. So that's all we're doing for this lesson; we're just setting up the project structure. When we come back, we'll start creating the different projects that will go in each of these sections. 5. Creating The Domain Project: All right guys, welcome back. In this lesson, we'll be creating our domain project. Our domain project will go underneath our Core folder. Inside of our solution, we can go ahead and add a new project, and what we're going to be doing is adding it as a C# class library of type .NET Standard. .NET Standard allows code sharing across many different platforms, so we'll just use that one for all of our class libraries.
And we're calling it HR.LeaveManagement.Domain, since it will be used to store all of our domain objects, or entity classes, that get translated into our database tables. So I can go ahead and hit Next. The target framework here is .NET Standard 2.1. Go ahead and hit Create. Once that is done, we get our default Class1, which we can actually delete, and then start populating with our classes. Going back to our project on GitHub just quickly, you'd see that the domain classes were held inside of the project in a folder called Data. Our domain classes included LeaveAllocation, LeaveRequest, and LeaveType. If you want, you can go ahead and retrieve them, but I will be going through them anyway. If you just take a quick look at these classes, you'd see that each one had an Id, and we also had a bunch of properties in each one. Properties that kind of repeat would include DateCreated, between the LeaveType and the LeaveAllocation record. And then it's inconsistent, because LeaveRequest doesn't really have that. So we'll be fixing some of those inconsistencies, as well as reducing the repetition between them. I'm going to start off with LeaveType. I'm just going to go to the project, click Add, New Class, and we're calling it LeaveType, and I'll make this public. Now, inside LeaveType, what we're going to have are fields for Id, Name, DefaultDays, and DateCreated. I did specify that a date field like DateCreated is really an audit field, and this needs to be found across the different classes. I'm just going to go ahead and add all the entity classes, so you can pause and replicate that. For the LeaveRequest, we're going to have an Id field, StartDate and EndDate, LeaveType, the DateRequested, RequestComments, DateActioned, Approved in the form of a nullable boolean, and Cancelled in the form of a boolean. So you can go ahead and replicate that.
Now, if you're comparing this with what is on the GitHub reference, there are some fields missing which we're not quite ready for just yet, in the form of the employee, which would be some user-related data. We don't have that yet, so we're not prioritizing it; we can introduce that later on through some migrations. Now, the third entity that we're going to add is LeaveAllocation, and for this one we're going to have the Id, the NumberOfDays, the DateCreated, LeaveType, LeaveTypeId, and Period. Once again, I've kind of omitted the employee-related fields, as well as those data annotations. Now, in terms of repeating fields versus the need to have consistency, at least these do need to be repeated, namely the DateCreated. Usually when we're talking about auditing, we want to have the date created, who created it, the modified date, and who last modified it, right? That way we can keep track of everything happening across the board. As a matter of fact, we would actually solve some of the things with the whole referencing of the employee and so on with these auditing fields. So what I'm going to do is introduce another folder in this domain project, and I'm going to call this folder Common. Then, inside of Common, I will have a class that I'm going to call BaseDomainEntity. Now, my BaseDomainEntity will have all of the base fields that I know every class, or every domain entity, is going to need. I'm going to make this an abstract class so that it cannot be instantiated on its own; however, it can be inherited by everyone else. This is going to be Id, because every table is going to need an Id. I'm also going to have four other fields: one for the DateCreated, then CreatedBy, then a DateTime for the last modified date, and another one for LastModifiedBy. Now, with this BaseDomainEntity, I can let everybody else inherit.
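The base entity and an entity inheriting from it, as described above, might look like the following. This is a sketch following the transcript's naming, not the exact course code:

```csharp
using System;

namespace HR.LeaveManagement.Domain.Common
{
    // Abstract base class holding the audit fields every entity repeats.
    // Abstract so it cannot be instantiated on its own.
    public abstract class BaseDomainEntity
    {
        public int Id { get; set; }
        public DateTime DateCreated { get; set; }
        public string CreatedBy { get; set; }
        public DateTime LastModifiedDate { get; set; }
        public string LastModifiedBy { get; set; }
    }
}

namespace HR.LeaveManagement.Domain
{
    using HR.LeaveManagement.Domain.Common;

    // Inherits Id and the audit fields, so only LeaveType-specific
    // properties remain here.
    public class LeaveType : BaseDomainEntity
    {
        public string Name { get; set; }
        public int DefaultDays { get; set; }
    }
}
```

LeaveRequest and LeaveAllocation would inherit from the same base class, which is what removes the repeated Id and DateCreated properties.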
I can just go back to my actual entity class and let it inherit from the BaseDomainEntity, including any missing references using Ctrl+Dot. And there you see that the fields I have inside LeaveAllocation that have the same names as the ones in BaseDomainEntity are lighting up. That means I don't need to repeat them. All right, that looks a little cleaner. So I go to LeaveRequest and I do the same: go ahead and inherit, include the missing references, and then all of a sudden I don't need to see Id over here either. And then in LeaveType, I do just the same thing, including missing references, and then I can remove all the repeated fields. So our BaseDomainEntity gives us access to certain fields that are expected to repeat across every entity, as we want all of them to have these. All right, so with all of those changes made, let's go ahead and hit Ctrl+Shift+B to do a build, and we have a successful build, so we can move on to the next task. 6. Creating the Application Project: Hey guys, welcome back. In this lesson we'll be discussing our application core layer. The purpose of this layer is to sit in between our application (application being whatever wants to access the database) and the database itself. So in this layer we'll be defining all of those access parameters to mediate between the calling application and the database. This project will sit directly inside of our Core folder. So let's go ahead and add a new class library, and we're going to be calling this one HR.LeaveManagement.Application. Go ahead and add it; once again, we're using .NET Standard 2.1, and go ahead and create. Now that we have our class library, I'm going to remove the default class, and then I'm going to add two new folders to this project. The first one is going to be Persistence, and then inside Persistence, we're going to have another folder called Contracts.
Inside of Contracts, we are going to be defining all of the abstractions for our repositories. Now, the jury is out on the best ways to implement this. Once again, there are a few opinions on the proposed architecture, and some opinions get implemented based on context. But I'm just going to explain why we're using the repository pattern. One, in keeping with the DRY principle, we don't want to repeat queries too often. Also, you might have specialized operations that you need to carry out on any one of these domain objects. So when you define a repository per domain object, then you can have custom code, or customized methods, inside of that repository for these. Right now, we're not implementing the repository though; we're just defining the contract. The first contract here will be an interface, and I'm going to call it IGenericRepository. I said class here when creating it, but even if you did the same, you can just make it a public interface. Now, we're going to tell this interface that it will be implemented relative to a class called T. We're just using generics, in keeping with the name generic repository, so that any one of our domain objects can be used to access database-related functionality through this interface. Some of the methods that we're going to implement in this interface include a Get method, so we have Task of T, and then you can just go ahead and include any missing libraries. We're also going to have a Task for GetAll; what this one is returning is an IReadOnlyList. Now, if it's the first time you're seeing this data type: usually, and personally, I use a List or an IEnumerable or one of those more popular collection data types. IReadOnlyList's benefit is that it keeps the data in a read-only state, so it cannot get modified once it has been pulled from the database and sent over. So, less tracking and fewer database-related operations after the data pull.
We'll look at it in a few, but that's the data type I'm using; once again, you could easily have used any other collection type of type T. It could have been a List, could have been a Collection, but IReadOnlyList is serving the purpose I want it to. We're going to have a Task for Add, and we're also going to have Tasks for Update and Delete. So you see that the generic repository is not only generic for T, but it's also generic in terms of the functionality it provides, which are the basic and standard CRUD functions. There's also a naming convention where people would put on the word Async, because you can see we're all using Tasks here. So sometimes you would see these defined with an Async suffix, so it's obvious that the method is going to be an async function. You can follow that naming convention if you want; I am not going to, so we can move forward. Now, the thing with the generic repository is that, once again, it's just the generic CRUD functions that every single database table may ever need. However, when you have something specific to a leave allocation operation or a leave request operation, you can't build those out there, and the code might get messy if you try to build them in certain parts of the application. So what we're going to do is implement specific repositories based on our generic repository, to extend our generic functionality but specific to any one of our domain objects. I'm just going to go back to Contracts and add a new item, in the form of a class once again, but it's really going to be an interface, and I'm going to call it ILeaveRequestRepository. Once that's created, we make it into a public interface, and then I let it know that it is extending IGenericRepository, and it is specific to our class type of LeaveRequest. LeaveRequest is our domain object; remember that this was relative to type T.
So this is the T that we're passing in, but then you'll see here that it doesn't know anything about that entity. So we need to add a reference to the HR management domain project. In all honesty, the quick action hasn't quite worked for me when I use it, so I just go over to Dependencies. Let me do that again, sorry: Dependencies, Add Project Reference, and then I just tick the reference to the domain project, click OK, and then I can now include that using statement here. Alright, so inside of this repository, if we have specialized functions relative to only leave requests, then we can define them here, without modifying the generic repository and without the other specific repositories needing to see them; we want to keep everything contained in one place. So following this example, I'm just going to go ahead and define the other repositories, the ILeaveAllocationRepository and ILeaveTypeRepository. So once that activity is done, we end up with two other interfaces. We have our generic repository; we have our ILeaveAllocationRepository, which is implementing, or inheriting rather, our IGenericRepository relative to LeaveAllocation; and then we have that again for leave type, with ILeaveTypeRepository extending the generic repository relative to LeaveType. So that's really it for this lesson, where we're setting up the contracts, and those contracts are in the form of these interfaces. Because remember that we don't want to directly interact with the code when we're calling it; we're going to be dealing with abstractions. That way we can make code changes without disrupting too much of the code on the client-side application. 7. Implementing Automapper: Hey guys, welcome back. In this lesson we are going to be setting up AutoMapper. Now if you don't know what AutoMapper is, that's no problem. It is really just a library that helps us to convert from one data type to another. Simple, right?
So the context behind why it will be useful is that we'll be implementing the mediator pattern in a following video. But that mediator pattern will be acting as a messaging mechanism between what is coming from the client and what needs to go to the database. It is, generally speaking, bad practice to interact directly with the domain objects, at least before we get to the repository. That means we shouldn't be allowing any calling application, any client application that is talking to our ASP.NET Core application, to be sending us objects of the domain object type directly. Instead, we create abstractions. So we create data transfer objects or view models; you can call them either one, but they're really just classes that kind of look like the domain object, but have restrictions based on the operation. So an edit operation has different requirements from maybe a create operation, and so the ID would be present in one and wouldn't be present in the other. Once again, all you have are abstractions for that. You can get granular or you can keep it general; that's up to you. The point is, AutoMapper will help with the conversion between the abstracted or client-friendly version of this object and the database-friendly version of this object. This is going to promote loose coupling throughout our application. So with all that said, the setup is not going to be very complicated. For this one, what I'm going to do inside of this application project is create a new folder, and we're just going to call it Profiles. A profile is basically just a configuration file for AutoMapper to know that it is allowed to convert between one data type and another. So I'm going to create a new class inside of this folder, and I'm going to call it MappingProfile. And then I need to introduce, and I'm going to make this public of course, but I need to go to NuGet and get our AutoMapper library. So I'm just going to right-click and Manage NuGet Packages.
And then I'm going to search for AutoMapper, and the one I'm interested in is AutoMapper.Extensions.Microsoft.DependencyInjection. It will come with all the dependencies, including the base AutoMapper library and anything else needed, so it can be used for dependency injection accordingly. So I'm just going to go ahead and install that one, of course accepting and allowing any prompts that come up. Now once that is installed, inside of MappingProfile I'm going to let it inherit from a class called Profile, and this class is found in the AutoMapper library. Alright, so we have the profile set up; that's good. Well, the next thing that I'm going to define would be my data transfer objects. So I've created a new folder inside of this application project, and in this folder I have a few files. One, I have a Common folder with a common file called BaseDto. So similar to how, when we were setting up our domain objects, we had a Common folder and then the BaseDomainEntity, which had some base properties shared across all of those domain objects, it's the same way that I've just made BaseDto. And the common property would be Id. I don't need the date created and so on with the DTOs; that's just for auditing, so that stays in the background. What I really need shared across them all would be the Id. And then inside of each of these, which I haven't done yet, I will then have other properties that I know each of them needs to have in order to successfully carry useful information for me. So this part is as easy as going to leave type and taking the properties that you know you would want to allow your application or your users to be able to interact with. The same for leave requests; these are all required properties. As you see, I'm just literally copying and pasting. The hard part really was just defining the BaseDto and letting the inheritance put on that Id, and now DTOs talk to DTOs.
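As a sketch of what was just described, the base DTO and a simple DTO inheriting from it might look like this. The property names on `LeaveTypeDto` are assumptions based on the leave type entity discussed later in the course:

```csharp
// Shared base: only the Id is common across DTOs.
// Auditing fields (date created, etc.) stay on the domain side.
public abstract class BaseDto
{
    public int Id { get; set; }
}

// Mirrors the LeaveType domain entity, restricted to client-facing properties.
public class LeaveTypeDto : BaseDto
{
    public string Name { get; set; }
    public int DefaultDays { get; set; }
}
```

The inheritance is what "puts on that Id" for every DTO without repeating the property in each class.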
So you see here that LeaveType is being referenced in the leave request DTO, and LeaveType is being referenced in the leave allocation DTO. You don't let the DTO know about the domain. Alright, so this should be LeaveTypeDto, and this should also be LeaveTypeDto; as a rule of thumb, DTOs speak to DTOs. AutoMapper now is what will allow the conversion between the DTO and the domain. So inside of MappingProfile we're going to have a constructor; you can just say ctor, tab tab, and that generates our constructor. And then I'm going to have a line that says CreateMap. And CreateMap is expecting a source and a destination. So TSource would be, let's just say, LeaveRequest. That would be my source; that's the domain object. And LeaveRequestDto, that would be my data transfer object. Alright, so we're creating a mapping configuration between LeaveRequest and LeaveRequestDto, and we just close with the parentheses. And then I can extend this again and tell it that it can ReverseMap. And AutoMapper is very capable. I have seen applications where people put some amount of business logic in AutoMapper itself. When working with complex and probably cross-domain applications, you might want a DTO to be able to convert to a domain object on one side and a different domain object on another. So you could actually have multiple mapping configurations from one object to another. You could also create custom converters to say this member goes directly into that matching member on the other side. There are a number of things you can do with that; it could be a whole course by itself. But today we'll just keep it simple with our simple configuration requirements. So we have LeaveAllocation and then we have the matching DTO, and then finally we have LeaveType. Now I want to point something out; I actually alluded to it earlier, but I want to mention it again. There are times when you want to get a bit more granular, or have DTOs specific to a certain purpose.
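A minimal version of the profile just described might look like this sketch, assuming the domain and DTO class names used throughout the course:

```csharp
using AutoMapper;

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        // CreateMap<TSource, TDestination> registers domain -> DTO;
        // ReverseMap() also allows DTO -> domain for the same pair.
        CreateMap<LeaveRequest, LeaveRequestDto>().ReverseMap();
        CreateMap<LeaveAllocation, LeaveAllocationDto>().ReverseMap();
        CreateMap<LeaveType, LeaveTypeDto>().ReverseMap();
    }
}
```

With this in place, an injected `IMapper` can convert either direction for any of the three pairs without any per-call configuration.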
So let's look at leave requests. LeaveRequestDto has quite a few fields. The purpose of a DTO is to limit the number of fields, so we reduce over-posting or under-posting, or providing too much information to the user. So what I'm trying to say is that in the event that you want to see a single leave request, then all of this data becomes relevant. But in the event that you really just need to list them, or show a listing of all the leave requests, you probably don't need all this data; the user doesn't need to see all of it when they're requesting just the list. They definitely won't need to see the comments at that point, and they probably don't need to see the date actioned, approved, or cancelled when they are listing. So in that case, what tends to happen is that you probably want to break it out a bit more. So sometimes you would end up with a folder inside of the DTOs, and that folder would be specific to maybe leave requests. And then inside the LeaveRequest folder, you have all of the DTOs that differ. Right, so yes, LeaveRequestDto, that's fine; this one would be for the details. But maybe I have another DTO specific to listing, so you'd have LeaveRequestListDto, which, yes, would inherit the BaseDto also, but it has far fewer properties in it. You probably just need the leave type, the date it was requested, and whether it has been approved or not. Alright, so once again, this is all relative to whatever you think you need to provide. But I'm just saying that in this situation, you probably don't want to put every single thing in one DTO and then always use that DTO every time a request is made, no matter how minuscule the request is or how little data they really need in that situation. So that is the purpose of the DTO. So actually, with those changes, I may have broken some code. Firstly, let me update the namespace on this LeaveRequestDto.
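To make the contrast concrete, here is a sketch of the full detail DTO next to the slimmed-down list DTO. The exact property set on `LeaveRequestDto` is my assumption from the fields mentioned in the discussion (dates, comments, approval status):

```csharp
using System;

// Full detail DTO: everything a single-record view needs.
public class LeaveRequestDto : BaseDto
{
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public LeaveTypeDto LeaveType { get; set; }   // DTOs speak to DTOs, not domain types
    public DateTime DateRequested { get; set; }
    public string RequestComments { get; set; }
    public DateTime? DateActioned { get; set; }
    public bool? Approved { get; set; }
    public bool Cancelled { get; set; }
}

// Listing DTO: just what a grid row needs to display.
public class LeaveRequestListDto : BaseDto
{
    public LeaveTypeDto LeaveType { get; set; }
    public DateTime DateRequested { get; set; }
    public bool? Approved { get; set; }
}
```

Note `Approved` is nullable: a pending request is neither approved nor rejected, which a plain `bool` cannot express.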
So that it knows its new location. And then I have to update my mapping to know that it has the LeaveRequestListDto to worry about also, so just go ahead and update those. And then let's do a build; alright, build successful. The last thing that we're going to do here is set up an IServiceCollection registration method. The purpose of this is, well, in keeping with dependency injection, we want to be able to have all of our injectable components defined inside of the application project, but then any client application can go ahead and just call the method and have everything registered inside itself. When we get there, you'll see it, but I'll just do this now. This would be ApplicationServicesRegistration. So this is going to be a public static class, and it's going to have a method that I'm going to call public static void ConfigureApplicationServices, which takes a parameter of IServiceCollection. So we can just go ahead and include any missing references. And then inside of this we get to say services.AddAutoMapper. Alright, if you're familiar with .NET Core and dependency injection, you'd see that this is just a service collection call that you would have seen in the Startup file of any .NET Core application. So the thing is that we're abstracting it into just this project, so that when we have that .NET Core application, we can just call this method and it would go ahead and register every service that is defined inside of it. So we are adding AutoMapper, and then I'm going to say Assembly.GetExecutingAssembly; go ahead and add any missing references. Now if you're familiar with AutoMapper, or you've worked with it before but you're not necessarily familiar with this, you'd probably be more familiar with seeing something like typeof, and then you would put the MappingProfile, and that would allow us to register that profile as the AutoMapper configuration.
The thing with this is that for every mapping profile you have, you would probably have to repeat this line. And based on the size of your application, you may end up with multiple mapping profiles, because I've seen setups where you can have mapping profiles per domain object, or per DTO, or per area, based on the number of domains that are present, right? Even at an application level, one client application may have different mapping requirements from another, so you keep them separate. So my point is that by saying Assembly.GetExecutingAssembly, it will just traverse the assembly for every mapping profile that has that inheritance, pretty much, and it will just work. Alright, no easier way to say it. So that is how we're registering AutoMapper in our dependency injection, and that is setting up AutoMapper in general. So if we build and it is successful, then we come back next and continue our application setup. 8. Create Queries with MediatR: Welcome back guys. In this lesson we'll be implementing two patterns that help us to promote loose coupling through our application. They are the mediator pattern and the CQRS pattern. Now the mediator pattern is seen as a behavioral pattern because it allows you to define exactly how a set of objects interact with each other. In more simple terms, you make a request and you get a result based on your request. Every time you make this request, you can expect this kind of result; that is the kind of consistency that it helps you to implement. It also helps you to abstract all of the logic associated with that request away from the calling application. Now the CQRS pattern: CQRS is short for Command and Query Responsibility Segregation. It helps you to separate the read and write operations for any data store. So in the name command and query, a command is anything that is going to augment the data: any write operation, any update operation.
And a query is merely reading the data. So you can always know that when you're doing something that's going to augment the data, it's a command; if you're reading data, it's a query. So let us get started. We're going to go over to NuGet firstly, and we're going to grab the MediatR package, which is, interestingly enough, authored by the same person who gave us AutoMapper, and that is Jimmy Bogard. So we're going to go ahead and install that. And once that is done, we're going to come back to our ConfigureApplicationServices method and we're going to make a few changes here. Some of those were oversights on my part initially, and I apologize. So we're changing this from a void to an IServiceCollection return type; that's one. And then we're adding these two lines: we're saying services.AddMediatR with Assembly.GetExecutingAssembly, just like the AutoMapper one, and we're returning services. So for the AddMediatR, I'm just going to go ahead and include the missing references, and there we go. That is what this service registration, or IServiceCollection method rather, should look like. Now that we've done that part, let us just backtrack and gather our thoughts and think about what we need to do. We need to implement the CQRS pattern, and CQRS, once again, separates commands from queries. In other words, we need a folder for commands, and we need a folder for our queries. Either a command or a query will be handled by what we call a handler. So when you make a request, it gets handled, and the handler is going to be either a command to augment the data or something to return data. So you see that there are quite a few moving parts at this point, and this is why there can be so many opinions on how it gets implemented. So once again, I'm not prescribing an implementation method or folder structure, but I'm just helping you to visualize it. You may change it based on your needs.
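Putting both registrations together, the finished registration method might look like this sketch. Note the `Assembly`-based `AddMediatR` overload shown here comes from the older MediatR dependency-injection package used at the time of the course; newer MediatR versions register via a configuration lambda instead (`cfg => cfg.RegisterServicesFromAssembly(...)`):

```csharp
using System.Reflection;
using Microsoft.Extensions.DependencyInjection;

public static class ApplicationServicesRegistration
{
    // Returns IServiceCollection (not void) so calls can be chained
    // from the client application's startup code.
    public static IServiceCollection ConfigureApplicationServices(
        this IServiceCollection services)
    {
        // Both calls scan this assembly: AddAutoMapper picks up every Profile
        // subclass, AddMediatR picks up every IRequest/IRequestHandler pair,
        // so new profiles and handlers register automatically.
        services.AddAutoMapper(Assembly.GetExecutingAssembly());
        services.AddMediatR(Assembly.GetExecutingAssembly());
        return services;
    }
}
```

A client application would then just call `services.ConfigureApplicationServices()` in its startup without knowing anything about the individual libraries.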
But this is how I went about approaching it, so let's go ahead and create a new folder inside of our application project; I'm calling it Features. And then inside of Features, we're going to have folders per domain type. And the reason I say domain type is that a feature of the application is relative to what you're going to be doing against our domain objects, against the database, right? So I'm going to be doing it in chunks based on the table-specific features that the application offers. So let's start off with a simple one like leave types. Inside of Features we're going to have LeaveType; I'm going to just pluralize this to LeaveTypes. And then inside of the LeaveTypes folder, I'm going to have a few other folders. I'm going to have Requests and Handlers, then inside Handlers I'm going to have Commands and Queries. And that is the folder structure that I'll be working with. Now, like I said, opinions differ on how a folder structure could look. Some implementations would actually have the top level for the domain or the feature name, and then they would have Commands and Queries, and then inside of Commands or Queries is where they would have all of the requests; or they'd have a folder, sorry, per query. Let's say inside Queries they would have a folder for the specific query, say GetLeaveTypeList; that would be the query. So inside of that you would have the GetLeaveTypeListRequest, the GetLeaveTypeListRequestHandler, and the DTO specific to it. So once again, there are many ways that you may see this implemented, but the concepts remain the same. You just want to make sure that you can identify commands separately from your queries, and you know where your requests are, and that requests are relative to a handler. So now that we have the folder structure in place, let us get started with some coding. So inside Requests, the first request that I want is to get the leave type list.
So naming-convention-wise, you always want to imply what the request is for, or what kind of request it is, in the name, right? So GetLeaveTypeList, and I'm going to append Request. Alright, so GetLeaveTypeListRequest. And this is a public class and it is going to inherit from IRequest. So this is courtesy of MediatR, and it then asks: what should this request expect in return? So when I request using this data type, what should I be getting back? Well, you should be getting back a List of LeaveTypeDto, because once again, data transfer objects are what will come in and go out, never the domain objects. So that is our request. Now, there might be times when you may have parameters to put in requests and so on, but this is a very simple one; I'm not going to get into any of those complications just yet. So that's our first request. Next up, we need something to handle this kind of request. Now this is a get request, which makes it a query. So inside of the Handlers section, I'm going to go to the Queries folder and I'm going to say add a new class, and I'm going to call this one the request's handler. So that is my naming convention: I take the request's name and then I just append Handler. So we'll go ahead and add that one as a public class, which is going to inherit IRequestHandler, once again courtesy of MediatR. And IRequestHandler asks: okay, what request am I dealing with? So its request is the get request. Alright, so that's the naming convention that I like to use, and more than that, you'll see that naming convention a lot, because then you can specifically tie a request to a handler by the similar name: one is the request, one is the request handler. So you can go ahead and include any missing references. So it says: what request am I handling, and what should I be expecting to return? So the return type of the request should be the return type you put in here. So it is expected to return LeaveTypeDto.
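The request class just described is tiny; as a sketch, assuming the `LeaveTypeDto` from earlier:

```csharp
using System.Collections.Generic;
using MediatR;

// The name says what the request does; IRequest<T> declares exactly what
// the caller gets back: a list of DTOs, never domain objects.
public class GetLeaveTypeListRequest : IRequest<List<LeaveTypeDto>>
{
}
```

An empty body is fine here; requests that need parameters (an ID, a filter) would simply add properties.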
Well, a List of LeaveTypeDto, rather. Alright, so now that that's done, we still have our red line. This red line is because we need to implement the interface. So almost automatically, we actually get a method here that says Handle. It gets the request as the parameter, and here is where we write all of the code to handle whenever this request comes in. Now before we start handling any requests here, we have a few dependencies. One, we need to be able to talk to the database. Now, we haven't implemented anything, but we do have our persistence contracts. So these are contracts that will eventually talk to the database; we're still not directly interacting with the domain here at all. Alright, so I need to inject the missing references. So using ctor, tab tab, I'm going to get a constructor, and then I'm going to inject an ILeaveTypeRepository. I'm just going to go ahead and add any missing references, and then, having typed it in the constructor, I can now say create and assign a field. Alright, there we go. And I tend to use the underscore, because I like to see the underscore so I know that this is definitely a private field in the class; that is my naming convention. You don't necessarily have to, because you see the auto-generated code didn't give you an underscore, so that's fine. Now another thing that I would want to inject is IMapper, because I do need to do some mapping when I get the data from the database. When this repository returns the domain objects, I need to convert them to DTOs to send back. So I need AutoMapper also. So IMapper, include missing references also, go ahead and inject it, create and assign the field, and there we go. So we have everything injected and ready. Now to implement this handler code, what I'm going to do is run a query against the leave type repository. So I'm going to say var leaveTypes is equal to await _leaveTypeRepository.GetAll. This is awaiting.
So that means this method needs to be asynchronous, and then you'll see that the one error goes away at least. And then what we need to return is the list of LeaveTypeDto. So I'm going to say return _mapper.Map, and this is where this comes in handy, into a List of LeaveTypeDto, rather. And what I will be mapping into that will be the list of domain objects coming from the query. And there you go, everybody's happy. So we have a request saying I want the leave type list, and then we have the handler saying, okay, I can handle this request for you. So that is what we mean by defining application behavior, or defining how objects relate to each other. This request will always relate to this handler. I believe if you tried to add multiple handlers for the same request, you would actually run into runtime errors because of the ambiguity. So that is how we get to clearly define that when you make this request, you can always expect this behavior. Now let's look at another request, and it's still going to be a query, but this time I want a specific leave type. So I was saying that you may see requests, or you may end up having fields or properties inside of your requests, that will be needed in order to handle the specific operation. So what if we wanted to get a leave type, not the whole list, but the leave type detail? So I'm going to create another request, and this one is going to be GetLeaveTypeDetailRequest. Once again, remember to make it public. Now in this situation, I'm going to add a property and I'm going to say Id. So we're getting the detail, which means we want a specific record, and the best way to specify a record is by sending over the ID. So we need a handler for this request. So I'm going to add that handler as another class, GetLeaveTypeDetailRequestHandler. There we go, make it public. Now, one step I missed: inside of the request I needed to inherit from IRequest.
And it is only expecting one LeaveTypeDto this time, right? So IRequest of LeaveTypeDto; go ahead and add all missing references. There we go. So the handler is now going to inherit from IRequestHandler; go ahead and add missing references. It is going to be handling requests of GetLeaveTypeDetailRequest, and it is expected to return a LeaveTypeDto. So once all of those missing references have gone in, we can go ahead and implement the interface, and then we start off with that. Now we're going to need basically the same injections that we had before, so I'm just going to expedite that: copy and paste that code, change the name of the constructor, and go ahead and add all missing references. And once you have done all of that, you can get started on the specific code. So in this situation I would say var leaveType is equal to, and we await the method call, so _leaveTypeRepository.Get, and Get is expecting an ID. So I have the request object here inside of the Handle method; I can now say request.Id. Alright, I'm racing through this a bit, but I'm just showing you that that is how you would go about using those fields with your generic repository. And then when you have specific methods defined inside of the leave type repository, you can have more specific fields in your requests that are needed to handle those kinds of operations. So once we get that, the next thing we need to do is return the LeaveTypeDto object. So I'll say _mapper.Map into LeaveTypeDto, and we're mapping the leave type. So once again, we only return DTOs. Now one thing to note is that in Handlers we have a clear separation between the Commands and the Queries, but in Requests there is no clear separation. So when requests start getting created for the commands, we would have to rely on our eyes and the naming so that we don't get confused as to which request is for what.
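Putting the last few paragraphs together, the two query handlers might look like this sketch, assuming the `ILeaveTypeRepository` contract and `LeaveTypeDto` described earlier:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using MediatR;

// Handles the list query: the two generic arguments tie this handler
// to exactly one request type and its declared return type.
public class GetLeaveTypeListRequestHandler
    : IRequestHandler<GetLeaveTypeListRequest, List<LeaveTypeDto>>
{
    private readonly ILeaveTypeRepository _leaveTypeRepository;
    private readonly IMapper _mapper;

    public GetLeaveTypeListRequestHandler(
        ILeaveTypeRepository leaveTypeRepository, IMapper mapper)
    {
        _leaveTypeRepository = leaveTypeRepository;
        _mapper = mapper;
    }

    public async Task<List<LeaveTypeDto>> Handle(
        GetLeaveTypeListRequest request, CancellationToken cancellationToken)
    {
        var leaveTypes = await _leaveTypeRepository.GetAll();  // domain objects
        return _mapper.Map<List<LeaveTypeDto>>(leaveTypes);    // converted to DTOs
    }
}

// The detail request carries the Id needed to find a single record.
public class GetLeaveTypeDetailRequest : IRequest<LeaveTypeDto>
{
    public int Id { get; set; }
}

public class GetLeaveTypeDetailRequestHandler
    : IRequestHandler<GetLeaveTypeDetailRequest, LeaveTypeDto>
{
    private readonly ILeaveTypeRepository _leaveTypeRepository;
    private readonly IMapper _mapper;

    public GetLeaveTypeDetailRequestHandler(
        ILeaveTypeRepository leaveTypeRepository, IMapper mapper)
    {
        _leaveTypeRepository = leaveTypeRepository;
        _mapper = mapper;
    }

    public async Task<LeaveTypeDto> Handle(
        GetLeaveTypeDetailRequest request, CancellationToken cancellationToken)
    {
        var leaveType = await _leaveTypeRepository.Get(request.Id);
        return _mapper.Map<LeaveTypeDto>(leaveType);
    }
}
```

Notice how strictly the types line up: the `TResponse` in `IRequest<TResponse>`, the second generic argument of `IRequestHandler`, and the return type of `Handle` must all agree, or nothing compiles.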
And I don't think that that's very efficient. So I'm going to have a Commands folder inside of my Requests, and I'm also going to have Queries. That way, whenever I want any request for a query, I know exactly where to go, and the same for commands. So like I said, there are differing folder structures. Some people, once again, would have just said LeaveTypes, and then they would have had Commands, and then inside of Commands they would have the different commands with all of the resources needed for that specific command: the DTO, the handler, the request, all in that sub-folder. So as you break this out, you're going to need more files and more folders, because you want to see a clear separation as much as possible. Once again, how this is implemented is relative to the creator. So I'm just going to move the queries that I've created so far into that specific folder, and of course afterwards update the namespace. And once I do that, I'm going to go ahead and update my handlers to know the new location of their corresponding requests. So with all of that done, I'm going to do a build, and the build was successful, and you see how it's all coming together. So if you want, you can go ahead and implement the other features. We started with leave types; you can at least implement the two query operations relative to leave allocation and relative to leave request. 9. Finishing up Queries for MediatR: Welcome back guys. We are continuing in the same vein of setting up our queries for our additional features. So we already went through doing it for leave types, and the assignment was to do it for leave requests and leave allocations. So I've already done that and I'll just be walking you through. So if you haven't completed it, I'll go slowly enough and explain everything that I'm doing so that you can replicate it.
And if not, then we can just compare notes, and feel free to let me know if you have done anything differently from how I have done it. So the first one, let's start with leave allocations. So I have the handlers and the requests, and I have the queries in either one. Now, following the standard so far, you'd see that our requests look fairly similar to what they looked like for leave types. We have the GetLeaveAllocationDetailRequest, which takes the ID, and I also have one for the list. Alright, so in the queries, I have a slight surprise for you. I don't know if you did it this way, but I'll walk you through exactly what has been done differently. So of course we're including the correct repository, the leave allocation repository rather, as opposed to the leave type repository, because we're dealing with leave allocations, right? Now in terms of our handling, you'd see that I have a method here that we did not go through in our repository setup: I have GetLeaveAllocationWithDetails. Now, the purpose of me specifying this is that when we get the details of the leave allocation, and if you need a reminder, our leave allocation actually has a navigation property of leave type, which means if I wanted to show which leave type has what number of days, I'd want to show the name of that leave type. Showing the ID is useless, right? For the client, or rather the user, they would need some amount of details of the leave type on their side. So when I say with details, the purpose of that method is to do that Include, include that navigation property and everything, so that by the time I get back that leave allocation, I would have all of that logic done for me already, and all I have to do is map and return. So at this point I'm just going to pause and note that you will see designs sometimes where they actually do all of that complex logic right here in the handler.
So I've seen where they would actually directly connect to the context here in the handler, and then do all of those raw queries right here in the handler, and then massage the data, and then finally return what they need to return. So that's one design pattern you might see. And the other one is where we abstract away all of that logic and those complex queries and business logic into methods on the repository. Now, I'm not saying one is better than the other, because either approach has its pros and cons. In terms of making methods that specifically deal with a scenario, like getting the leave allocation with the details, it could be good to put that in our repository: a method where you have a direct reference to where that logic, or that kind of operation, is happening, because you may need to reuse that same kind of logic in multiple handlers. So that would actually reduce the repetition of the same kinds of queries and the same kinds of inclusions across multiple handlers, if you just have it in one repository space. The downside to repositories, however, is how specific things can get. Yes, we have the generic stuff, but obviously this is not a generic operation that every single feature or domain object may use the same way, so we have to have a specific method, and then we're probably going to end up down a rabbit hole of having many specific methods in our repository. So that's one of the downsides of the repository pattern. Now, like I said, if you do everything here in the handler, it would work, but once again, the downside of that is you might need to repeat that kind of operation across multiple handlers, and then you end up repeating code in multiple places, and then modification might become difficult in the long run. So that's the surprise. So if you take a look at our interface, you'd see here that I actually have both for the list and for the individual. Now the jury's out on whether you need both.
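The repository-side option just described might be sketched as an extension of the specific repository contract. Method names follow the course's convention; an eventual EF Core implementation would call `.Include(q => q.LeaveType)` before materializing, so every handler reuses one method instead of repeating the inclusion logic:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Specialized queries live on the specific repository, not the generic one,
// because only leave allocations need their LeaveType navigation property loaded.
public interface ILeaveAllocationRepository : IGenericRepository<LeaveAllocation>
{
    // Single allocation with its LeaveType eagerly loaded.
    Task<LeaveAllocation> GetLeaveAllocationWithDetails(int id);

    // All allocations with their LeaveTypes, for listings.
    Task<List<LeaveAllocation>> GetLeaveAllocationsWithDetails();
}
```

The trade-off discussed above is visible here: the contract stays discoverable and reusable, but every new scenario adds another specific method to the interface.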
The reason I included it for both the detail and the list, of course, is that in the list of the leave allocations, I want to show the leave type name and the days, so I would need to include the details there. And then if you're viewing one of them, you would want to see the details also. So I've just done that, and I did the same thing for leave requests. When you're working with a lot of files, you need to navigate quickly, so I'm just going to use this blue arrow up top here to sync the Solution Explorer with the active document. In leave requests you see that I have similar methods, because in the leave requests we need to know what kind of leave is being requested, and if I'm viewing the list, you'd want to see that this leave type was requested on this date, et cetera. Alright. Now, in terms of our leave requests, I have another slight surprise, and I hope that you guys caught on and did this already. The requests look fairly the same, but in terms of the return type, you'd notice that I have GetLeaveRequestDetailRequest returning the LeaveRequestDto; however, for the list, and we discussed why we would separate the DTOs earlier, we have the LeaveRequestListDto. So the list is returning a list of the DTO specialized for the list, and the detail is returning the DTO with much more detail in it. For the queries, it's a similar story: the GetLeaveRequestListRequestHandler, of course, is returning the list of LeaveRequestListDto, and the handler looks fairly similar, where it's calling the leave request repository and calling GetLeaveRequestsWithDetails. Similarly, when we're looking at the detail handler, it's the same thing except we're returning just the LeaveRequestDto. Now, while setting this up, you might have noticed that you get a lot of red lines when your types are mismatched.
And if you had the wrong data type here, or the wrong request type being referenced, everything would break. So it's very, very strict: if you've said that the request expects a leave type DTO, you cannot put any other data type in the handler that you've said should deal with that request. Alright, so do be very, very careful about that. Once you get used to the pattern, these things will come more naturally — I can only imagine it's a bit frustrating initially. But once again, you have the code to reference, so you can always pause when you need to and replicate these bits of code in your application accordingly. Now, in our next lesson we're going to look at setting up our first command, and the command, once again, augments the data in the database. So we're going to look at what it takes to set up a create command for any one of our domain objects. 10. Create Commands with MediatR: Welcome back guys. In this lesson we're going to be taking a look at setting up commands. Commands, once again, augment the data, and we're going to be starting off with our create command. Now, if you're looking at my Solution Explorer, you'll notice that I have a few more files and a restructured DTOs folder. Let me just collapse everything else so you can focus on that section. In the early stages of planning out this kind of architecture, you're always going to be changing things around, because it's good to get it right from now, as opposed to later on when you have many more files and many more references to update when you move these files around. In the early stages of setting up the DTOs, I had indicated that you'd probably want to have a folder for the DTOs and then have the different types of DTOs in there. We did that with the requests, where we had the leave request DTO and the leave request list DTO inside a LeaveRequests folder.
So I've just extended that concept to the other DTOs, where I have a LeaveAllocations folder and a LeaveTypes folder — and let me just correct this one's name. Inside these folders, we're going to have the different DTOs. Once again, it is not absolutely necessary to have multiple DTOs, or DTOs specific to a purpose, but I'm going to show you why it is sometimes beneficial. With the leave type, the risk is low: we have the leave type DTO, and all it requires is a name and the default number of days. To me, that's simple enough — this data runs very little risk of over- or under-exposing anything for any of the operations. We can create easily using this DTO, we can list easily, and we can look at the details. It's very simple. However, with leave requests, we had discussed that the leave request DTO has far more details in it. In a list setting, we don't need that many details — you really just need the leave type requested, the date it was requested, and whether it's approved. Now, in the create, we once again don't need the consumer to provide all of those details, because if you compare create to the leave request DTO, we really don't need the date actioned: when somebody is creating a leave request — when I am asking for leave — there is no date actioned yet, so that doesn't need to be provided. The date requested can be put in by us, by the system; we don't need to get that from the user or from the consumer. The start and end dates, sure — those are essential. We need to know the leave type, but we don't need the entire leave type object, only the identifier of the one being requested. And at that point, it's neither approved nor cancelled. So I can have a specific DTO — and as I'm talking, I see that I have extra fields in this one — a CreateLeaveRequestDto, which is going to have the start date, the end date, and the identifier for the leave type being requested.
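As a concrete sketch of the split being described — the exact property names and shapes below are assumptions based on the narration, not the course's verbatim code:

```csharp
using System;

// Base DTO carrying the primary key, inherited by "read" and "update" DTOs.
public abstract class BaseDto
{
    public int Id { get; set; }
}

// Full DTO: used for viewing details (includes Id and system-managed fields).
public class LeaveRequestDto : BaseDto
{
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public int LeaveTypeId { get; set; }
    public DateTime DateRequested { get; set; }
    public string RequestComments { get; set; }
    public DateTime? DateActioned { get; set; }  // null until approved/cancelled
    public bool? Approved { get; set; }
    public bool Cancelled { get; set; }
}

// Create DTO: no Id, no DateRequested/DateActioned, no Approved/Cancelled —
// the client simply cannot "over-post" values the system should control.
public class CreateLeaveRequestDto
{
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public int LeaveTypeId { get; set; }
    public string RequestComments { get; set; }
}
```

Because the create DTO omits the system-controlled fields entirely, a malicious or buggy client has no way to set them — the restriction lives in the contract, not in runtime checks.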
And once again, the system can set the date requested. I also need to make this one nullable, so I'm going to use this opportunity to adjust it. This needs to be nullable — but I don't need the date actioned in the create at all, so I'll take it out altogether. In the leave request DTO, I'm going to make it nullable, because the date actioned really means: when was it approved or cancelled? And then, similarly, I'm going to apply this right down to the domain. Once again, it's good to see these things early and adjust them, because we haven't made the database as yet, so we don't have to worry about this rippling out too much. Alright? So that is why I have a specific DTO for creating the leave request, different from listing, different from the details. Likewise with the leave allocation: the allocation has a number of days, the details of the related leave type, the identifier, and the period. It's similar to the leave request — I don't need the details of the leave type at the time of creation. So when we talk about over-posting, it means that we are giving the client application the opportunity to provide too much data, and that is how hacking, bad data, and people introducing anomalies into your database happen. So this is a good way to restrict what can happen in an operation. So now that we have a good understanding of why we have more DTOs introduced, let us take a look at the commands. I'm going to start off with the easiest one, which is the leave type commands. I'm going to go to Requests and create a new class, and this one is going to be very specific: CreateLeaveTypeRequest. That is the request to create a leave type. I'll go ahead and add it, make it a public class, and then have it inherit, like we know, from IRequest — and I'm going to make it return an integer. This integer will be the name — or, sorry, the ID — of the record that has been created.
Alright, so on create, we're just going to tell you the ID of what you created. We don't know what the client may need to do after creation, but we're giving them the ID, so if they need to go to a details page afterwards, that's up to them — we're telling them it was successful, and here's the ID for the new record. Now, to go with the command, what we're going to have, as we know by now, is our matching handler for this request. So we inherit from IRequestHandler, and IRequestHandler is going to be implementing the CreateLeaveTypeRequest — go ahead and include anything that is missing — and it will see that it's returning an integer. Now, when we implement the interface, you'll see that we get that Task returning an integer in our handler. Now let's jump back over to the request for a bit, because there are a few patterns that you might see in a create request. One pattern is that people write all the fields directly on the request: when you're sending over the request to create a record of that type, they would put the fields on the actual request itself, saying these are the fields you're allowed to send with the request. In that case, the request basically serves the purpose of the DTO I just defined here. Now, personally, I don't like to mix and match — I don't like having DTOs here, but then, when it's a create, the fields over there; then I don't know where to go when I need to change something, and sometimes I won't remember. Instead, I keep everything at the DTO level, and the request is really just a mechanism to transfer the data: the DTO represents the data, and the request is just the transportation. With all that said, I'm going to introduce a property of type LeaveTypeDto on this request. This property is what the consumer will fill and send over with the request, so in our command handler we know that request.LeaveTypeDto is where the data is.
So of course, if we're going to be interacting with leave types and AutoMapper, then we will need our usual suspects injected in our constructor. For speed, again, I'll just go back to one of the existing handlers, copy and paste, change the constructor name, and then include any missing references. Alright — work smarter, not harder. So, now that we have all the tools we need, how do we set up our handler to create this record? But before I move on, I'm going to make a slight adjustment. One other pattern that you'll see — and I think I'm going to use it this time because, for me, it's cleaner — is to say command instead of request. When we're dealing with commands, it's not really a request: a request is when you're asking for something; a command is when you're telling something to happen. So instead of using 'request' in the name here, we say CreateLeaveTypeCommand. Alright, so by renaming at the source, I can let IntelliSense rename it across the board, but I will rename the file manually. So we have CreateLeaveTypeCommand, so we know for sure that this one is a command, and obviously the handler is supposed to be carrying out its command, so it becomes CreateLeaveTypeCommandHandler — and update any other references around the place. I think that is actually cleaner and easier on the eyes; it distinguishes our request classes from our command classes. So, for our operation, what we're going to do is define the leave type, and then we're just going to use AutoMapper to map into the LeaveType — so we're mapping from the DTO to the domain object this time around. So: mapper.Map&lt;LeaveType&gt;, looking at request.LeaveTypeDto. Alright, so just go ahead and include anything that's missing. There we go — so now we're saying leaveType equals the mapped version of that DTO as the domain object.
Then we carry out our operation: we say leaveType equals await our leaveTypeRepository.Add method. Remember that Add actually returns an object of the same data type, right? So leaveType is now going to be updated, because Entity Framework — which is what we're going to be using as our ORM — will update the ID after it does its operation. So we're getting back the object with the updated ID, and now that we have that ID, we can say return leaveType.Id. There we go. So now that the command has been handled, when we look at it, we see that it was really not that complicated to do. It's just three lines: we're getting the request with the data, we map it into the domain object, we pass it over to the repository for the operation to occur, and then we're just returning the ID. And that is it for the create leave type command. So you can actually pause here and attempt to do it for the others; I'm going to do it, and then we can compare notes. Alright, so I've skipped ahead and implemented the create commands for the other two domain objects. If you look at it carefully, you'll see that it's pretty much the same code apart from the names and the repositories involved. This is the leave allocation command handler — you can pause and replicate it if you need to — and the create request is just taking the CreateLeaveAllocationDto instead of the LeaveAllocationDto. Similarly for the leave request: same code, same structure, right? The command is just taking the CreateLeaveRequestDto. So you see that all of these look very similar, and you can go ahead and replicate them in your code.
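Pulling the three lines together, the create command and handler walked through above might look like this — a sketch, with repository and DTO names assumed from the narration:

```csharp
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using MediatR;

// The command is just the transport; the DTO carries the data.
public class CreateLeaveTypeCommand : IRequest<int>
{
    public CreateLeaveTypeDto LeaveTypeDto { get; set; }
}

public class CreateLeaveTypeCommandHandler
    : IRequestHandler<CreateLeaveTypeCommand, int>
{
    private readonly ILeaveTypeRepository _leaveTypeRepository;
    private readonly IMapper _mapper;

    public CreateLeaveTypeCommandHandler(
        ILeaveTypeRepository leaveTypeRepository, IMapper mapper)
    {
        _leaveTypeRepository = leaveTypeRepository;
        _mapper = mapper;
    }

    public async Task<int> Handle(
        CreateLeaveTypeCommand request, CancellationToken cancellationToken)
    {
        // 1. Map the incoming DTO to the domain object.
        var leaveType = _mapper.Map<LeaveType>(request.LeaveTypeDto);
        // 2. Persist it; EF Core populates the generated Id on the entity.
        leaveType = await _leaveTypeRepository.Add(leaveType);
        // 3. Hand the new Id back to the caller.
        return leaveType.Id;
    }
}
```

MediatR resolves this handler automatically when a controller or other caller does `await mediator.Send(new CreateLeaveTypeCommand { LeaveTypeDto = dto })`.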
And when we come back, we will look at the other commands, which include update and delete. 11. Finishing up Commands with MediatR: All right guys, welcome back. As you noticed, my screen is blank. In this lesson, we're going to be talking about adding the update and delete handlers and all of the assets that go with them. My screen is blank because I've added quite a few assets, and I'm going to go through them one by one with you so you can replicate them. But I want us to understand the decisions we're making at this point, as they are critical decisions. In this kind of architecture, the things you put in are all decision-based — once again, relative to what you're working on, your team, and what you hope the overall application will accomplish. So let's start off by looking at the DTOs. Earlier we went through the DTOs and kind of separated them by folders, so that we can see all the DTOs relative to a domain type quite easily. But at that point, we had one DTO for leave type, and we had said that, okay, that was low-risk because it could be used for many different operations. I have since broken it out into two, and the reason for this is: you have CreateLeaveTypeDto, different from LeaveTypeDto. Why? Because LeaveTypeDto inherits from the base DTO, and BaseDto, remember, gives us the ID. Now, the risk of this — and once again, the purpose of the DTO is to reduce what we call over-posting — is that if I am creating a leave type, I don't need an ID. I don't want to give the consumer the ability to provide an ID, because that would just cause problems. I'm giving them exactly what they need for their operation. So I created a new DTO for leave type where they only get Name and DefaultDays — they can't provide an ID field. LeaveTypeDto can be used for everything else — for editing, for viewing — because, well, it has those fields as well as the BaseDto, which has the ID.
Now, as you end up with more and more DTOs and you have fields repeating across the different DTOs, you may want to consider having a Common folder inside that DTOs folder to define the properties that a given DTO must have anyway — any LeaveTypeDto we come up with must have a Name and must have DefaultDays — so you could define a base class for that if you need to. Alright, now we can move on to the leave allocations. We have three leave allocation DTOs at this point, and I'm going to point out a mistake I made in one of them. The mistake was to have CreateLeaveAllocationDto inheriting from the base DTO. Alright, so you have to be careful about these things if we're trying to be strict — that was a mistake on my part. CreateLeaveAllocationDto, once again, should have absolutely no reference to any primary key whatsoever, but we do need the other fields. The update DTO, however, does need access to the BaseDto for updating purposes. Now, it could be that we say, okay, well, once again create a base DTO that everybody inherits from, or we could just bite the bullet and use one DTO for create and update: if the ID is provided, we know it's an update; if it's not provided, we assume it's a create. Those could easily be arguments — I'm not saying no, I'm not saying yes. It depends on how granular you want to get, and these are your decisions to make based on your context. I'm just pointing out the differences and the risks accordingly. So I am going to break them out like that. Now, for leave request it goes even another level deeper. We do have the LeaveRequestDto — we're familiar with that. We have the list DTO — we've already established why that is different. We have the create DTO, which once again is incorrectly, at least in my context, inheriting from the base DTO, and which I'm going to correct now.
And then I have two more: the UpdateLeaveRequestDto, different from the ChangeLeaveRequestApprovalDto. Now, you're going to ask: okay, so why have both, when both are about augmenting the data? Well, once again, it's a matter of how strict you want to be. The update DTO serves the purpose of allowing the end user to update their request: they may want to change the start date or the end date, maybe they chose vacation when they should have chosen sick, maybe they want to put in new comments, or they just want to cancel it. So I'm giving them exactly what they can do using this DTO — if they send their request to update, these are the only things that can be updated. The ChangeLeaveRequestApprovalDto, however, has only the ability to change the approved flag, yes or no — the approver is really just saying yes or no. I mean, if you had more fields, like additional comments, you could add them to the DTO; of course, they would have to be present in the domain object. But my point is that I am using the DTO here to help me enforce certain business rules and certain behaviors that my application is capable of. And, once again, I have seen situations where there's one flat DTO and decisions are made in the handler based on the data present in the request or in the DTO, determining which kind of operation is to be carried out. So it depends on how far you want to break out the DTOs to have that level of granularity. You could know that when it's an approval request, you have that change-approval command and you know exactly which handler to go to. Or you can have one big command handler taking the data and a bunch of if statements: if the ID is present, do this; if it's not present, do that; if it is present and this flag is not null, assume it's an approval. You could do it that way, but I'm not doing it that way.
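The two "write" DTOs being contrasted above might be sketched like this — the field lists are assumptions based on the narration:

```csharp
using System;

// Assumed base class carrying the Id, as described earlier.
public abstract class BaseDto
{
    public int Id { get; set; }
}

// The end user can edit dates, leave type, comments, or cancel —
// and nothing else.
public class UpdateLeaveRequestDto : BaseDto
{
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public int LeaveTypeId { get; set; }
    public string RequestComments { get; set; }
    public bool Cancelled { get; set; }
}

// The approver can only flip the approval flag — the DTO itself
// enforces that business rule.
public class ChangeLeaveRequestApprovalDto : BaseDto
{
    public bool? Approved { get; set; }
}
```

The narrower each DTO, the more of the business rules live in the type system rather than in runtime if-statements.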
I prefer to know that when it's an approval request, I have that change-approval command; if it's an update, it's an update command — and I know exactly where to go to do what. So now that we have that detailed explanation out of the way, I'm going to jump over to the features folders. In Features, under LeaveAllocations and LeaveTypes at least, I have already defined the new handlers and their commands. So let's take a look at the simpler one. We have the UpdateLeaveTypeCommand, and this command for the leave type is just inheriting from IRequest as we know, but it's returning what's called a Unit. If you hover over Unit, you'll see that it's a MediatR-provided construct that represents nothing, like void — no content. In API design, when you do an update you'd usually return a 204, which is short for No Content. It means it was successful, but I have nothing to show you. That's basically what Unit is. And then we're taking that LeaveTypeDto as the property. I haven't implemented the commands just yet in the handlers, so we can do that together. I'm just showing that we have the same construct for the UpdateLeaveAllocationCommand, where we're taking the UpdateLeaveAllocationDto and returning that Unit, and we have that command's handler defined accordingly, also empty. So we're going to do those two together, because those are pretty simple, and then we're going to spend some time exploring the leave requests and the business rules there. Now, the workflow for our handler — and I'm going to start off with the leave type command handler — is: one, retrieve the original record; two, update that original record; and three, send it back to the database, and then return the Unit, or at least something to say it was successful. So, two approaches.
One: you can use the repository and add a specific method — but then, as with the leave requests, there might be more specific methods, and once again you're going down that rabbit hole of having very, very specific methods in every single repository along the line. Or you can just do most of the work inside the handler, which is what it's for anyway. Alright. So I'm just going to say var leaveType equals leaveTypeRepository.Get, and then we're looking in the request, in the LeaveTypeDto, at the ID. Now, another design pattern that I have seen is that for an update command they would use the regular DTO — let's say the one that does not have the ID — but put the ID property on the request or command object itself, so that when you're making the update you include the ID on the command, and the DTO is just the data. There are so many ways to do this, and once you understand what you're doing, you can determine the best implementation based on your context. Here, the leave type is going to be retrieved based on the ID off the payload coming in with the request. And then we can say mapper.Map — and notice that I'm not specifying a destination type this time; I'm using the two-argument overload. So: mapper.Map, open parenthesis, and then I'm saying this is the source of the data, and I want the leaveType we just got from the database to be the destination of the data. (I'm getting that red line because I haven't finished the line — I apologize.) There we go. So mapper.Map is saying: please just update whatever is on the right with whatever is on the left, whether it's different or not — just update it, because the update request is going to send everything over.
And that's why our DTO needs to have as many fields matching the domain object itself as possible: it's going to have a Name, it's going to have DefaultDays. We don't know what has changed, so that's why we're saying, AutoMapper, just update all the values. If they blanked out the name, then we're going to update it to a blank name — hopefully they didn't. However, if they didn't change the name, then the expectation is that the same name comes back. So, once again, we can't account for what might have been changed; we're just saying: please update the leave type with the corresponding values coming from the object on the left. Then we're going to await our leaveTypeRepository.Update, and this is where we send over our leaveType, post-mapping. And then we just return Unit.Value. And that's it — that is our update operation. Well, I have this red line up here because I have int and not Unit; after making that change, everything is okay. So that is it for updating the leave type. Now, I'm actually just going to copy this — I'm not going to give myself too much work — and jump over to UpdateLeaveAllocation. The only things we're doing differently here are that instead of using the leaveTypeRepository, I'm using the leaveAllocationRepository; instead of LeaveTypeDto, it's LeaveAllocationDto; and instead of calling the object leaveType, I'm calling it leaveAllocation. Everything else is pretty much standard and pretty much the same, and this will handle our update request. Alright, so I've gone ahead and completed the leave request commands for updating also — we see now that it's not that complicated. I've done the UpdateLeaveRequestCommand and the corresponding handler. Our leave request command handler takes the command and returns Unit.
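The leave type update handler just walked through might look like the sketch below; repository method names (`Get`, `Update`) are assumed from the narration:

```csharp
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using MediatR;

public class UpdateLeaveTypeCommandHandler
    : IRequestHandler<UpdateLeaveTypeCommand, Unit>
{
    private readonly ILeaveTypeRepository _leaveTypeRepository;
    private readonly IMapper _mapper;

    public UpdateLeaveTypeCommandHandler(
        ILeaveTypeRepository leaveTypeRepository, IMapper mapper)
    {
        _leaveTypeRepository = leaveTypeRepository;
        _mapper = mapper;
    }

    public async Task<Unit> Handle(
        UpdateLeaveTypeCommand request, CancellationToken cancellationToken)
    {
        // 1. Retrieve the original record by the Id in the payload.
        var leaveType = await _leaveTypeRepository.Get(request.LeaveTypeDto.Id);
        // 2. Two-argument Map overload: copy the DTO's values onto the
        //    existing entity instead of creating a new object.
        _mapper.Map(request.LeaveTypeDto, leaveType);
        // 3. Persist and return "no content".
        await _leaveTypeRepository.Update(leaveType);
        return Unit.Value;
    }
}
```

The `_mapper.Map(source, destination)` overload is the key detail: it mutates the tracked entity in place, which is exactly what an update needs.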
The difference, once again, is just the repository being used; it is pretty much the same operation. Now, one thing that you may want to consider is the specific business rules around this kind of operation. So yes, the code looks the same, but when updating a leave request there might be other things that need to happen, right? In the case of the change leave request approval, where we only get approved or not, we may also need to set the date actioned, and probably update some other things. So I think it would be good if we had a specific function in our repository to handle the approval change. Now, the cool thing about requests is that we don't necessarily have to have a request/handler pair for every scenario all the time — I could actually reuse the same request and handler to handle this kind of operation. So let's look at this situation. Inside the UpdateLeaveRequestCommand — not leave allocation, apologies — we could have the LeaveRequestDto, but I could also have a property of type ChangeLeaveRequestApprovalDto. So this request is capable of carrying either one of these objects. Now, in the handler, I can make a decision and call the appropriate method based on which one is present, because it's still an update command, right? So inside this update command handler, I can put in a bit more logic. I can say: if request.LeaveRequestDto is not null, then this is the route I want to take. I can add an else — and I'm going to be very explicit with this else, because I don't know if maybe in the future I may have some other business rules. So I'm going to settle for these two explicit situations — this should be an else-if, apologies — else if request.ChangeLeaveRequestApprovalDto is not null, then do that, right? And either way, one of the branches runs and we return.
So I'm going to take all of this logic and say: carry out this operation when the LeaveRequestDto is not null. That means whoever is consuming, whoever is interacting with this handler and sending over their request, needs to ensure that they fill the appropriate fields according to what they want. I'm just showing different flavors, because you could go with a request/handler pairing for every single scenario, or you can kind of group them together, like I said, in the handler — some people put a lot of business logic in this section, so once again, that's up to you. So, if the ChangeLeaveRequestApprovalDto is not null, we have a decision to make: what exactly are we going to do? We know we have to retrieve the leave request one way or the other. We're not going to do the comprehensive mapping; all we really want to do is eventually call the update, but with other things happening. So I'm going to call await leaveRequestRepository.ChangeApprovalStatus. Now, what do we give it? I could give it the ID, I could give it the leave request object, as well as the status that it should be changed to — there are a number of approaches we could take. Now, we always talk about not repeating code, right? So let's say I do this: I go and fetch the leave request, relative to the ChangeLeaveRequestApprovalDto's ID, and then I give the operation the retrieved leave request as well as the value from the DTO's approval field, whether it's approved or not. Alright — but now I'm repeating this call to get a leave request, and I can only make this call relative to whether it was the LeaveRequestDto or the ChangeLeaveRequestApprovalDto that came in, because if one of them is null, then I can't get that ID from it, and I have to make a decision either way. So, seeing all of that — okay, let's refactor. So to make our lives easier, we are going to go back to the command.
And I'm going to say: well, now I see good reason to tell the consumer to include the ID on the command itself. The ID must be present either way, so do include the ID here. When you include the ID there, I can easily say request.Id from outside of the if block, and then I know I have the leave request that needs to be modified. Now, if the whole DTO came over, then fine — I know exactly what to do. If it is just the approval change, I don't have to go and find the record specifically inside the branch; I have it already and I can just pass in the approval status. So this method can be defined in our ILeaveRequestRepository — I'm just going to make it void, so it's just a Task, and the parameters would be the LeaveRequest and a nullable boolean for the approval status. Alright, so then when we call it, we're just passing in those two arguments appropriately. So this is one way of writing the code. I'm sure you're probably sitting there saying, okay, it could have been done this way, or I need to do it that way for my situation — that's fine. As long as we understand the flexibility that we have when it comes to the whole request/handler pipeline and how we can handle different scenarios, we can use one handler to cover multiple potential scenarios based on the data provided in the request; you don't have to have a pairing for every single scenario you might have. Alright, so now that you have the hang of it, I want you to go ahead and implement the delete commands and handlers. So I've gone ahead and done that, and we have the DeleteLeaveTypeCommand, which inherits from IRequest. Notice I don't have IRequest with a data type — last time we used Unit. This is just to show that you don't necessarily have to put on that data type if it's not meant to return anything.
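Putting the refactored update handler and the delete pair together, a sketch — the command shapes and repository method names are assumptions based on the narration:

```csharp
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using MediatR;

// The Id lives on the command itself, so the handler fetches once up front.
public class UpdateLeaveRequestCommand : IRequest<Unit>
{
    public int Id { get; set; }
    public UpdateLeaveRequestDto LeaveRequestDto { get; set; }
    public ChangeLeaveRequestApprovalDto ChangeLeaveRequestApprovalDto { get; set; }
}

public class UpdateLeaveRequestCommandHandler
    : IRequestHandler<UpdateLeaveRequestCommand, Unit>
{
    private readonly ILeaveRequestRepository _leaveRequestRepository;
    private readonly IMapper _mapper;

    public UpdateLeaveRequestCommandHandler(
        ILeaveRequestRepository leaveRequestRepository, IMapper mapper)
    {
        _leaveRequestRepository = leaveRequestRepository;
        _mapper = mapper;
    }

    public async Task<Unit> Handle(
        UpdateLeaveRequestCommand request, CancellationToken cancellationToken)
    {
        var leaveRequest = await _leaveRequestRepository.Get(request.Id);

        if (request.LeaveRequestDto != null)
        {
            // Full update: copy the editable fields onto the entity.
            _mapper.Map(request.LeaveRequestDto, leaveRequest);
            await _leaveRequestRepository.Update(leaveRequest);
        }
        else if (request.ChangeLeaveRequestApprovalDto != null)
        {
            // Approval-only change via the specific repository method.
            await _leaveRequestRepository.ChangeApprovalStatus(
                leaveRequest, request.ChangeLeaveRequestApprovalDto.Approved);
        }

        return Unit.Value;
    }
}

// The delete pair: the command carries only the Id and returns nothing.
public class DeleteLeaveTypeCommand : IRequest
{
    public int Id { get; set; }
}

public class DeleteLeaveTypeCommandHandler : IRequestHandler<DeleteLeaveTypeCommand>
{
    private readonly ILeaveTypeRepository _leaveTypeRepository;

    public DeleteLeaveTypeCommandHandler(ILeaveTypeRepository leaveTypeRepository)
    {
        _leaveTypeRepository = leaveTypeRepository;
    }

    public async Task<Unit> Handle(
        DeleteLeaveTypeCommand request, CancellationToken cancellationToken)
    {
        var leaveType = await _leaveTypeRepository.Get(request.Id);
        await _leaveTypeRepository.Delete(leaveType);
        return Unit.Value;
    }
}
```

Note that the caller decides which branch runs by filling only one of the two DTO properties — the handler never guesses from partial data.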
So this delete command will have a property for the ID. And then, inside the handler, all we're doing is retrieving the record based on the ID, sending it over to the delete, and then we're just returning Unit. And at this point, that is the general theme for them all — it's the same thing for the leave request. Now, whether or not you actually expose functionality to delete a leave request is entirely up to your business rules, because it might be that there is no hard delete — it's only a soft delete, or 'delete' really means cancel, right? That would just flag it as cancelled, disregard it, but keep the record. So I'm just showing you how to put in the functionality; once again, the business rules and the application thereof are relative to your situation. So that is it for setting up the delete and update handlers for our domain objects, and pretty much that's the gist of the whole mediator pattern coupled with the CQRS pattern. As we go along, what we're going to explore is making this a bit more bulletproof, because right now anything can happen: anybody can come and create anything, and there are no real rules to govern what is valid versus what is invalid. So we'll look at how we handle invalid data coming in, and how we can make it a bit universal and foolproof. 12. Adding Validation: All right guys, welcome back. In this activity we'll be setting up validation for our DTOs and our commands. Now, before we move on, there's a quick correction I wanted to make to the code I wrote in the update leave request command: I had inadvertently used the LeaveRequestDto. If it's the update, then it should be the UpdateLeaveRequestDto. So if you caught that error and corrected it yourself, then kudos to you; if not, you can go ahead and make this change with me — no problem.
Now, what we are talking about, on the topic of validation, is the ability to make sure that the data we're receiving is, well, valid before it goes into the database — because as it stands, there is nothing here to stop us from committing invalid data to the database, and one thing that is very, very important is data integrity. You don't want to create records with vital data missing — a leave allocation where we don't know what leave type it is, for instance — so you want to validate the data and, of course, reject it if it doesn't meet the standards. Now, if you're used to MVC and model validation, you're probably thinking that we could easily put data annotations on the models, which is very true — I have found those to be useful. But when you want to extend beyond the default ones, you have to start building out extensions and so on, which is also fine. In this particular program, however, we're going to be using FluentValidation, which is a library that allows us to use a fluent syntax to build very, very powerful rules and validation structures around the properties in our classes. Alright, so to get started, we're going to jump over to NuGet and search for 'fluent', and you'll see all of these wonderful search results popping up. A point of information: the documentation for this library is very good, and you can find it on the FluentValidation website; you'll see how you can extend it, massage it, and use it to its full capacity to help with your validation needs. So I'm going to go ahead and install the library with the dependency injection extensions, and once that is done we can close NuGet and get over to our setup. But one thing to note — I don't think I've mentioned this before — when you click on the .csproj file, you can actually see which packages are installed and their versions.
So alternatively, if you know the exact package name and version you want, you can actually paste a line like this inside of your .csproj file, build, and it will automatically get that package from NuGet for you. That's another way you can install these packages going forward. So let us start with our validators. Now, wondering where to put these validators, I'm just going to collapse everything so we can see all of our folders compressed, all right? And then of course we have the DTOs. The DTOs are where our validations need to happen, because they are the ones accepting the information on our behalf. We don't need to validate the ones that are being used for the queries — that's kind of useless, right? Because the read operations don't need validations. The write operations, the augmenting operations, however, do. So, in one of the folders — let me start off with the easier one — I'm going to add a new folder and just call it Validators. Inside of this, I will add a new class, and this one is going to be CreateLeaveTypeDtoValidator. Of course, I make it public, and then I let it inherit from AbstractValidator, passing in the name of the exact class it is relative to. I'll just go ahead and include any missing references, which adds the using for the FluentValidation library, and then we're ready to go. So what we have is a constructor — ctor, tab, tab, and we get that constructor — and then we can start defining rules. Let me just take a quick look at the CreateLeaveTypeDto. What would we want to validate on this? Well, we'd want to make sure that a name value is provided, right? We could also probably limit the number of characters that this Name property can have — it mustn't be null, and it must have a maximum length.
And we'd probably want custom messages per situation. For the default number of days, we can probably say that it must be greater than 0, at least. So there are a number of things we could validate. What we'll say is RuleFor, and you'll notice this looks just like the lambda expressions that we're used to — and if you're not quite used to them, that's fine, we'll get used to them eventually. So it's RuleFor, and then we can say Name. The cool thing about FluentValidation is that you can chain things along — you can say: this rule and that rule and that rule and that rule. So, RuleFor, p => p.Name, is, let's say, NotEmpty — that means it must have a value. And then I can say WithMessage, so if it comes over empty, we want this message to be printed. I can do something like {PropertyName}, because we don't want to hard-code "Name is required", right? If we change the name in the class itself, we may forget to update the message accordingly. By just doing this, it will automatically pick up whatever the name is of the property that the rule is related to. So we'll say "{PropertyName} is required" — that's our validation message. Another validation we could add is NotNull, letting it know that this should not be null. And I put my semicolon prematurely, so let me remove that — it should also not be null. Let's see what else we can have. We can also say that the maximum length of the Name property is maybe 50 — there should be no leave type with a name that exceeds 50 characters, right? And then we can add another WithMessage to go with that rule. All right, I'm just giving you ideas. You may have different requirements for your validation than I do, but these are general guidelines you can follow. So let's look at p.DefaultDays.
For DefaultDays, you'll notice that because the data type is different, some of the validations might not necessarily apply. I can't talk about maximum length with DefaultDays — that has nothing to do with an integer, right? You see that the errors are gone. NotEmpty says it must be present, but it's an integer, so it will pretty much always be present — we can leave that alone. And it can never really be null, because integers default to 0 when no value is provided. But we did say that it must always be greater than 0, and I'm sure there is a LessThan as well. So let's say the system should have no leave type with a number of days greater than 100, and it must be at least one. So the different data types get different rules, and we can chain them along as necessary and put our messages accordingly — WithMessage here, WithMessage there. So with those validations in place on the leave type DTO, let us see how we can go about making sure that these validations are run. In the command that creates the leave type, we do get that DTO from our command object, right? What I'm going to do, before I even do the mapping — because I don't want to waste resources on an operation with invalid data — is the validation first. So I'm going to say var validator = new CreateLeaveTypeDtoValidator(). While it's still a DTO, before I try to even map it over to the domain type, I'm going to invoke that validator. I'll say var validationResult = await validator.ValidateAsync(...) — we have the async option, hence the await — and then we pass in the object to be validated, which will be request.LeaveTypeDto. At this point, the validation result is going to either have errors or not.
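Putting the rules just described together, the validator might look something like this sketch (the DTO property names and limits follow the course's examples; exact names in your project may differ):

```csharp
using FluentValidation;

// Sketch of the validator described above, using FluentValidation's
// chained rule syntax. Limits (50 characters, 100 days) follow the
// values mentioned in the lesson.
public class CreateLeaveTypeDtoValidator : AbstractValidator<CreateLeaveTypeDto>
{
    public CreateLeaveTypeDtoValidator()
    {
        RuleFor(p => p.Name)
            .NotEmpty().WithMessage("{PropertyName} is required.")
            .NotNull()
            .MaximumLength(50).WithMessage("{PropertyName} must not exceed 50 characters.");

        RuleFor(p => p.DefaultDays)
            .GreaterThan(0).WithMessage("{PropertyName} must be greater than {ComparisonValue}.")
            .LessThan(100).WithMessage("{PropertyName} must be less than {ComparisonValue}.");
    }
}
```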
So I'm going to check validationResult.IsValid — we get that it's either valid or not based on the rules. If it is not valid... and for readability, I want to say if IsValid == false, right? Because in all fairness, when we just use the exclamation sign, sometimes when you're tired you might miss it when reviewing the code. So I'm going to be very explicit: if IsValid is equal to false, then I'm going to simply throw a new exception. If you're used to exception handling, you know that when an exception is thrown, that's it — it basically crashes the program. Later on we'll look at better exception handling, and all of that can actually help us write a bit cleaner code: instead of having a bunch of if statements to check a bunch of things, we just have exceptions that are thrown strategically to help the flow of the application. So in this situation, if it is not valid, we're just going to throw an exception — and we're going to look at how we can make custom exceptions also, which can be handled differently from actual fatal exceptions. That's pretty much it: we're adding validation here to make sure invalid data doesn't go any further. It doesn't go anywhere near the database — we don't want it to become a domain object if it is not valid. Now let's try probably the most complicated one. I'm going to challenge you to set up the validators for the leave allocation, which is really not that different: you just need to make sure that the number of days is present and is more than 0, and the leave type ID cannot be null — it has to be greater than 0 also. And then you could extend it to make sure that the leave type ID exists in the system, because if somebody tries to spoof and send a leave type ID that doesn't exist in our table, then that is also a validation error that we can actually catch before we try to commit to the database.
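The handler-side check just described might be sketched as follows (the repository and mapper fields are assumptions following the course's naming conventions):

```csharp
// Sketch of invoking the validator inside the create command handler,
// before mapping the DTO to a domain entity. _mapper and
// _leaveTypeRepository are assumed constructor-injected fields.
public async Task<int> Handle(CreateLeaveTypeCommand request,
    CancellationToken cancellationToken)
{
    // Validate the incoming DTO first so we don't waste resources
    // mapping or persisting invalid data.
    var validator = new CreateLeaveTypeDtoValidator();
    var validationResult = await validator.ValidateAsync(request.LeaveTypeDto);

    // Deliberately explicit (== false) rather than "!" for readability.
    if (validationResult.IsValid == false)
        throw new Exception(); // replaced later with a custom ValidationException

    var leaveType = _mapper.Map<LeaveType>(request.LeaveTypeDto);
    leaveType = await _leaveTypeRepository.Add(leaveType);
    return leaveType.Id;
}
```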
We can do that for the period too. So I'm going to challenge you to do that one, but we're going to work on the leave request together, because this one is going to have a few more things in it. So once again, I'm going to add a new folder, Validators, and then let us start off with the CreateLeaveRequestDtoValidator. What do we need to validate? Well, our dates need to be valid dates, and we need to check the leave type ID, so we're going to do that together. The comment is optional, so that's fine. So let's get into it. It starts off with our class CreateLeaveRequestDtoValidator — I make it public, and I'm inheriting from the AbstractValidator relative to our CreateLeaveRequestDto. All right, I've already written some of the code for you and we're going to go through it. It's not completed, because I want us to do certain parts together, but: RuleFor the start date — it needs to be less than the end date. Now, we saw that we could also use a scalar value here. I could have put an integer, but an integer would be an incompatible comparison with a date, so I could have put DateTime.Now, right? To make sure that the start date is not before today — or must be before today — which is not necessarily the case here, right? Based on the business rule, you may want to compare accordingly, but this business rule states that the start date must always be less than the end date. And with the message I can say "{PropertyName} must be before {ComparisonValue}". In our CreateLeaveTypeDto validator we had hard-coded the 50, the 1, and the 0, but if we use this placeholder correctly, we could easily replace those values with {ComparisonValue}. I'll leave those ones hard-coded for now — I'm just showing you your options, right? So I'm saying that the start date must be before the end date, and similarly for the end date: it must be greater than the start date, with {PropertyName} and {ComparisonValue}.
All right. Now, relative to the leave type ID, I did say that our validation could take a number of forms. One: you want to make sure it is greater than 0 — okay, fine. The more important one, though, would be that you want to make sure it exists. Now, if we only check whether it exists, even if they send over 0, 0 would never exist as a leave type ID in the database. So we chain the rules along, because then we can save that database call by doing the cheap check first. So I say GreaterThan(0), and then I'm also going to say it must exist. I'm using what you call a delegate here, and I'm going to erase all of this and retype it from scratch so you can see exactly what we're doing. I say MustAsync, open and close parentheses — it's async, so we're going to await — and then we define the delegate. We're letting it know it's an async delegate which takes some parameters. In this case, we need the id, which is the value — we're taking that value as parameter number one. Then token, a cancellation token, is parameter two. And then I'm using a lambda arrow to define a method block. This method block is where we will carry out the check to see if it exists. Now you're probably wondering: okay, that means we need a database call — how exactly do I call the database from just a validator? The cool thing about this is that it allows us to directly inject our dependencies. All right, so we can continue by injecting our repository — we know how to inject it: we put it in the constructor, and we can use Ctrl+. to initialize the field. I already switched over to my underscore prefix, which is optional of course. Using that, we can, inside of this MustAsync delegate function, check for the existence of the leave type via the repository.
So two things to note here — or three things. One: the validator allows us to inject other dependencies, like our repositories. That's one. Two: we can actually have a custom function doing customized validation. So let me type this in from scratch. Here we see the dot — MustAsync; I would have misspelled the async, that's fine — so MustAsync. And then, because we're using async, we have to let it know the delegate is async. The delegate is going to take two parameters: id, representing the very ID that we're validating, or the value we're validating; and token, representative of the cancellation token. Then we use our lambda arrow and open and close curly braces, and inside of these curly braces we have our logic. So the first line of our logic is to check our repository to see if the leave type exists, and then return the result of that check. All right, now, this function I just created — I extended our generic repository to have a method that returns a Boolean. It's called Exists, and it takes an int id. So you can add that to the generic repository and use it across every one of them. The point is that we can now use it to check if anything exists in a particular table, and in this situation it's a nice shoo-in to check if our leave type ID exists. So now that you're equipped with how to handle that leave type ID, I'm going to challenge you to go and set up validators for the leave allocation. Hit pause, take a few moments, set up the validators for the leave allocation and any other DTO that we haven't looked at just yet. And then when you come back, I'm going to show you another way that we can refactor our code to reduce the repetition.
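Before moving on, here is the async existence check from above assembled into a sketch (the `ILeaveTypeRepository` interface and its `Exists` method follow the course's description; note that `MustAsync` should return `true` when the value is valid):

```csharp
using FluentValidation;

// Sketch of the leave request validator with dependency injection and
// an async existence check, as described above.
public class CreateLeaveRequestDtoValidator : AbstractValidator<CreateLeaveRequestDto>
{
    private readonly ILeaveTypeRepository _leaveTypeRepository;

    public CreateLeaveRequestDtoValidator(ILeaveTypeRepository leaveTypeRepository)
    {
        _leaveTypeRepository = leaveTypeRepository;

        RuleFor(p => p.StartDate)
            .LessThan(p => p.EndDate)
            .WithMessage("{PropertyName} must be before {ComparisonValue}.");

        RuleFor(p => p.EndDate)
            .GreaterThan(p => p.StartDate)
            .WithMessage("{PropertyName} must be after {ComparisonValue}.");

        RuleFor(p => p.LeaveTypeId)
            .GreaterThan(0)
            // Async delegate: id is the value under validation, token the
            // cancellation token. Returning true means the value is valid.
            .MustAsync(async (id, token) => await _leaveTypeRepository.Exists(id))
            .WithMessage("{PropertyName} does not exist.");
    }
}
```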
All right, so I hope that you actually took the advice, went off and tried it yourself, and had some amount of success. That's good. But I want to show you another level we can take this to. Back when we were setting up the DTOs, we realized that we would end up repeating our properties across multiple DTOs. For instance, the CreateLeaveTypeDto and the LeaveTypeDto actually have the same properties, bar the fact that one relies on the ID, which we serve up through the BaseDto. So the validation rules for both of them will actually be the same, except maybe the ID, which only one of them needs validation for. Because one has an ID and one does not, and the validators so far have been very strongly typed — the CreateLeaveTypeDtoValidator is only for the CreateLeaveTypeDto, and the update DTO would have to have its own validator — what I've done is to extend it a bit. This is what they call pain-driven development: do what you can until it is no longer practical, then you refactor, right? When you're applying these SOLID principles, sometimes you don't see it right off the bat, but at a certain point you realize that something is getting tedious, or is not practical, not in keeping with the principle, and so you refactor your code to get the most out of the principle at that point. So at this point we're noticing that we have the same validation rules split across multiple files. Having multiple files for validations is fine, but having the same rules repeated can be dangerous, because if a rule needs to change, we could change it in one file and miss the other — we know that risk. So what I have done is create an interface which is an abstraction of our fields. For the leave type DTOs I've created ILeaveTypeDto, and this has the fields that we know the leave type DTOs need to have. In CreateLeaveTypeDto, I have it inherit ILeaveTypeDto, so its two fields are just the implementations of what has been defined in the interface. In the same way, LeaveTypeDto, while it does inherit from BaseDto, also inherits from ILeaveTypeDto — so LeaveTypeDto has the ID as well as the properties coming over from our interface.
Now, okay, so the next step is that we can create an ILeaveTypeDtoValidator, meaning I am validating against the interface. My rules are no longer directly applied to the LeaveTypeDto. They could be, that's fine, but then, like we saw, we'd have to have multiple validators: one for the LeaveTypeDto and one for the Create. Instead, I can set up validations against the abstraction. Both DTOs inherit from the abstraction, so these rules will apply to both of them. Then, when I have to get customized, I have my CreateLeaveTypeDtoValidator, with everything that we know already, but in the constructor I'm simply calling an Include method. This is FluentValidation allowing us to have validators that apply to another class — and this one really applies to the interface. What I'm saying is: in this CreateLeaveTypeDtoValidator, include the rules from the ILeaveTypeDtoValidator, and then I can have my custom rules also. So in the update DTO validator, I could have the same kind of syntax, but since it's the update, I also need a RuleFor for the ID field. I can say p dot — and you see it's giving me all of the properties, including Id, because the validator is against that type. My validation for the ID is that it should not be null, and it should come with a message: "{PropertyName} must be present". Because, of course, when you're updating, you need to send the ID of the record that you're updating, which is why we do need that validation rule for the update, and why I have to have the separate file for the update. But this is much cleaner, because at least we don't have to repeat the rules for the Name and the DefaultDays across both validators.
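The shared-rules refactor just described might be sketched like this (the update validator's target type is an assumption following the course's conventions; FluentValidation's `IValidator<in T>` is contravariant, which is what lets a validator written against the interface be `Include`d in validators for the concrete DTOs):

```csharp
using FluentValidation;

// Abstraction of the common fields, as described above.
public interface ILeaveTypeDto
{
    string Name { get; set; }
    int DefaultDays { get; set; }
}

// Shared rules, written once against the abstraction.
public class ILeaveTypeDtoValidator : AbstractValidator<ILeaveTypeDto>
{
    public ILeaveTypeDtoValidator()
    {
        RuleFor(p => p.Name)
            .NotEmpty().WithMessage("{PropertyName} is required.")
            .MaximumLength(50).WithMessage("{PropertyName} must not exceed 50 characters.");

        RuleFor(p => p.DefaultDays)
            .GreaterThan(0).WithMessage("{PropertyName} must be greater than {ComparisonValue}.");
    }
}

// Update validator: pulls in the shared rules, then adds the ID rule.
public class UpdateLeaveTypeDtoValidator : AbstractValidator<LeaveTypeDto>
{
    public UpdateLeaveTypeDtoValidator()
    {
        // Include the common interface rules...
        Include(new ILeaveTypeDtoValidator());

        // ...and add the update-specific rule for the ID.
        RuleFor(p => p.Id)
            .NotNull().WithMessage("{PropertyName} must be present.");
    }
}
```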
So you will see that I actually did that already for the leave request and for the leave allocation. For the leave request, I have ILeaveRequestDto, and its validator has the same code that we just looked at when we did the CreateLeaveRequestDtoValidator with the injection — in this case we have to initialize the field and then we do the rules. But when we look in the CreateLeaveRequestDtoValidator, we see that we still have to do our injection and initialize it, and then we pass that injected repository over into the Include method, because of course the ILeaveRequestDtoValidator needs that injection — when we call it, we have to provide that value for its constructor. So it's a bit of a daisy chain, but I think this is much cleaner either way, and we don't have to end up repeating all of these rules right across the board. You'll notice that both the create and update validators look very similar, except that the update has that additional rule for the ID. And just for completeness, we have the leave allocation DTOs as well — the interfaces don't include the ID. Anything that is common across all of them, I just put in the interface, and then anything else can be put directly into the DTO as needed and validated accordingly. But for create and update, those are really all the things we need. We could extend the validation rules — for the request comments, maybe limit the length — and we don't necessarily have to do anything for Cancelled. Once again, I'm just giving you the guidelines; your business rules and requirements may be different, but you set up your validations as needed. So our UpdateLeaveRequestDto inherits from the base and ILeaveRequestDto. We don't have to do that for the list DTO, because we're not validating the list — we're not validating the details DTO either — but the create definitely has to inherit.
And then the change-request-approval DTO takes from this DTO too, but we're not quite there — at some point we can validate it; I'm not prioritizing it. Update and create, though, definitely need to have it. All right, now for our leave allocation: ILeaveAllocationDto, and both create and update inherit from ILeaveAllocationDto. So I'm just going to jump over to the ILeaveAllocationDtoValidator, where we have rules for the number of days. This one is simple, and as the application grows and business rules change, we can easily put our validation here without modifying the custom queries or any other custom operations around said business rules. So, RuleFor NumberOfDays — right now I just have that it must be greater than 0, and my validation messages were a victim of some copying and pasting, so I'm just saying "{PropertyName} must be greater than {ComparisonValue}" for the GreaterThan rule. For the period — the period really should be the year, right? For the period, the year 2020, these were the number of days that you got; that's the whole point of the leave allocation table, in case that wasn't explained earlier. So the rule for Period is that it must be greater than or equal to DateTime.Now.Year. We could bolster this a bit more, but for now we'll just use that, with the message "{PropertyName} must be after this year". And we already wrote the validation rule for the leave type ID together. Now, in the CreateLeaveAllocationDtoValidator we're simply injecting the leave type repository, initializing it, and passing it over in our Include method; and for the update we're doing the same, except we also have that rule for the ID. So that is really it for validation. Yes, it took a while to get here, and there were some refactors along the way, but I'm sure you can see how it's all coming together to reduce repetition across multiple files and to keep everything structured.
So one consequence of following the SOLID principles, of course, is that you're going to end up with many more files, which we discussed earlier. But it is coming together nicely and helping us to reduce how many times we place the same thing in multiple places. 13. Adding Custom Exceptions and Response Objects: Hey guys, welcome back. Last time we were here, we were setting up our validations for our handlers and for our various DTOs. In a nutshell, we realized that we needed to put in some rules so that whenever we get the create leave type command with the CreateLeaveTypeDto, or whatever DTO, we can run it against the validator and then throw an exception if it is not valid. We should have done that across all the handlers — for update and create, anything that needs validation should have, at minimum, these lines. So I'll just click through, and you can go ahead and copy in case you didn't finish that up. This is for the update; we just looked at the create. For the leave type update, for the leave request — pretty much all of them look the same: they're all validating and throwing an exception. Now, I want to talk about custom exceptions and better responses, right? Because at the end of the day, right now all we're doing is throwing an exception. An exception can be thrown because we throw it manually, but it can also be thrown because of something else — it could be a database-related problem, it could be something else, right? So it's always good that the consuming application, or whatever is calling the handler, has a good idea as to what threw the exception. The cool thing about exceptions is that you can extend them. The base data type for an exception is the Exception that we're throwing here, and what we're going to do is create our own for specific purposes. We're going to start off by creating a new folder in our project called Exceptions.
And in it we're going to have BadRequestException, NotFoundException, and ValidationException. So you can go ahead and create that folder and these three files — remember to make them public, of course. What we're going to do is let each one of them inherit from ApplicationException. So Exception is the base type, and ApplicationException is used as a base type for application-defined exceptions, right? We're just going to let each one of our classes inherit from that. The BadRequestException we'll use later on, when we want to indicate that the request that was sent over was bad, but for now we're just going to write the code to wire it up. All of them will have a constructor, and for this one the constructor is going to take a string message, and then our base has to receive the same message that is passed in — the base being our ApplicationException. So that's what that exception looks like. Now, moving on, we can do the same thing for our NotFoundException, but we can be a bit more explicit with certain things. For instance, if we're going to be saying "not found", we'll probably want to say the name of what was being sought after, and maybe the key value. All right, so we're passing in name and key, but the base requires a string — it has three overloads, and we want to pass in a string here — so we can just pass in the message that we want to print. My message is going to say that the name, whatever it is, with its key was not found. So if they search for something and it's not found, we throw the NotFoundException. For the ValidationException, we're going to get a bit more fancy. We're going to want it to return the list of all the things that were wrong with the request, or with the data that was sent over in the request, right? So I'm going to have a List<string> and I'm going to call it Errors.
And then in the constructor, we're going to have the ValidationResult passed in. The ValidationResult type is coming from FluentValidation, so we just pass that whole object in — there it is, using FluentValidation.Results. Then we can initialize our Errors — I'll just initialize that here. And then we can say: for each validation error in the errors — or I can just shorten it and say foreach (var error in validationResult.Errors) — we want to add that error. So I'm just going to say Errors.Add, and then error dot — and we have an ErrorMessage. There we go. That error message would be whatever we set up in our validators as the message to be returned when the data is not valid. So now we have our custom exceptions, and we're really going to focus on the ValidationException right now. In our handlers, we can actually update this from throw new Exception to throw new ValidationException. So if the validation result is not valid, we throw this new exception and pass in our validation result — and you want to include any missing references. You see here it's asking which one; we know we have it defined in our custom exceptions. So you can go ahead and update each line that was previously just throwing a new Exception to now throw that ValidationException. And please remember, each time, that we're including our custom exception and not the FluentValidation or the data annotations one, all right? So go ahead and update them all and make sure you're including the correct library. Now at this point, I can imagine you're wondering: okay, so how can I use the other exceptions? Well, let's look at the NotFoundException. In the delete operation, we have to find the record, then do the delete, and then return, well, the Unit, right? But what if we don't find that record?
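Assembled from the description above, the three custom exception classes might look like this sketch (the message format in NotFoundException is an assumption; adjust the wording to taste):

```csharp
using System;
using System.Collections.Generic;
using FluentValidation.Results;

// Each custom exception inherits from ApplicationException, the base
// type intended for application-defined exceptions.
public class BadRequestException : ApplicationException
{
    public BadRequestException(string message) : base(message) { }
}

public class NotFoundException : ApplicationException
{
    // name: what was being sought; key: the identifier that was used.
    public NotFoundException(string name, object key)
        : base($"{name} ({key}) was not found.") { }
}

public class ValidationException : ApplicationException
{
    public List<string> Errors { get; set; } = new List<string>();

    public ValidationException(ValidationResult validationResult)
    {
        // Collect every failed rule's message for the caller.
        foreach (var error in validationResult.Errors)
        {
            Errors.Add(error.ErrorMessage);
        }
    }
}
```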
Well, that's a perfect opportunity to say: if the object that we're looking for comes back null — if the Get operation returns null — then we throw the NotFoundException. And what would we pass into the NotFoundException? Remember that it takes two parameters: the name and the key. So we could easily say nameof — and this is a nice way to keep it strongly typed. We're looking for a leave type, so nameof(LeaveType) — saying that the leave type with the ID that was passed in was not found — and request.Id. There we go. So then you can start decorating your delete handlers with that one line. In the case of the leave request, it will be the same thing, except we're checking if the leave request is null, and then this would be a leave request not found. Go ahead and update — you can do that with leave allocation also. All right, so once you're all done with that, you have set up some nice custom exception handling, at least, in your handlers. Now, another thing that we want to look at is custom responses. What happens when there's a positive result — and even when there's a negative result, right? This is where your architectural needs may differ from mine in terms of what you want to do, but here's the concept: we can define custom response types, or have base responses, where we can return data based on the situation. So if it fails, we could throw the exception, sure — or we could have a custom response that has a Success = false flag and contains the validation errors. The client will then always know: I am expecting a response of this data type, so I will always have this data. If the flag is false, I know it failed; if it is true, I know it passed. So we're going to look at something like that — kind of an alternative to just throwing an exception or returning just the ID. What we can do is define a new folder.
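The guarded delete handler just described might look like this sketch (repository method names follow the course's conventions):

```csharp
// Sketch of the delete handler guarded with the NotFoundException,
// using nameof() to keep the entity name strongly typed.
public async Task<Unit> Handle(DeleteLeaveTypeCommand request,
    CancellationToken cancellationToken)
{
    var leaveType = await _leaveTypeRepository.Get(request.Id);

    // If the record doesn't exist, report exactly what was missing
    // and which key was used to look for it.
    if (leaveType == null)
        throw new NotFoundException(nameof(LeaveType), request.Id);

    await _leaveTypeRepository.Delete(leaveType);
    return Unit.Value;
}
```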
So we have a new folder here called Responses, and in it we have a file, BaseCommandResponse. It will have three properties: Success, which is a Boolean; Message, which is a string; and a list of Errors, should we need to send back the errors. After we have this BaseCommandResponse, we can extend it to facilitate specific operations. For instance, one reason we'd probably want to extend it is that we want to return the DTO each time a leave type is created or updated, right? The base response might not be enough, so we could create a custom response associated with leave types — we already have the folder for the leave requests, and we could create another folder called Responses under it and extend the base there. I'm not going to get that complicated, however. What I'll do is just add another property here and call it Id — or we could call it RecordId. That means whenever a create happens, instead of just returning the ID — and this is a big change, so let's go through it line by line, and I'll address the red lines as we get there. Initially we were just saying: get the validation result, throw an exception, otherwise continue, and then return the leave request ID. Now what I'm doing is: firstly, initialize the response, so we have a base response — that's fine. Then I'm saying: if the validation result is invalid, set response.Success to false; the Message — you can put in a custom message if you wish; and the Errors we would like to fill with the same validation errors. So I'm just selecting the messages from that collection of errors. This Select has a red line because I need an extra library, which is System.Linq, so just make sure — we can add that together. And then it just gets the error messages, puts them in a list, and that goes in. So that's a nice one-liner instead of the foreach loop, right?
Then later on, we check if the add was successful, and so on. What happens is that if this fails, an exception would have been thrown automatically by Entity Framework anyway — so if this operation fails, we get an exception, and it would never get this far if it was not successful. So then Success is true, our message is "Creation successful", and then we set the Id. Now, that's why I said that across the board we were only returning IDs, right? We were only returning the ID for the newly created record. You could have a requirement where you need to return the entire record; at that point you could just extend BaseCommandResponse: create a new class called CreateLeaveRequestResponse, let it inherit from it, give it a DTO property for a LeaveRequestDto, do your mapping, and return it. Like I said, I'm not going to get that complicated right now — we can look at that in our additional considerations lesson — but I just wanted to get this concept of a custom response across. So then we return the response. Now this has a red line because we had defined our command to return int. So we go over to our command and let it know that the IRequest is supposed to return BaseCommandResponse. Then another error will appear, and we jump back — we're back in our handler — and we see that we can return the response as soon as we also update the handler, right? Remember that we have that command-and-return-type pairing, so let me just update that — and then, finally, the type on the Task for the Handle method. So there — when you want to change or alter your return types, those are all the changes you have to make. It's now BaseCommandResponse, right? So if you want to get granular — and I keep saying you don't have to get that granular — based on your requirements you may need to, or you may not.
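The response class and the reworked create handler just described might be sketched like this (field names are assumptions following the course's conventions):

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of the base response: a success flag, a message, the list of
// validation errors, and the ID of the affected record.
public class BaseCommandResponse
{
    public int Id { get; set; }
    public bool Success { get; set; } = true;
    public string Message { get; set; }
    public List<string> Errors { get; set; }
}

// Reworked create handler returning the response instead of a bare int.
public async Task<BaseCommandResponse> Handle(CreateLeaveRequestCommand request,
    CancellationToken cancellationToken)
{
    var response = new BaseCommandResponse();
    var validator = new CreateLeaveRequestDtoValidator(_leaveTypeRepository);
    var validationResult = await validator.ValidateAsync(request.LeaveRequestDto);

    if (validationResult.IsValid == false)
    {
        response.Success = false;
        response.Message = "Creation failed.";
        // One-liner replacing the foreach: project each failure to its message.
        response.Errors = validationResult.Errors.Select(q => q.ErrorMessage).ToList();
    }
    else
    {
        // If the add fails here, Entity Framework throws on its own.
        var leaveRequest = _mapper.Map<LeaveRequest>(request.LeaveRequestDto);
        leaveRequest = await _leaveRequestRepository.Add(leaveRequest);

        response.Message = "Creation successful.";
        response.Id = leaveRequest.Id;
    }

    return response;
}
```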
I am not making it a requirement to go and create command responses for every single solitary handler, or command handler, that I have. Right now I'm just going to be using the base response, and I'm also only going to be making that change to the leave request, for now at least, so that you get to see the idea of how you can have custom exceptions and how you can improve on top of them with your custom responses. 14. Additional Refactoring and Considerations: Hey guys, welcome back. This is more of a review session than a coding session, with some additional considerations. Throughout our activities I may have mentioned that you have alternatives — and you always have alternatives. Whether these alternatives are good or bad or, quote-unquote, "best practice" is sometimes relative to what you are doing and what you need to accomplish. That being said, of course, there are foundational principles that you want to adhere to regardless, and when you have those as your guide, you are more than likely going to make better decisions. One thing that we want to look at is separation of concerns. Separation of concerns led us to have multiple projects and far more files than we probably had in other projects — in fact, I think we already have more files in this one project than we did in the entire application that we're about to rebuild. Just in this project alone, we have multiple DTOs. Why did we have multiple DTOs? Well, one, we wanted that kind of separation because there might be business rules that govern what can occur in any type of operation. So let's look at the leave request DTOs. We had one for the listing, which had only the data that was absolutely necessary for when we need to serve up the list of leave requests. We have a DTO that has all of the fields that match what's in the table, and this could be seen as the detail DTO.
We also had the update DTO, which only had the few fields required for an update operation; we had the create DTO, with the few fields for a create operation, et cetera. So we split them out into multiple files. That is one of the — I want to say consequences — of adhering to the separation of concerns principle: you're going to end up with far more files than you're probably used to, and you're going to have to organize them in a way that you can always find them. So I started off with a DTOs folder, and then inside it I separated them by type. After that you also saw the validators. Because of the different validation rules that might be required for the different DTOs, we have multiple validators — one for create, one for update. But at the same time, we saw where we needed to consolidate, because it was getting overbearing and we started repeating ourselves. So you have the DRY principle. Because of the DRY principle, what we did was create an interface that had the base fields. Leave allocation is maybe a better example of that: we have ILeaveAllocationDto, which has all the fields that are necessary for any leave allocation. Then we have our DTOs inheriting from this interface — so yes, you see the fields again up here, but they're really being driven by this requirement of inheriting from the interface. And then we could set up one validator against the interface, so all the common fields, as defined by the interface, are validated in one place, and the other validators just include those validations and then implement their custom validations as needed. That helps us reduce the repetition of code — and once again, as your project grows, repetition increases the risk of forgetting to update one part of your project when you make a change somewhere else.
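The interface-plus-base-validator approach can be sketched with FluentValidation's `Include` mechanism; the property names here are illustrative, not necessarily the exact fields the course uses:

```csharp
using FluentValidation;

// Shared fields pulled up into an interface.
public interface ILeaveAllocationDto
{
    int NumberOfDays { get; set; }
    int LeaveTypeId { get; set; }
}

// One validator covers the common fields for every implementing DTO.
public class LeaveAllocationDtoValidator : AbstractValidator<ILeaveAllocationDto>
{
    public LeaveAllocationDtoValidator()
    {
        RuleFor(p => p.NumberOfDays).GreaterThan(0);
        RuleFor(p => p.LeaveTypeId).GreaterThan(0);
    }
}

public class CreateLeaveAllocationDto : ILeaveAllocationDto
{
    public int NumberOfDays { get; set; }
    public int LeaveTypeId { get; set; }
}

// The create-specific validator pulls in the shared rules with Include()
// and then adds anything specific to the create operation.
public class CreateLeaveAllocationDtoValidator : AbstractValidator<CreateLeaveAllocationDto>
{
    public CreateLeaveAllocationDtoValidator()
    {
        Include(new LeaveAllocationDtoValidator());
        // ... create-specific rules go here
    }
}
```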
Now, another thing to note is the validator initialization in our handlers — I don't think I pointed that out earlier, so I want to make sure I do now. When we're initializing these validators, we have to pass in an object of the type that the constructor is expecting. Remember that this validator needs the ILeaveTypeRepository, and its constructor says "I need the ILeaveTypeRepository", so there's no way to instantiate it without passing one in, like we have done here. So if you had a red line there this entire time, I apologize — I overlooked that one — but you can go ahead and fix it: what you need to do is inject the repository into the handler, and then, having injected it into the handler, you just pass it along, similar to how we had to do the same thing when we needed to include the base validator. It's the same principle: dependency injection. So I hope that cleared up an error — and if you didn't have that error, then kudos to you. Now, another thing that we neglected and can address now is the inclusion of all the mappings that are required for any mapping operation to be successful. Right now we only have mappings for the listing pairs — the domain type to the list DTO. But of course, when we're doing mapping like in this handler — mapping from CreateLeaveAllocationDto to LeaveAllocation — that means we have to have a representation of it inside our mapping profile. Here I've added the additional mappings, all of those for leave requests, and I like to group them so that they don't get mixed up all over the place. With the idea of grouping, you can probably create your own regions or sections so that you know exactly which set starts where; as your application grows, you may want to do something like that.
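A sketch of the grouped mapping profile described above; the specific DTO names are assumptions following the course's naming conventions, and the comment headers are the "grouping" the narration mentions:

```csharp
using AutoMapper;

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        // ----- Leave Request mappings -----
        CreateMap<LeaveRequest, LeaveRequestListDto>().ReverseMap();
        CreateMap<LeaveRequest, LeaveRequestDto>().ReverseMap();
        CreateMap<CreateLeaveRequestDto, LeaveRequest>().ReverseMap();
        CreateMap<UpdateLeaveRequestDto, LeaveRequest>().ReverseMap();

        // ----- Leave Allocation mappings -----
        CreateMap<LeaveAllocation, LeaveAllocationDto>().ReverseMap();
        CreateMap<CreateLeaveAllocationDto, LeaveAllocation>().ReverseMap();

        // ----- Leave Type mappings -----
        CreateMap<LeaveType, LeaveTypeDto>().ReverseMap();
    }
}
```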
Alright, so that is another thing that we definitely needed to address before we move on. One more thing that I want to talk about is our folder structure. I've mentioned multiple times that your folder structure may differ based on your temperament, or your outlook, or your vision for how these files need to be arranged. I like to think of CQRS — or the implementation of CQRS — around scenarios, and in those scenarios we have particular assets that are required. When I say a scenario, I mean something like creating a leave allocation. What is required to create a leave allocation? You need your command handler, you'll also need your command object, and maybe some other DTOs. So you may want to create a folder inside LeaveAllocations — maybe instead of Features you would have CreateLeaveAllocation — and then you have your handlers and all of your assets inside that one folder. The folder structure can differ; as long as the organization is such that you can find your assets when you need them, you are on the right path. Now, while I am in the CreateLeaveAllocationCommand, another thing you could consider — you see, there are so many considerations; this is not set in stone — is that it is a well-known pattern to just use the command object itself to carry the fields for the command. Instead of having a whole DTO inside the command, you could actually put the fields from the DTO directly on the command, and then you just validate the command itself — the request itself. The object coming over into the handler would just be this object; you wouldn't have to say request.LeaveAllocationDto.ThisField — you'd just say request.ThisField, request.ThatField, et cetera.
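A quick sketch of that alternative, where the command carries the fields directly instead of wrapping a DTO; property names are illustrative:

```csharp
using MediatR;

// The command itself carries the data needed for the operation,
// so the handler reads request.LeaveTypeId rather than
// request.LeaveAllocationDto.LeaveTypeId.
public class CreateLeaveAllocationCommand : IRequest<BaseCommandResponse>
{
    public int LeaveTypeId { get; set; }
    public int NumberOfDays { get; set; }
    public int Period { get; set; }
}
```

A validator would then target `CreateLeaveAllocationCommand` directly, which removes one layer of nesting at the cost of the DTO reuse discussed earlier.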
So there are a number of options available to you, but I'm going to stick to the DTOs. With all that said and done, by way of conclusion: the core — or the Application project as we have it — contains the core functionality of the application. You see that everything is abstract at this point; we're going to move on to the next module, where we start putting in some real code, some meat, into the repositories and any other logic that is applicable. We looked at how the mediator pattern works, which promotes loose coupling, coupled with the CQRS pattern, where we know exactly which file is doing what: this handler is going to handle this behavior, this scenario, and we can expect this particular response because of that relationship we have implemented. We also looked at a features-based layout which, in my opinion, helps you to see that, okay, for the feature related to leave types, you can find everything inside that one folder — to me that helps with the layout. You may have other ideas, but this is my recommendation. And then we looked at validation using FluentValidation. Those are the things that we've looked at in this module. So when we come back, we will definitely kick it up a gear and start putting in some more functionality. 15. Section Overview: Hey guys, welcome back. In this lesson, we're going to start the module where we set up our infrastructure layer. You're probably wondering, okay, what is the infrastructure layer? Well, one, it's going to sit inside this solution folder called Infrastructure; and two, it holds the projects in which we're going to actually implement all of the abstractions that were defined in the core section. So we'll be setting up our database context — we'll be using Entity Framework Core as our ORM to communicate with our database under our application — and in this layer is where everything gets settled.
The logging, the repositories — we are going to put in the actual meat of the application. So let us get started. Let's create two new class libraries: one called HR.LeaveManagement.Infrastructure and one called HR.LeaveManagement.Persistence. Now, the Persistence project, or persistence layer, will deal with our communication with the database — so that's where our database context, our EF Core libraries and references, all of that will sit. The Infrastructure project is where our other implementations will sit. Between these two we'll be implementing the repositories, we'll be setting up any other third-party implementations that are needed, and we'll also set up services to be bootstrapped into the IServiceCollection — the dependency injection container for ASP.NET Core. Remember that these class libraries need to target .NET Standard 2.1, and you've already been through creating class libraries in this course, so you can follow the same steps. When we come back, we'll start off with setting up Entity Framework Core in our persistence layer. 16. Adding Entity Framework Core: All right, so we're back and we're going to be setting up our Persistence project. Let us get started by adding a reference to our Application project. So we have the Domain project, which has all of the entities, and the Application project has a reference to the Domain project. Our Persistence project is going to have a reference to the Application project, so we can just tick Application and click OK — that's one dependency down. We're also going to go over to NuGet packages; we're going to look for Entity Framework, but the one that we're going to be getting is Microsoft.EntityFrameworkCore.SqlServer. This one, when it comes down, will come with all of the dependencies that we need, so we can just go ahead and install the latest stable version.
And still in NuGet, let us just go ahead and search for "configuration extensions" and install Microsoft.Extensions.Options.ConfigurationExtensions. This will come in handy when we are setting up some of our configuration. After we've done all of that, we can go ahead and create a new class, and I called mine HRLeaveManagementDbContext. You can call it something else — you could probably just call it LeaveManagementDbContext — that's fine, but it will be inheriting from DbContext. Now, DbContext comes to us courtesy of Microsoft.EntityFrameworkCore, so we can go ahead and make that reference, and then we can make our DbContext aware of the different entities that we have defined. If you're familiar with EF Core, then you know exactly what I mean. In this file we have a few things. We have a constructor where we initialize our DbContext with a parameter of DbContextOptions of its own type, named options, and we pass that on to the base, which is DbContext. Then we have our DbSets for our different entities, so I can just go ahead and include the missing references for those and everything should be all green. Next up, we're going to override a few methods. The first one that we're overriding is OnModelCreating — actually, it's quicker to just start typing "override", press Space, and then you'll see all of the options. This method gets executed whenever the database model is being generated, so we can set up certain rules here. If we wanted to seed the database, we could do it from here, but we're not ready for seeding, at least not yet. The rule that we're going to set up for sure, at least right now, is to apply configurations from assembly: we're going to say typeof(HRLeaveManagementDbContext).Assembly.
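Putting those pieces together, the context class described above looks roughly like this (the entity set names follow the course's domain):

```csharp
using Microsoft.EntityFrameworkCore;

public class HRLeaveManagementDbContext : DbContext
{
    // Options (connection string, provider) come from the DI registration.
    public HRLeaveManagementDbContext(
        DbContextOptions<HRLeaveManagementDbContext> options)
        : base(options)
    { }

    public DbSet<LeaveRequest> LeaveRequests { get; set; }
    public DbSet<LeaveType> LeaveTypes { get; set; }
    public DbSet<LeaveAllocation> LeaveAllocations { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Picks up any IEntityTypeConfiguration<T> classes (including
        // seed data configurations, later) defined in this assembly.
        modelBuilder.ApplyConfigurationsFromAssembly(
            typeof(HRLeaveManagementDbContext).Assembly);
    }
}
```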
So that is all we're putting in OnModelCreating, at least for now. Like I said, if we wanted to seed the database with special configurations for tables, we could always put them inside this method so they get applied whenever the database model is generated. All right, another thing that we want to override for sure is SaveChanges. I'm going to choose the SaveChangesAsync overload with the CancellationToken as its parameter, and I'm going to outfit it with some beautiful, handy code that's going to allow us to do some audit logging automatically. Remember that we had set up a base entity that gave each entity fields like CreatedBy, DateCreated, LastModifiedBy, and so on. So I'm going to set up a foreach loop to go through each entry in ChangeTracker.Entries<BaseDomainEntity>() — we're doing an implicit cast to the base domain entity, and I can just go ahead and include the using statement for it. Then, for each one, whenever something is about to be saved, I want to set entry.Entity.LastModifiedDate — we're dealing with the date here — to DateTime.Now. Then I do a check: if the entry state is equal to EntityState.Added, meaning it's being added, it's a new record, then we also want to set DateCreated to DateTime.Now. So we always set LastModifiedDate whenever something is being changed, but only when it's being added do we also set DateCreated. That's about the most basic code to implement auditing that you may ever find. Once again, this is automated — every time we hit save changes it does all of this — and then it just calls the base SaveChangesAsync method in the background.
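The audit override described above can be sketched like this inside the context class (property names on `BaseDomainEntity` follow the narration):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Inside HRLeaveManagementDbContext:
public override Task<int> SaveChangesAsync(
    CancellationToken cancellationToken = default)
{
    // Entries<BaseDomainEntity>() filters the change tracker to entities
    // deriving from our audited base type.
    foreach (var entry in ChangeTracker.Entries<BaseDomainEntity>())
    {
        // Stamp the modification date on every tracked change.
        entry.Entity.LastModifiedDate = DateTime.Now;

        // Only brand-new records get a creation date.
        if (entry.State == EntityState.Added)
        {
            entry.Entity.DateCreated = DateTime.Now;
        }
    }

    // Hand off to EF Core's normal save pipeline.
    return base.SaveChangesAsync(cancellationToken);
}
```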
So that's it for adding Entity Framework to our persistence layer. When we come back, we will start working on some implementations. 17. Implementing Persistence Layer: Welcome back, guys. In this lesson we're going to be implementing our persistence layer. When I talk about implementing the persistence layer, I'm specifically referring to our generic repository — right now we only have the abstraction, with no code to back it up. So let's go ahead and do that. We add a new folder, and I'm going to call this folder Repositories. Then in this folder we're going to add a class that will represent the implementation of the generic repository relative to type T. So we generate the class; as usual we make it public, then we make it generic on T, and then it inherits from IGenericRepository<T>, which is also relative to T, where T : class. All right, go ahead and include any missing references and then allow it to implement the interface. So I'll go ahead and write the code, and then we'll go through it together. Now, before we continue, I realized that I was overzealous with the copying and pasting when we were setting up the interface. So, just as a small correction: for the update and the delete, we can remove the T — we don't have to return anything when we do an update or a delete, so those two should return only Task. You can go ahead and make that change, and that, of course, will affect our implementation. So our generic repository starts off with a constructor that accepts a parameter of type LeaveManagementDbContext. Of course, the DbContext is basically our connection to the database, so we do need it in our repository in order to carry out our operations. Our Add method starts off by awaiting a DbContext call to AddAsync, where it just passes in the entity; EF Core is intelligent enough to infer which entity is being passed in relative to all the DbSets that have been defined in our DbContext.
And by DbSet I mean these. So whatever data type is passed in, it will know if it is one of the DbSets that it recognizes. So we just go ahead and add the entity, save the changes, and then return that entity. For the Delete — once again, remove the type parameter from Task — all it's going to do is look in the DbContext, find the Set<T> relative to the T it has been given, remove that entity from that set, and then save changes. You'll notice that nothing really happens until you save changes; that is the final commit to the database. We have the Exists method, where we get an ID for a record. What it does is look for the entity using the local Get method, which we'll look at in a bit, and then return whether it is not equal to null — so when it's not null, then yes, it exists; otherwise, false. Of course, in the Get method, what we're doing is returning an awaited call to _dbContext.Set<T>().FindAsync(id); once again, we're looking in the specific set and finding the record by its ID. For our GetAll, which returns IReadOnlyList<T>, all we're doing is looking in the set and calling ToListAsync on it — we're just getting everything from that set into a list and returning it. For the Update, what we're doing is setting the entry's entity state to Modified, so that EF Core will start tracking it, and then we go ahead and save changes. So that is pretty much it for our implementation of the generic repository. All right, so now that we have our generic repository implemented, we need to implement our specific repositories, so we can go ahead and add those. Starting with the leave type repository, we're going to look at what the implementation looks like. So I've created LeaveTypeRepository.
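The generic repository walked through above can be sketched end-to-end like this (method names follow the interface as described; the update/delete signatures reflect the `Task`-only correction):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class GenericRepository<T> : IGenericRepository<T> where T : class
{
    private readonly LeaveManagementDbContext _dbContext;

    public GenericRepository(LeaveManagementDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public async Task<T> Add(T entity)
    {
        // EF Core infers the DbSet from the runtime type of the entity.
        await _dbContext.AddAsync(entity);
        await _dbContext.SaveChangesAsync();
        return entity;
    }

    public async Task Delete(T entity)
    {
        _dbContext.Set<T>().Remove(entity);
        await _dbContext.SaveChangesAsync(); // nothing is committed until here
    }

    public async Task<bool> Exists(int id)
    {
        var entity = await Get(id);
        return entity != null;
    }

    public async Task<T> Get(int id)
    {
        return await _dbContext.Set<T>().FindAsync(id);
    }

    public async Task<IReadOnlyList<T>> GetAll()
    {
        return await _dbContext.Set<T>().ToListAsync();
    }

    public async Task Update(T entity)
    {
        // Mark the whole entity as modified so EF Core tracks the change.
        _dbContext.Entry(entity).State = EntityState.Modified;
        await _dbContext.SaveChangesAsync();
    }
}
```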
It is a public class called LeaveTypeRepository, inheriting from the implementation of the generic repository relative to LeaveType — GenericRepository<LeaveType> — and then we go ahead and have it also inherit from ILeaveTypeRepository. The red lines here indicate a few things. One, we need to bring in the missing namespace. Two, we need to put in the actual implementation — but this particular interface didn't have any additional methods, so that's fine for now. And three, it's complaining because we need to have the DbContext present for the generic repository. Remember that when you're inheriting from something that has a dependency, you have to provide that dependency in the inheritor as well. So that's a simple solution: we add the constructor, do our dependency injection for the DbContext, of course, and then also let the base know that it can use the DbContext that is being injected. And that's it for the leave type repository. The contract didn't have any additional methods, so there's nothing extra to implement, and because it's inheriting from the generic repository relative to LeaveType, by using this implementation we have access to all of the methods that were defined there. Now let's look at some more complicated ones. Let's look at LeaveRequestRepository, which has the same dependency requirement, so we have to make sure that we inject the DbContext and pass it on to the base. Go ahead and include any missing namespaces, and then we have to implement the interface. This interface actually had a few extra methods: we had ChangeApprovalStatus, we had GetLeaveRequestsWithDetails, and we had another one to get a single leave request with details by ID. A few things are happening in this particular one. The implementations here will be different from the generic ones, because these are specific to some leave-request-related operations. The implementations are as follows.
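The minimal specific repository described here — no extra methods, just forwarding the injected context to the base — can be sketched as:

```csharp
// A specific repository with no extra members: it exists so callers can
// depend on ILeaveTypeRepository, while all CRUD behavior comes from the
// generic base.
public class LeaveTypeRepository : GenericRepository<LeaveType>, ILeaveTypeRepository
{
    private readonly LeaveManagementDbContext _dbContext;

    public LeaveTypeRepository(LeaveManagementDbContext dbContext)
        : base(dbContext) // the base needs the same dependency
    {
        _dbContext = dbContext;
    }
}
```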
For ChangeApprovalStatus, we're getting parameters of a leave request and an approval status. We set the Approved property of the leave request to whatever value came over in the parameter, and then we set _dbContext.Entry(leaveRequest) — you see, this time it's not generic, it's very specific, because we're in this specific repository — with its entity state set to Modified for that leave request, and then we save changes, so EF Core will start tracking it and save the change accordingly. Now, you see that this is a very specific operation, as opposed to the general update, where we can't account for what is being changed, so we just set everything to Modified and allow EF Core to track it. Once again, these are just ideas, because maybe your business rules are far more complicated, or the operation you have to carry out inside the repository is far more complicated than just setting one field — so you may need a specialized function for that. Now, for the GetLeaveRequestsWithDetails method, all we're really doing is querying the leave requests table: var leaveRequests = await _dbContext.LeaveRequests, and then we're including the leave type. We don't necessarily always want to include the leave type — we don't know under what circumstances we may need it. So we have this method for when the related details are needed alongside the record, while the regular generic one, which doesn't include anything, just returns data from the table. This time we're including details — it could be just this leave type, or it could be as many Includes as you may need — and then we push them all to a list and return. For the final one, where we're only getting one leave request with details based on the ID, we're doing something similar, except after the Include we chain a FirstOrDefault where the record's ID matches the ID passed in.
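The three leave-request-specific methods can be sketched like this (the `Approved` property and nullable status follow the course's domain model):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Inside LeaveRequestRepository:

public async Task ChangeApprovalStatus(LeaveRequest leaveRequest, bool? approvalStatus)
{
    // A targeted update: only the approval flag changes.
    leaveRequest.Approved = approvalStatus;
    _dbContext.Entry(leaveRequest).State = EntityState.Modified;
    await _dbContext.SaveChangesAsync();
}

public async Task<List<LeaveRequest>> GetLeaveRequestsWithDetails()
{
    // Add as many Include() calls as the scenario needs.
    var leaveRequests = await _dbContext.LeaveRequests
        .Include(q => q.LeaveType)
        .ToListAsync();
    return leaveRequests;
}

public async Task<LeaveRequest> GetLeaveRequestWithDetails(int id)
{
    // FindAsync cannot be combined with Include, hence FirstOrDefaultAsync.
    var leaveRequest = await _dbContext.LeaveRequests
        .Include(q => q.LeaveType)
        .FirstOrDefaultAsync(q => q.Id == id);
    return leaveRequest;
}
```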
Now, we're doing FirstOrDefault here, as opposed to the FindAsync that we did in the generic repository, because of how these methods work: you can't do an Include when you do a Find — it just doesn't work. So when you have to do an Include, you have to use First or Single (or their OrDefault variants), whichever one you feel more comfortable with, and then we return the leave request that has been found. We can jump over to the leave allocation repository and see that it's a relatively similar implementation, with similar methods. I put these methods in here just to point out that you can have custom methods in these repositories — you may not necessarily need them in your application; use them as you need them. I also retained that red line in case you're getting it too: all you need is to include that reference to EF Core, and then that is good. All right, so we have one major activity left, and then we're done with the persistence layer: to set up the registration class for Persistence. Just like the Application project had its services registration, Persistence will definitely need one. So go ahead and add this new class — I'm calling it PersistenceServicesRegistration. It should be a public static class, and in it we will have a method that returns IServiceCollection; we're calling it ConfigurePersistenceServices. Of course, as usual, go ahead and add any using statements that are needed. Inside this method, what we're going to do is write code to wire up our DbContext and add our repositories to the service collection. And this is the full method — of course with the red lines, because I always love to show you why red lines may exist and how you can solve them — but this is what this method needs to look like, so you can go ahead and start including the using statements that are missing.
So firstly, we have services.AddDbContext, where we pass in the type that is our DbContext — LeaveManagementDbContext — and options.UseSqlServer. Remember that in the DbContext's constructor we took DbContextOptions; those options are really coming from whatever options are defined here in the registration, so we go ahead and configure EF Core to use SQL Server. The configuration comes in because we are going to be passing over the IConfiguration from the client application that calls this services bootstrapper, so we will definitely need to pass that. Also, note a naming conflict: IConfiguration needs to be the one from Microsoft.Extensions.Configuration, not AutoMapper — so be very careful with that one. We can go ahead and include that namespace. Then, for the repositories, we have AddScoped — we're adding all of them as scoped. But notice that for the generic one, we're adding typeof(IGenericRepository<>) with open angle brackets, comma, typeof(GenericRepository<>) with open angle brackets, while every other one is the interface-to-implementation pair in angle brackets, without the typeof calls. I'll go ahead and add any missing references here. And — oh, I'm sorry, I wrote the code wrongly here for this one: for the generic registration, AddScoped takes open parentheses and no angle brackets on AddScoped itself. Let me just correct that so you can see. There we go: AddScoped, open parenthesis, typeof(IGenericRepository<>), comma, typeof(GenericRepository<>). Once again, those are in parentheses, while the others have the angle brackets surrounding the interface-and-implementation pairing. After all of that, of course, we return services.
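Putting the whole registration together, it looks roughly like this; the connection string name is an assumption for illustration, as the narration defers it to the calling application's settings:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration; // not AutoMapper's IConfiguration!
using Microsoft.Extensions.DependencyInjection;

public static class PersistenceServicesRegistration
{
    public static IServiceCollection ConfigurePersistenceServices(
        this IServiceCollection services, IConfiguration configuration)
    {
        services.AddDbContext<LeaveManagementDbContext>(options =>
            options.UseSqlServer(
                // Key name is illustrative; it lives in the client app's settings.
                configuration.GetConnectionString("LeaveManagementConnectionString")));

        // Open generic: typeof(...) pair in parentheses, no angle brackets
        // on AddScoped itself.
        services.AddScoped(typeof(IGenericRepository<>), typeof(GenericRepository<>));

        // Closed pairs: interface and implementation in angle brackets.
        services.AddScoped<ILeaveTypeRepository, LeaveTypeRepository>();
        services.AddScoped<ILeaveRequestRepository, LeaveRequestRepository>();
        services.AddScoped<ILeaveAllocationRepository, LeaveAllocationRepository>();

        return services;
    }
}
```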
So just to backtrack a bit on what this connection string will be: we don't have an appsettings file yet, so it is going to live in the settings of the application that calls this method. When that application calls ConfigurePersistenceServices, it is expected to pass in that configuration object, which then gives access to the app settings, and we will be able to get that connection string to pass on to our DbContext. That is what that whole line, or that whole section, is doing. Now, on the purpose of using scoped: there are three injection lifetimes available to us — AddScoped, AddSingleton, and AddTransient. Singleton means that one instance of the service will exist throughout the entire application. This can be dangerous depending on the nature of the service, though it could be useful — for instance, a logging service could reasonably be one instance for the entire application. You probably don't want something like that for your database transactions, because when you have multiple database transactions, you want those to happen in silos. That's why we use scoped, which means that for the lifetime of a request — I am requesting something, I am writing to the database, I am calling your service — AddScoped will provide one connection to the database, or one instance of the leave type repository or any of these repositories, as it is called on during that request. Once the request is finished, it will no longer be in memory, which reduces the chances of conflict. And then AddTransient means that every single time it is requested, it will always create something new, which means you might end up with more instances than you really need to complete a request — which could also lead to conflict. So use them appropriately — but within this context, AddScoped is the one that we want for our database-related operations.
Now, with all of that done, let us do a quick build to ensure that we are right on track, that we have no errors, and that we have successfully built our projects. When we come back, we'll start implementing our infrastructure. 18. Add Infrastructure Project (Email Service): Welcome back, guys. In this lesson, we'll be setting up our Infrastructure project. Our Infrastructure project is pretty much where the implementations for all our third-party services will sit. In this lesson, we'll be implementing an email service with the help of SendGrid, and we're going to be setting it up in the Infrastructure project. Now, the first thing that we need to do is set up an abstraction for the email sender — the same way that we had abstractions for our repositories, we'll have an abstraction, a contract, for the email service. On that note, I was looking back and I realized that the folder structure here is not very intuitive for the long run, so I have to refactor here, because I have Persistence, and then inside it I have Contracts. But Contracts is the more universal term, because you can have contracts for the persistence layer and contracts for the infrastructure layer, and so on. So I'm going to have to flip these folders around, which won't be too hard: the folder currently called Persistence becomes Contracts, and then the Contracts subfolder becomes Persistence. Now, this is going to have a ripple effect on all of the namespaces, because we do want our namespaces to be completely representative of what they really are. We had Persistence.Contracts; now we'll have to change that to Contracts.Persistence, and that is going to mess with all of the namespace references. I'm sorry about that, but it's a refactor that is definitely required for us to have an intuitive folder structure. So after doing all of that, if we do a build, it will point out all of the bad namespace references throughout our project.
A quick way to go ahead and fix those bad namespaces is to search for "Persistence.Contracts" everywhere and replace it with "Contracts.Persistence". You can do that and meticulously go through and replace each occurrence — just make sure you don't overwrite anything that might be important. Once you have exhausted that search, you can go ahead and do another build just to make sure that you no longer have any build errors. Once that is done, I'm just going to close all the tabs that were opened in that operation, and then we can go again. So in Contracts we want a new folder — I'm calling this one Infrastructure — and inside this folder we're going to add a new contract, or interface, and I'm calling it IEmailSender. Now, IEmailSender is going to be a public interface with a method called SendEmail that takes a parameter of type Email, called email. Now we need to define what this Email looks like, so I'm going to create another folder for models — let me just add that folder, calling it Models — and inside this Models folder we're adding a new class that we're calling Email. This is going to be our template, our model, for what any email should look like, and this Email will have the typical properties of any email: the To address, the Subject, and the Body, all string properties. So now that we have that model defined, we can go ahead and add the using statement and have that sorted out. Before we move on from our model definitions, we have another model that we need, and this one is going to be for the email settings. It's going to host properties for an API key, a from address, and a from name, all of which are strings. Now that we have defined a few things, with our email sender you're probably wondering: okay, why do I need email? So let me put it into context, because we just started building out the email service with no real context.
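The contract and the two models described above can be sketched like this; the `Task<bool>` return type on `SendEmail` is an assumption (the narration only names the method and its parameter):

```csharp
using System.Threading.Tasks;

// Contracts/Infrastructure — the abstraction the Application layer depends on.
public interface IEmailSender
{
    Task<bool> SendEmail(Email email);
}

// Models — the shape of any outgoing email.
public class Email
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

// Models — settings the concrete sender will be configured with.
public class EmailSettings
{
    public string ApiKey { get; set; }
    public string FromAddress { get; set; }
    public string FromName { get; set; }
}
```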
When somebody applies for leave, or maybe their approval status changes, something like that, you'd want to notify them that this action has taken place. Now, where do actions take place? Well, relative to our features, we have our handlers. So when you create a new leave allocation, okay, maybe you don't need to send an email, but a leave request would warrant an email being sent whenever one is created or updated. All right, so let us look at the create leave request handler, where we can simply inject our IEmailSender. I'll just do that using IntelliSense, quickly initialize the field, and then, given our naming convention, rename it with the underscore. Inside our handler, after everything has been successful, before we return and end the whole operation (so the handler is still going on up to this line), we'll prepare an email object and then try to send it off, and we'll deal with any exception. So let's look at this slowly. var email is equal to a new Email, which is the model we just defined. We have the To; I just put in a fictional email there, and when we get to user authentication and having actual users submit requests, we'll look at how we get the actual email addresses. We have the Body, and the body of this email just says "Your leave request for", and I'm using interpolation to put in the request's start date to end date, "has been submitted successfully"; and the Subject is "Leave Request Submitted". If you want, you can further adjust the date by putting on a colon D. Traditionally you would say ToString and then specify the format, but when we're using interpolation in this version of C#, we can just put a colon and the format string at the end of the expression, and that will take care of it for us. So D would give you the long date: Monday, June whatever, this year. All right.
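The tail end of the handler being described might look roughly like this. The fictional address, the exact message wording, and the property path on the command (LeaveRequestDto with StartDate and EndDate) are placeholders consistent with the conventions used so far, not a verbatim copy of the course code:

```csharp
// Inside CreateLeaveRequestCommandHandler.Handle, after the
// leave request has been saved successfully:
var email = new Email
{
    To = "employee@org.com",   // placeholder until real user emails are wired up
    Body = $"Your leave request for {request.LeaveRequestDto.StartDate:D} " +
           $"to {request.LeaveRequestDto.EndDate:D} has been submitted successfully.",
    Subject = "Leave Request Submitted"
};

try
{
    await _emailSender.SendEmail(email);
}
catch (Exception ex)
{
    // If the email fails, the leave request itself was still created,
    // so log and swallow the exception instead of failing the operation.
}

return leaveRequest.Id;
```

The `:D` format specifier inside the interpolated string is shorthand for `.ToString("D")`, the long date pattern.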
So that's it for our handler. Now that our whole application knows about IEmailSender, we need to work on the implementation, and that implementation will live in our infrastructure project. To get started on that, create a new folder inside the infrastructure project called Mail, and then create a class in there called EmailSender. This class, which of course needs to be public, will implement IEmailSender. When we go ahead, you'll realize that it needs a reference to the Application project, so we need to go ahead and add that, and with that reference added, we can go ahead and implement the interface. Before we move any further, we need to jump over to NuGet; we need a few packages. One is the same configuration extensions package that we used in the other projects. A quick way to manage common packages is to right-click the solution and say Manage NuGet Packages for Solution. We already have some packages installed in some projects that we need in others, like this one, the configuration extensions; I can click on it, see that it's already in the persistence project, and since I want it in the infrastructure project also, I can tick it and click Install. That reduces the amount of time you spend on NuGet hunting for the same package over and over. The next package that I am interested in is SendGrid, so you can jump over to the Browse tab, type in SendGrid, and get the latest version; click it, make sure you're ticking the correct project, and click Install. After that is installed, we're going to start wiring up this class. One thing that I'm going to do is set up a private field of type EmailSettings, name it _emailSettings, and make it read-only; I am initializing it in the constructor from an IOptions<EmailSettings> parameter. Now let me explain what this is; it's similar to how we set up our database.
We said that we have an appsettings file that will be providing the connection string when the time comes. In the same way, we can pass over chunks of options, of configuration, from the appsettings file, which can then be deserialized into a whole object for us. So what we're doing is saying: take the EmailSettings JSON section from the app settings and send it over as this parameter; then we can inject it in and have it as our local field in our class. This dependency injection is so cool because it makes everything so loosely coupled and malleable. So let us continue with setting up our SendEmail method. The first line that we're going to have is a client variable that initializes a SendGridClient, so we'll just go ahead and add any using statements that are missing. Then after we get the client, we're going to have to get the subject, the to, and the email body from our Email object, but the to and the from especially need to be a special data type. So I'm going to say var to is equal to new EmailAddress, and EmailAddress here is coming from SendGrid; we pass in email.To. Then we do the same thing for the from: var from is equal to new EmailAddress, and in that one we put a bit more in the definition, where the email is coming from _emailSettings.FromAddress and the name would be _emailSettings.FromName. All right, so we have the from and the to defined. After doing all of that, we say the message is equal to MailHelper; MailHelper is a static class given to us by SendGrid that allows us to create a single email. There we go.
And you see you have different options: multiple recipients, multiple emails to multiple recipients; we're just doing a single email in this situation. But you can see that there's send email to multiple, send email with an attachment, et cetera, so you could define different methods inside IEmailSender; you're not confined to just this one that we're doing. All right, so CreateSingleEmail, and then we're going to fill this out according to how the parameters are stated in the method signature. The from comes first, then we say to; we have those. Then the subject, and I can say email.Subject there; and the plain-text content would be the email body. You can see that they have the plain-text content versus the HTML content, so based on how you encode your email, you could put the email together accordingly, right? But for now, I'm just going to say email.Body for both of those parameters. Now that we have the message object formulated, I'm going to say var response is equal to client.SendEmailAsync(message); this is where we actually send off the message that we just created. Of course, we're getting that red line because we need to await it and make the method async. Once that is done, we need to return a Boolean based on the response. I can say return response.StatusCode is equivalent to System.Net.HttpStatusCode.OK, and I believe that SendGrid also returns Accepted, so I'm going to check both: I'm returning whether it's either OK or Accepted. If it's either one of them, that's true; if it's neither of them, it's false, and the caller will know whether the email was successful or not. But the whole point of this, once again going back to our create handler and the way we wrapped the call in a try/catch, is that the API, or whatever code is calling this handler, should not be interrupted if everything else succeeded. The most important part is creating the leave request; if the email fails, that doesn't mean the program should crash. That's why we're catching the exception, but we're not rethrowing it up through the application. Now, to top it all off, we're going to have an infrastructure services registration file, just like every other project before it. We add the InfrastructureServicesRegistration class to the infrastructure project, and it has the same form that all the other registration classes have had. Once again, IConfiguration is being injected, and the namespace that we need is Microsoft.Extensions.Configuration, not AutoMapper; I bring that up because it has caught me more than once, sorry. Then we say services.Configure with EmailSettings: this is us stating that we want EmailSettings (I'll just bring that one in) to be bound from a configuration section. When we pass the options, we're saying: give me a chunk of the configuration that looks like the EmailSettings object. What we're really going to be saying is configuration.GetSection, and it will look in the app settings for a section with the name we give it, which we'll call EmailSettings when the time comes. That way, it knows we will formulate that section to look just like what we're expecting the class to look like, so it will automatically deserialize into that strongly typed class. Next up, we say services.AddTransient. Now remember, I was talking about the different lifetimes we have: singleton, scoped, and transient. Transient means: every time I get called, I am going to be a brand-new instance. So we're saying that every time IEmailSender gets requested, give me a brand-new instance of the EmailSender class. All right, so go ahead and include any missing references.
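Assembled from those pieces, the EmailSender implementation might look roughly like this. It is a sketch against the SendGrid v3 client; double-check member names against the package version you install:

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.Extensions.Options;
using SendGrid;
using SendGrid.Helpers.Mail;

namespace HR.LeaveManagement.Infrastructure.Mail
{
    public class EmailSender : IEmailSender
    {
        private readonly EmailSettings _emailSettings;

        // IOptions<EmailSettings> is populated from the "EmailSettings"
        // section of appsettings.json.
        public EmailSender(IOptions<EmailSettings> emailSettings)
        {
            _emailSettings = emailSettings.Value;
        }

        public async Task<bool> SendEmail(Email email)
        {
            var client = new SendGridClient(_emailSettings.ApiKey);

            var to = new EmailAddress(email.To);
            var from = new EmailAddress
            {
                Email = _emailSettings.FromAddress,
                Name = _emailSettings.FromName
            };

            // Plain-text and HTML content are the same here; split them
            // if you later compose HTML emails.
            var message = MailHelper.CreateSingleEmail(
                from, to, email.Subject, email.Body, email.Body);

            var response = await client.SendEmailAsync(message);

            return response.StatusCode == HttpStatusCode.OK
                || response.StatusCode == HttpStatusCode.Accepted;
        }
    }
}
```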
And then we can close that one off and return services. So that's it for setting up our infrastructure, at least at a very basic level. Once again, any contracts that we have to define for a third-party operation will be defined in our application, but implemented in our infrastructure. 19. Create and Configure Application API: Welcome back guys. In this lesson, we will be setting up our API project. So we have the foundation already, in the form of the infrastructure, the persistence project, our domain, and our application project. The thing is that these will evolve with your application; these layers, these projects, are not set in stone, but we have at least set the foundation to build an API on top of them, one that will actually be in charge of ferrying information between any client applications and these layers for us. So let us get started setting up this API project. We're going to place it in our solution folder called API. We go to Add New Project, and to find it easily you can just search the list for API. We're using the C# ASP.NET Core Web API project template, and we're calling this one HR.LeaveManagement.API. Go ahead and hit Next. We're using .NET 5, or the latest version, because .NET is very backward compatible, so most, if not all, of what we're doing in this course will be compatible with future versions of .NET. We continue with the settings and create. Now, this API project is pretty bare bones; it does give us some sample code in the form of this WeatherForecast controller, which we won't necessarily need. Before we move forward, let's add the dependencies that this API project will have, and those dependencies include our application project, our infrastructure project, and our persistence project: pretty much any of the projects that we set up those register-services methods in. Those are going to be dependencies for the API, because the API needs to be able to register these services in its codebase.
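For reference, the infrastructure registration extension method the API will be calling, written at the end of the previous lesson, might look like this sketch:

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace HR.LeaveManagement.Infrastructure
{
    public static class InfrastructureServicesRegistration
    {
        public static IServiceCollection ConfigureInfrastructureServices(
            this IServiceCollection services, IConfiguration configuration)
        {
            // Bind the "EmailSettings" section of appsettings.json to the
            // strongly typed EmailSettings class.
            services.Configure<EmailSettings>(
                configuration.GetSection("EmailSettings"));

            // Transient: a brand-new EmailSender every time IEmailSender is requested.
            services.AddTransient<IEmailSender, EmailSender>();

            return services;
        }
    }
}
```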
So we can just go ahead and add these project references, and once we've done that, we can continue our configuration. When we talk about creating and configuring: we've created, now we need to configure. In terms of the things that have dependencies on what we are going to be setting up in the API, we're talking about the email settings. Remember that in our infrastructure we implemented the EmailSender, and it is depending on EmailSettings coming over through IOptions<EmailSettings>, which we already discussed is going to be coming from our appsettings file. Another thing that we need to set up is for our persistence layer: our DbContext is reliant on a connection string that will be coming over, once again, from the API. So our appsettings file here will have blocks, or sections, with those definitions. Let's work on the first one, and that is our connection string. Above the logging definition here, I'm just going to press Enter and then put in my connection strings section: ConnectionStrings, and inside it the same name that we referred to in the persistence registration with GetConnectionString, which is LeaveManagementConnectionString. GetConnectionString knows to look in appsettings.json, look for ConnectionStrings, and get that one by name. Its definition here: I'm using the MSSQLLocalDB server, so you have to type it just like how you see it on the screen (that is built into Visual Studio); Database is equal to, and I'm calling mine HR_LeaveManagement_DB, though you can call it something else if you wish. Other than that, each part is semicolon separated, and then we have Trusted_Connection=True and MultipleActiveResultSets=true. So that is your connection string.
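In appsettings.json, the section just described would look something like this (the server is the Visual Studio LocalDB instance; the database name is the one chosen above, so adjust to taste):

```json
{
  "ConnectionStrings": {
    "LeaveManagementConnectionString": "Server=(localdb)\\MSSQLLocalDB;Database=HR_LeaveManagement_DB;Trusted_Connection=True;MultipleActiveResultSets=true"
  }
}
```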
Now for email settings, we're going to have a new section called EmailSettings, and then we have that ApiKey. We need to go to SendGrid to get our API key, so for now I'm putting in a placeholder. Then we have the FromName, which is who the email will seem to come from, and the FromAddress, which will be no-reply@leavemanagement.com, or no-reply at hr.com, whatever it is that you want there. Now, in case you're not very familiar with what SendGrid is: it is a Twilio product that gives us access to a powerful email API system, and you can just start for free. It's free up to a certain point, of course, and you don't want to abuse it. But once you sign up, they'll give you that API key, which is what you can stick right there in the EmailSettings in appsettings.json. We can do that later on; of course, I don't want you to see my key, because the key is private, so you want to make sure that you maintain that privacy. So now that we have our appsettings.json file at least outfitted with the minimum for the registration of our services, let's jump over to Startup.cs. The Startup.cs file is basically the container that says: these are all the dependencies that my application needs to know about, and I am making them accessible through dependency injection. All right, we've been talking about dependency injection for a while now, and each one of these registration files is basically allowing us to register these as dependencies in the application. So what we need to do is let our API know that these are dependencies it should know about. When we talk about IConfiguration, you notice that it's being passed into, well, injected right into, the Startup, and that allows us to then pass that configuration object into other parts of our registration. We would have seen it in the infrastructure, if I'm not mistaken; there we go, IConfiguration. That method needs the configuration, and we do need it for the persistence also. So enough talk, let's get into the action. In our ConfigureServices method, we want to include these three lines, which are ConfigureApplicationServices, ConfigureInfrastructureServices, and ConfigurePersistenceServices, all of which we know are coming from our service registration files in the different projects. So here is the catch-all: just go ahead and add any missing dependencies and namespaces for each of these. Please note also that we have to pass in that configuration object. We injected it into the Startup, and then we can pass it along so that when those methods are being called in their respective projects, they have the necessary tools to access what they need from our app settings. Another thing that we want to set up in this API is what we'll call a CORS policy. Our CORS policy basically determines how the API allows other clients to interact with it. Right now this policy is pretty open, where it's saying services.AddCors, and the builder allows any origin, allows any method, allows any header. Then in our Configure method, we're going to go down, and I'm just going to stick it here between UseAuthorization and UseEndpoints; this is letting it know that it should use that policy throughout. Now, we've come a good way with all the configurations. We have one more step left for this lesson, and that is to generate our database. We've done all of this setup; we have the persistence layer, and now we actually have it wired up. Now we're actually passing over the connection string, so it knows, when it exists, which server it should exist on and what the database name should be. So let us get around to configuring our database: we need to run migrations, and I'm just doing a quick build, which was successful.
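The Startup.cs wiring being described might look roughly like this. Only the relevant parts are shown, and the CORS policy name is an assumption:

```csharp
public class Startup
{
    private readonly IConfiguration _configuration;

    public Startup(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        // Each layer registers its own dependencies via its extension method.
        services.ConfigureApplicationServices();
        services.ConfigureInfrastructureServices(_configuration);
        services.ConfigurePersistenceServices(_configuration);

        // Wide-open CORS policy; tighten this for production.
        services.AddCors(o => o.AddPolicy("CorsPolicy", builder =>
            builder.AllowAnyOrigin()
                   .AllowAnyMethod()
                   .AllowAnyHeader()));

        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseAuthorization();
        app.UseCors("CorsPolicy");   // between UseAuthorization and UseEndpoints
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```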
So before we move on to the migrations, we need to get access to our EF Core tools. Go to the NuGet package manager and search for "tools"; sure enough, Microsoft.EntityFrameworkCore.Tools is the second one in the search results, so I'm going to go ahead and install that in my API project. With that set up, I'm also going to set the API project as my startup project; I can do that from the drop-down menu up top, or I can right-click it and say Set as Startup Project. Now, with all of that done, we can proceed to our Package Manager Console. If you don't have it in your menu items or as a toolbar like I do, you can always go to Tools, NuGet Package Manager, and you'll see the console listed there. In this Package Manager Console, we're going to add a migration, and we can give it a name, so I can call it InitialCreate or InitialMigration, something to indicate that it was the first one, right? Another thing that you'd want to do is change the default project: the default project pretty much needs to be the same project where your DbContext is, so I'm setting that to be the persistence project. For context, remember that in our persistence project our DbContext, in the OnModelCreating, said ApplyConfigurationsFromAssembly, and "this assembly" is pretty much what that line was specifying; our migrations need to live there too, so make sure that is set. If you don't, you'll get a nasty error about it being set to the wrong target project. So when we go ahead and add that migration, it's going to do its magic, and then we get this migration file, which is giving us all of the tables that we had created initially. Quick tour of our migration file, if you're not so familiar: we have an Up method and a Down method. The Up method basically has code that, if you just read it as a C# developer or as an SQL developer, you can see what it's doing.
It's creating a table with a name, with these columns, with all the settings on each column. All right? And then the Down means that if you undo this migration, these are the things it will do: it will drop those same tables. So for every Up there's a Down. Now that we have our migration file, we can go ahead and update the database. The database update basically says: if the database doesn't exist, create it; if it exists, apply the changes. It didn't exist, so it just got created. To check and verify that it's created, we can go to SQL Server Object Explorer and expand (localdb)\MSSQLLocalDB, the server we indicated; if you're using a different server, then you can proceed to that particular server. When we expand Databases, we will see our HR_LeaveManagement_DB, and if we expand the tables, we will see the tables that were defined accordingly. We didn't seed any data, so all the tables are empty. Later on, when we start building out our application, we can go ahead and look at how to seed data, but for now, this is mission accomplished. 20. Implement Thin API Controllers: In this lesson, we'll be looking at adding MediatR services to our API. Now, the context for this is that MediatR allows us to specify behaviors in our application based on a request-and-handler type of relationship. One of the consequences, or benefits, of this is that we can ship off a lot of the heavy operations from our calling code, which in this case is going to be our controller, and we can abstract them to somewhere else. So the controller really just knows: I build a request and I send it off to be handled. For context, I have on screen the controller from the previous project, which is the inspiration for us redoing this leave management system, and here you'd see that we don't have thin controllers, we have fat controllers. The controller really does exactly what the name suggests: it controls the flow and everything that the application does, responding to a user's request for data, et cetera. Now, in the older project, we had what we call fat controllers, where we're doing everything inside the controller. Once again, this works; it's not that it won't work if it's done that way. But is it the best practice? How maintainable is this? Because if I wanted to verify that these operations are being done successfully, I would have a lot of difficulty unit testing them in this fashion, since this logic is not in a unit, it's inside our controller. That increases my inability to test the parts of the application to make sure that they are functioning properly, and it increases my time spent troubleshooting, regression testing, and all of those things. Here it's checking if something exists, and if it doesn't exist, it's returning NotFound; then it's doing mapping, and it's doing a Find, and then it's returning the view. Really and truly, we just want as few lines as possible, with no business logic; this is some form of business logic, and we don't really want that in our controller. All right, here's another situation where we're creating a record, and we're just doing too much in this action. So those are the things that we want to reduce when we talk about having thin controllers. So let us get started by installing MediatR in our API project. You know the drill: you can just right-click, go to NuGet, search for MediatR, and go ahead and install it. Once you're done with that, let's go ahead and add a new controller to our project: we want an API controller with read/write actions. We're going to start with the easiest one, which is leave types, and go ahead and add. Now let's explore how MediatR will help us along, right? We already have the basic CRUD actions being generated for us, thankfully.
But then what we need to do is inject our mediator object into our controller. So we're going to add a constructor and make reference to IMediator, which will require us to have that using statement as well as to initialize that field, and then we can start using this mediator object to dispatch our requests. Now, a quick tour of this controller; at this point, of course, I'm assuming that you have some familiarity with controllers and, by extension, API development. When we want to get the list of records, all the leave types in the database, we're going to hit this method, which is just the GET at api/leavetypes; that's what's expected to get us all the leave types. Each of the other methods then maps to the other CRUD operations: the POST, the PUT (which is for updates), the DELETE. If you're not so familiar with what I'm talking about, I would encourage you to go and check out my API development courses so you can get up to speed and fully appreciate the intricacies behind these methods and how they work. So let us continue. For the GET that is expected to return all the leave types in the database, we're going to have code that looks similar to this: var leaveTypes is equal to an await on the Send method found on our mediator object. And what exactly are we sending? If you look at the overload, it is expecting an object for the parameter called request; it doesn't know what kind of request it's going to send, it just sends the request, and the expectation is that it's going to get back something that can be stored inside leaveTypes. Now, what request would we be sending? Jumping back to our application project, under Features, Leave Types, Queries, we will see that we have the GetLeaveTypeListRequest, which is expected to return the list of LeaveTypeDto. So in our leave types controller, I would say I am sending a new object of type GetLeaveTypeListRequest.
And then go ahead and include any missing using statements, and change the method header to public async Task<ActionResult<List<LeaveTypeDto>>>. So now it knows that it's expected to return an action result that has that list of leave types. This return statement is no longer valid, since now I will be returning leaveTypes; and look at that, two lines. For context, I'm just going to compare it with the old code. Our old action would have had the var leaveTypes; it would have actually orchestrated the call to the unit of work to get the records, bring them directly back, and then it would be carrying out the mapping, because it would be returning the domain objects when we want the VMs (in this context, that's the same thing as a DTO). And we would be doing all of that inside the action itself before returning the view. In the new paradigm, what we're doing is just letting MediatR know that we are requesting the leave type list; our handler carries out all of the operations, it's doing all of the mapping and the querying and everything, and it's returning just the objects that we need to know about. So the API would never, ever interact with the actual domain objects coming from the database; we do all of that transformation before it comes all the way back. So let's fast forward a bit to where I've already written the code, but as usual, I'll go through it slowly and explain each line so you can fully appreciate what is happening. You can pause as needed and replicate, and we will go through it together anyhow. All right, so one quick adjustment to the HTTP GET: I've made it return Ok with the leave types. And you notice, once again, Task<ActionResult<List<LeaveTypeDto>>>; this is being very explicit about what data type it is going to return. Now, for the HTTP GET with an ID, I've done similar, very similar code: Task<ActionResult<LeaveTypeDto>>.
Of course it has to be async, and we say var leaveType is equal to an await of mediator.Send with the new request. So remember that the request will get handled; MediatR is taking care of that part. We have to make sure that we use the correct request based on what we want, and in this situation, the request is the GetLeaveTypeDetailRequest, which is where it gets the particular leave type with all of the includes and any other bells and whistles or requirements that might be there for the details. We also need to give it the id value that the action receives, because our handler relies heavily on the ID to know which record needs to be retrieved. And then we return Ok with the leave type. Now, in the POST we have a few more lines, and it's really just me showing you how you can break it out; it's not absolutely necessary, because just the same way everything could have gone in one line, with the new object created right there, I could have done that inside of this one mediator.Send line. Instead, however, I said var command is equal to a new CreateLeaveTypeCommand. The POST method is designed for creation. All right, so let me start from the top; I got ahead of myself there: public async Task<ActionResult>. In reality, you don't necessarily have to say what the return type is; it does help with the documentation when we get to Swagger, so I would put it back. Regardless, in this situation the response would be an int, so we could say ActionResult<int>, but like I said, it's not absolutely necessary. I'm going to continue without it right now, and later on we will see why it would have been a good idea to put it in. So: Task<ActionResult>, Post, and we're doing [FromBody], and then we're using the specific DTO type for the operation. So remember, we discussed: you can get granular, or leave it general.
Well, this is a situation where we would prevent overposting by getting granular, because then, when they send over leave type information, what we accept through this action is limited to the properties inside of the type that we specified. So, unlike LeaveTypeDto, which has more fields, CreateLeaveTypeDto is only designed to accept the data points that we know we absolutely need for our leave type to be created, so anything else will be ignored. So when we formulate this command, we say new CreateLeaveTypeCommand; the command needs the LeaveTypeDto object, so we pass in that object, and then our response is relative to what the mediator returns. In this situation, our response is an int, because that's what we had said: when you create the leave type, just return its ID. That was the handler that we designed for that kind of request. So then, when we return Ok with the response, that would be Ok with the new leave type's ID. Now, moving on to the PUT; PUT is used for the update. Once again, it's an async Task<ActionResult>, and by default it has that id parameter, and we change the [FromBody] parameter to be the LeaveTypeDto. As we had discussed earlier, with the LeaveTypeDto we didn't get too granular; if you have an UpdateLeaveTypeDto different from everything else, that's fine. In this situation, we're accepting all the possible fields that could be updated, and we definitely need the ID inside of this object, which is why we're retaining it. As a matter of fact, as it stands, this id parameter is optional, so I could actually say I don't need an id parameter; when you call the PUT, you don't have to pass in an ID, and the command would just update accordingly. The point is to establish uniformity. So then: Task<ActionResult>, Put, and all we're looking for is the body content with the LeaveTypeDto. Remember, all of our validation is happening inside of our handlers, so that's even fewer things to worry about right here. When we're talking about "did the ID come over, does the ID exist", all of those things are happening inside of our handler, between the handler and our fluent validations actually, and if anything fails on that side, then this whole operation will fail anyway. We'll look at how we handle failures later on, but for now we just want to get a handle on how our controllers need to look. So when we send the command to update, we did not wire this up to return anything, at least nothing useful, right? We just said Unit.Value; Unit represents a void, so we had to return something, but it's just something arbitrary to say it was successful. The HTTP response that corresponds with a PUT is usually NoContent, which is a 204. All right, so we can just return that; we don't capture the response in any variable. Delete looks similar: Delete takes an id parameter, Task<ActionResult>, Delete(int id). Then we have the command, which just takes the ID, and we send over the command and return NoContent. So if it is that you want everything to be just two lines, then that's as easy as taking the command and putting it in the Send parameter directly, and everything can be two lines. All right, so I'm just showing you the big difference between this controller and the controller that is doing everything with all of the code: with the edits we were doing validations there, and for the delete we're doing some form of validation again. All of those things are now abstracted off into other parts of the application, and our controller can do exactly what it's supposed to do, which is receive a request, make a call to do some operation, and then give you back your data. Now here's my challenge to you: go ahead and wire up the other controllers for the other features. We have leave types done.
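Pieced together, the thin LeaveTypesController being described might look roughly like this; the request, command, and DTO names follow the conventions used so far, so treat the exact signatures as a sketch rather than a verbatim copy:

```csharp
[Route("api/[controller]")]
[ApiController]
public class LeaveTypesController : ControllerBase
{
    private readonly IMediator _mediator;

    public LeaveTypesController(IMediator mediator)
    {
        _mediator = mediator;
    }

    // GET: api/leavetypes
    [HttpGet]
    public async Task<ActionResult<List<LeaveTypeDto>>> Get()
    {
        var leaveTypes = await _mediator.Send(new GetLeaveTypeListRequest());
        return Ok(leaveTypes);
    }

    // GET: api/leavetypes/5
    [HttpGet("{id}")]
    public async Task<ActionResult<LeaveTypeDto>> Get(int id)
    {
        var leaveType = await _mediator.Send(new GetLeaveTypeDetailRequest { Id = id });
        return Ok(leaveType);
    }

    // POST: api/leavetypes
    [HttpPost]
    public async Task<ActionResult> Post([FromBody] CreateLeaveTypeDto leaveType)
    {
        var command = new CreateLeaveTypeCommand { LeaveTypeDto = leaveType };
        var response = await _mediator.Send(command);   // handler returns the new ID
        return Ok(response);
    }

    // PUT: api/leavetypes
    [HttpPut]
    public async Task<ActionResult> Put([FromBody] LeaveTypeDto leaveType)
    {
        var command = new UpdateLeaveTypeCommand { LeaveTypeDto = leaveType };
        await _mediator.Send(command);                  // handler returns Unit
        return NoContent();
    }

    // DELETE: api/leavetypes/5
    [HttpDelete("{id}")]
    public async Task<ActionResult> Delete(int id)
    {
        await _mediator.Send(new DeleteLeaveTypeCommand { Id = id });
        return NoContent();
    }
}
```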
Go ahead and try leave requests and leave allocations. You can, of course, create custom actions relative to what it is that the operation needs to carry out. But for now, I'll leave you to do that, and I'm going to come back in the next video and we'll compare notes. 21. Finishing up Thin API Controllers: Alright, so this is more of a review video than a lesson video. I only have a few things that I want to focus on in this lesson that you probably attempted and probably had problems with — and if not, then kudos to you. So let's start off with the leave allocations controller. Really and truly, this is a controller that is going to be pretty identical to our leave types controller, because really and truly we are only doing CRUD operations here. We're getting the list, we're getting by ID. And of course, if you didn't complete it, you can always just pause and replicate as you see me going through. We do the same thing for the post. So you'll notice that everything that pretty much said leave type is now saying leave allocation. You could almost say that you could have created a new controller, copied everything from the leave types controller, pasted it in the new controller, and then just replaced the words leave type with leave allocation. I'm just giving you tips as to how you could have done this pretty quickly and pretty effectively, right? Because this one is pretty identical to the leave types. Leave requests, on the other hand, has one tiny surprise, and that is in the form of the update. So once again, going through slowly enough, I pretty much replicated the actions from the leave types over in leave requests. So most of the content of this controller is identical to the other two, with the exception of our PUT operation. And if you look very closely, you'll see put operations times two. So everybody else only had one put operation, one update endpoint.
But then in the case of the leave request, we had made room for two types of updates. One where it's a regular update — and notice that this time we do have the ID parameter in the put, and that is because our update leave request command asks for the ID and for a DTO. That's fine. But once again, these different flavors are relative to whichever style you think suits your scenario better. So I'm just showing you different options; I'm not saying this is how it must be. You have the different options — use the one that is best for your situation and your project. Now in this situation, once again, we do have the ID parameter, we pass it in with the command, and we build a leave request DTO. However, the other update scenario would only change the approval status for our leave request. So in that case, I created a custom endpoint, change approval. So to get to this one, you say api slash leaverequests slash changeapproval. And I really do need the ID, so let me go ahead and add it to the route, so it will be changeapproval slash the ID value. I've also updated the documentation accordingly, and I need to put that ID parameter back — so like I said earlier, put it back and build up my request object, or my command object rather. So I'm just showing you all of the considerations that need to be made. But at the end of the day, our controllers are slim, and with the benefits of this, you may be saying, okay, so three lines — all of that work in the earlier videos just so I can put three lines here? Why didn't I just put all the logic here? Which, once again, I completely understand, because it would work. But at the end of the day, the code is far more testable, and I don't have to test the controller to know if I would get the results from this endpoint. Instead, I can go and test the handler that is supposed to be returning those results. And if the handler works, then the endpoint will work.
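A sketch of that second, approval-only PUT endpoint described above might look like this. The route segment, the `ChangeLeaveRequestApprovalDto` type, and the command's property names are assumptions reconstructed from the narration, not necessarily the exact code:

```csharp
// Second PUT on the leave requests controller, scoped to approval changes only.
// Reached at: api/leaverequests/changeapproval/{id}
[HttpPut("changeapproval/{id}")]
public async Task<ActionResult> ChangeApproval(int id, [FromBody] ChangeLeaveRequestApprovalDto changeApproval)
{
    var command = new UpdateLeaveRequestCommand
    {
        Id = id,
        ChangeLeaveRequestApprovalDto = changeApproval // hypothetical property name
    };
    await _mediator.Send(command);
    return NoContent();
}
```

The handler can then branch on which DTO was supplied to decide whether it is a full update or just an approval change.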
So, you know, it's just shifting your focus from being more compact to being a bit more modular, and spreading the code and the responsibility across more places, so that you can have a better appreciation of, and better control over, what each component does and how they all tie together. 22. Seed Data In Tables: Alright, so we're winding down with our API-related operations, and in this lesson, what we want to do is seed some default data into our database. The next step, of course, would be to test it, but it's always good to have some sample data so that we can do the read operations quite easily and effectively. So to seed data into our database, we're going to look at how we accomplish that with Entity Framework. Now, I've done one already, and the other two we can do together. But let us start off by going to the persistence project, adding a new folder called Configurations, and in there another folder called Entities. And then you're going to have a configuration file per entity. So pretty much this configuration file allows you to put in any Entity Framework related — or any database related, rather — configurations, defaults, or any rules that you want to govern the particular table in the database, and then all of the code is written courtesy of EF Core. So let's look at the one that I have done already, and that is the leave type configuration. So you can pause and take this down, and then we can go through the bits and pieces together. So we have the leave type configuration, and it's inheriting from IEntityTypeConfiguration relative to the class type that we're dealing with, which is the leave type. So then everything in this codebase is going to be relative to the leave type. So once you do that, it's going to ask you to implement the interface, and that would generate this method stub for you. So inside of this method stub, we're going to have public void Configure.
And then we have the EntityTypeBuilder parameter; all of that is actually generated for you. So the main part of it, which is what you will be writing in, would be in this builder section. So we would say builder.HasData, and then we would create a new leave type object. So HasData is a method that has open and close parentheses, and in there we say new LeaveType, and then we fill in an object. So this is the domain object that we're seeding: your ID is 1, your default days going into the database is 10, and your name is Vacation. And as many as you need, you can actually just comma-separate each initialization — or instantiation, rather — of an object inside of this entire block. Now, I haven't done the other two, so I'm going to do them kind of from scratch, even though by now you probably did it already with the leave type. That's fine. So for the leave allocation configuration, it's inheriting, but there are no using statements at all — that's why you're seeing the red line. So I will use Ctrl + period to bring in Microsoft.EntityFrameworkCore and the domain reference, and then implement the interface, which generates that method stub for me. But there is nothing there for me to configure for the allocation; I don't have a requirement for the allocation at the moment. Alright, so I'm not seeding any leave allocations, I'm not changing anything about the defaults or the table structure, I'm not doing anything else. So like I said, this configuration can be used for far more than just seeding data, and if you want a better understanding of what it is capable of, you can always check out my Entity Framework Core course. Alright, so we'll do the same thing for the request, and we'll just leave that there. Once again, we don't have any leave requests that we need as defaults, but we do have some default leave types, so that's good enough for now. At least we can run the API and do GET requests on this table and verify that it's working.
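Put together, the leave type configuration described above looks roughly like this. The seed values for the first record mirror the narration; the second record is purely illustrative:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class LeaveTypeConfiguration : IEntityTypeConfiguration<LeaveType>
{
    public void Configure(EntityTypeBuilder<LeaveType> builder)
    {
        // Seed default rows; add more comma-separated instances as needed.
        builder.HasData(
            new LeaveType { Id = 1, DefaultDays = 10, Name = "Vacation" },
            new LeaveType { Id = 2, DefaultDays = 12, Name = "Sick" } // illustrative
        );
    }
}
```

For EF Core to pick these configuration classes up, the DbContext's `OnModelCreating` typically calls `modelBuilder.ApplyConfigurationsFromAssembly(typeof(YourDbContext).Assembly)` — the context name there is an assumption for your project.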
So the next step after writing this code would be to go to our Package Manager Console, and we need to add a migration for seeding leave types. So once you do that, we get a migration file that is letting us know that it will be inserting that data into the database for us. The next step, as we know, would be to update the database. Okay, great. So this time we're not going to go to the database directly to verify that these were created. What we will do is test our API. So see you in the next lesson. 23. Review Swagger API Support: Alright guys, welcome back. So in this lesson we're going to be looking at, one, testing our API, and two, documenting it. The key tool that encompasses both of those tasks is called Swagger. Swagger is an open-source API documentation tool based on the OpenAPI standard. Now, it comes out of the box for .NET 5 API projects. So if we jump over to Startup and scroll a little, you'll see here that we are adding the SwaggerGen library, which creates that Swagger doc with an OpenApiInfo. We can change all of these things — I can just say HR Leave Management API, give it a title, give it a version; you can add other nodes to it: contact, description, et cetera, et cetera. So it's a very powerful tool, and like I said, it comes out of the box — you didn't put it there. And another part that you would look at is down here in the Configure method, where it says if we're in development, then use Swagger and the Swagger UI. So if you want to use Swagger regardless — because I know of corporations that use Swagger as their documentation in production — then you can always take it out of that if statement and use it regardless of your environment. Alright, so you can go ahead and make that change. Now, let us run our project. So I'm just going to hit F5 with the API project as the startup project, and that results in us getting this beautiful document showing us all of our potential endpoints and how they can be called.
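The Startup pieces being described are roughly the following. This mirrors what the stock .NET 5 API template generates, with only the title changed — adjust the names and document title to your own project:

```csharp
// In ConfigureServices: register the Swagger generator with our own metadata.
// Requires: using Microsoft.OpenApi.Models;
services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo
    {
        Title = "HR Leave Management API",
        Version = "v1"
        // Contact, Description, etc. can also be set here.
    });
});

// In Configure: the template places these inside if (env.IsDevelopment()) { ... }.
// Move them outside that block if you want the docs in every environment.
app.UseSwagger();
app.UseSwaggerUI(c =>
    c.SwaggerEndpoint("/swagger/v1/swagger.json", "HR Leave Management API v1"));
```

With that in place, running the API serves the interactive documentation page described below.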
Notice we still have the weather forecast; we can delete that afterwards, but I'm just showing you that we didn't do much. All we did was set up our controllers and write the code that we know we need to write, but here's this beautiful document showing us everything about our API. So if I click on one of these, it expands and shows me exactly what I can expect. So this endpoint, which is the GET at api slash leaveallocations, which gets all of the records in the leave allocations table, is going to give me a 200 success code, and this is a preview of the object that we would be getting back. So once again, I'm going to always go back to details and how granular you get on what you want to display. Notice that in this particular DTO, we have the ID, we have the number of days, we have the leave type with its own ID, the name, and the default days, and then we have the leave type ID. So you might be looking at this and saying, well, that's kind of redundant — if I already have the leave type object, I don't want to have to repeat the ID here. And that would be a fair statement, right? So, you know, we didn't get very granular with the leave allocation list DTO; in this situation, we just reused the leave allocation details. So we might be sending too many details or too many fields in that response; we can adjust that accordingly. So let's look at leave requests. Leave requests would have the ID, the leave type, the date requested, and approved. Why does it have this, as opposed to the one that is getting the details, which has much more? Well, Swagger is actually looking at our return types on the particular controller actions. I'll just jump back over to the controller so you can see what I mean.
Remember that I did say there is a benefit of putting the return type directly in the ActionResult, because Swagger is actually using this to say, okay, this is the data type that will be returned from this action, versus that data type for that action. Outside of that, it will just make an assumption, and the return type or the schema that you might see might not be representative of what's actually being returned. So that's one of the benefits I mentioned of putting the return type here: Swagger will infer the objects that it needs to show the schema for in JSON, and that leads to better and clearer documentation for those who will be interacting with your API. Alright, so let us run a test. Let's jump over to api slash leavetypes; since those have been seeded into the database, we should expect at least two records when we try it out. So let's hit that endpoint: Try it out, then Execute. Now, what I'll do is set a breakpoint on the controller and the handler that this operation should hit. So when I execute, it hits the controller — or the action, rather — right? Remember, it's going to do mediator.Send with that request. Now I'm pressing F5, and then it hits the next breakpoint, which is the handler. So you see, we went from controller to handler, and that's where all of the magic is really happening. Inside of the handler, we have our repository and our mapper, both initialized and injected. And then in the handler, we carry out the query, and we return the DTO version of the data. Alright, so I'm just going to hit F5 again and allow it to complete its operation, and then Swagger shows us the data coming back from the database. There we go: name Vacation, default days, ID 1, et cetera, et cetera, et cetera. And in the same way, if I go to the GET by ID and I try it out, then it allows me to pass in the ID.
So I'm going to pass in ID 1, execute, and then it brings me the one record with the ID 1. Now, Swagger allows us to do all of the CRUD operations — at least test every endpoint that we have laid out. So let us try another one, where we're going to create. Notice the difference between this schema and the schema being returned by the DTO via the GET, right? That one has the ID, the name, and the default days; this one only has name and default days. That is because, of course, we're using a different DTO with limited scope. So for the name, I'm not going to change anything here; I'm actually just going to leave this default data. Let's execute, and then look at what we get: we get a 500 error. Alright, why did we get a 500 error? Let's read it closely. So remember what we put in — we didn't change anything. The name would have been the word string, and the default days would have been 0. If you remember carefully, we had set up validation to say that the default days should never be less than one. So what we're seeing here is a 500 error, because the API didn't know how to handle the fact that it's getting an exception all the way from the handler, and this exception is of type ValidationException. Does that look familiar? Right — so it is just letting us know that there was an error and this exception was thrown. We don't have any exception handling built in, so for everything that is being thrown, Swagger doesn't know what to do; it's just showing us what the application gave it, which is fine — that's what it's designed to do at the moment, and we will be refining that as we go along. But the point is that this is working. So if I put in something more meaningful — this time I'll put in maternity leave with default days of 90 — and execute again, then we get back our response, and 3 would be the new ID. So I can go back to the GET, test with 3, and execute. There we go: maternity leave, default days 90, et cetera.
So I'm just showing you how useful Swagger is for you to, one, see the documentation for the API, and two, test without having to install any other application. Now that being said, I generally use Postman to do my API testing, but for the purpose of this course, Swagger is perfect and it has served its purpose. 24. Unit testing - Section Overview: We now have a large part of our application built. We have fleshed out most of what needs to be there for the foundation, we have set up the API, and we have tested it to some extent, where we see that it's actually communicating with the database and it's working. Alright. But then, as we have seen and have discussed more than once, testing is a very important part of application development. So we just wrapped up testing the API, and we had to do that manually, where we had to actually put the data into the testing tool for the API and submit it. Then we saw that it got submitted and all, and we were then confident that our code is working. But then imagine a much larger application with many more touch points. We only tested one touch point just now — imagine testing 50, 60 endpoints. You don't have the time or capacity for that; it would be a waste of time to really try and go through all of that. That is why we are going to be looking at unit testing and how it can help us to automate those checks to make sure that our code is doing what we have designed it to do. Now, in a nutshell, unit testing is code that tests code. Yes, that's right: we're going to be writing code to test our code. This is one of the reasons people shy away from it, because some people see it as a waste of time — because then you'd be writing code twice — and then some people see it as completely essential, because they don't trust code that has not been tested by a unit test. There are persons who subscribe to either extreme. I'm not necessarily one of them.
I believe that tools are used within the context they're designed for and at the time that you're ready for them. So in this situation, it's definitely a tool that we want, because we want to make sure that our handlers are behaving consistently with what we have designed them to do. We also want to write what we call integration tests, which test the interaction between the different layers of our application. Unit testing, once again, works really great in saving time in the long run. It takes a while to write the tests, and a test is only as good as how it's written, but there are frameworks out there that can help us to ensure that we have quality tests and that we are getting full coverage for our code. Another thing that unit testing helps us with is documentation. So you could leave documentation and comments all over your code, but well-written tests can actually show you, or indicate to you, what bits of code should be doing here and there. So some people actually use unit testing as a way to document their code in an unofficial manner. Now, in general, there are three main types of tests that we usually cater for. One is unit testing; the other one is integration testing, which, once again, tests the interaction between the layers; and then functional tests, which are usually UI-facing, to test what the user experience should be. So when we come back, we're going to look at writing our first unit test. We'll be setting up our test project inside of our tests folder, and we'll be testing our application logic, aka the handlers, to make sure that they are doing what we think they're doing. 25. Write Unit Tests for Application Code: Hi guys. So let's get started by creating a new test project for our application layer. So in the test folder in our solution, we are going to create one; I called mine HR.LeaveManagement.Application.UnitTests. So it would be Add, New Project, and we're looking for an xUnit project template.
Then we want to give it the name, of course, and it needs to be .NET 5. Alright, so once you've done all of that, you can jump over to NuGet, and we want to install Moq and Shouldly. So Moq is going to help us to create mocks of our persistence layer — the repositories and other objects that are needed to simulate what our application, or our handlers, are really doing. And Shouldly helps us to assert that this is what we expect from this kind of operation. Now, I've already gone ahead and set up a folder structure for you, and I think there are more interesting things for us to tackle than for you to sit down and watch me type, so I've gone ahead and prepared some of the assets beforehand. But as usual, we'll go through them slowly and together so that we can completely understand what is happening at each step. In this project, we have two folders, at least for now: one for leave types, and one for Mocks, which will be housing the mock repositories. Now you have two options: you can have a file per mock repository, or you can have one mock repository file and have multiple instances in there. That's up to you. Right now I'm only using one file, and we will only have one test to run. So if you have 50 features, or handlers, that you need to run unit tests against, then it would probably be better to split them out into particular files per dataset that you would want to have a mock repo for. So let us go into the mocks first. I've already gone through and set up a method in this file, so let's discuss it together. I made this a public static class, and I'm calling it MockRepositories. Once again, I could easily have been very specific and said something like MockLeaveTypeRepository. As a matter of fact, I'm going to take my own suggestion and just do it this way.
So we have this file particularly for MockLeaveTypeRepository, and then later we can come back with MockLeaveAllocationRepository, et cetera, et cetera. So MockLeaveTypeRepository is a static class, and it has a static method in it. This method is, once again, public static, and notice that its return type is this Mock type, which comes to us courtesy of the Moq library, and it will allow us to mock any type of repository here. So I want a Mock of ILeaveTypeRepository, mocking the leave type repository, and I'm calling the method GetLeaveTypeRepository. Now, this method is going to have a list of leave type objects. You can be completely fictional with these; there's nothing to say that they must look like what would be in the database. There's nothing particular about it — it's just a list of objects similar to what we would expect from the database when dealing with the domain leave type objects, right? So I just have two. You could have ten, you could have 15 or 20, or more, based on your scenario and what you need to test for. You may need more, you may need fewer than that — that's up to you. So I'm only proceeding with two. Alright, then we initialize our mock repo. So I'm going to say var mockRepo is equal to new Mock, and the type, once again, is ILeaveTypeRepository. So until we do all of this, you're going to be seeing a red line beside this method, because it's saying that not all paths return a value — because we need to return the mock. So before we can return the mock, we need to set it up. So the sample data is present, the new object of the mock repository is present, and now we need to literally call mockRepo.Setup. We use a lambda expression, where it gives us access to the methods that would have been inside the original repo.
So if we want to test the get method, or at least set up the GetAll method, it means that when a test is calling this mock repo, and we want to invoke a test against code that is using this GetAll, it will pass in the mock repo, and we're setting up the GetAll method. So this is one block: it's a Setup, then our lambda expression, r => r.GetAll() — that's all in one block — and then it returns the list of leave types. So any code that gets the mock repo and calls the GetAll of this repository — the mock, which will be invoked in the test — will get back that list of leave types. That's what it will deal with. Alright, so you're in complete control of your sample data. It does get a bit tedious getting started with the sample data, but it's all for a good cause. Now, the next one would be to set up what happens when we call the Add method. Alright, so mockRepo.Setup, once again that lambda block, where we call the Add method. So all of that is a block, and I'm going to just explain what's happening here. We're saying r => r.Add, and the Add method by default needs an object of type leave type entity. Alright, so I'll just leave that in. So we have to pass a leave type entity into the Add method. What we're doing here is kind of an assertion that you can only call this method when an object of type leave type is being passed in. Once again, this can be a dummy object — it just needs to be of type leave type; it doesn't have to have any special data. Alright, so if it is of the leave type, then this method can be called, and it returns async an object. So what we're going to do is use a delegate to say: after getting the object of type leave type — and then we have our lambda expression or method block; let me just move it to the next line so you can see it more clearly — then we say leaveTypes, which is our list, dot Add.
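Assembled, the mock repository described here might look like the following. The repository interface's method names (`GetAll`, `Add`) are taken from the narration and may differ from your `ILeaveTypeRepository`:

```csharp
using System.Collections.Generic;
using Moq;

public static class MockLeaveTypeRepository
{
    public static Mock<ILeaveTypeRepository> GetLeaveTypeRepository()
    {
        // Fictional sample data; it does not need to match the real database.
        var leaveTypes = new List<LeaveType>
        {
            new LeaveType { Id = 1, DefaultDays = 10, Name = "Vacation" },
            new LeaveType { Id = 2, DefaultDays = 12, Name = "Sick" }
        };

        var mockRepo = new Mock<ILeaveTypeRepository>();

        // GetAll returns the in-memory list instead of hitting the database.
        mockRepo.Setup(r => r.GetAll()).ReturnsAsync(leaveTypes);

        // Add accepts any LeaveType, appends it to the list for the lifetime
        // of the test, and hands the same object back.
        mockRepo.Setup(r => r.Add(It.IsAny<LeaveType>()))
            .ReturnsAsync((LeaveType leaveType) =>
            {
                leaveTypes.Add(leaveType);
                return leaveType;
            });

        return mockRepo;
    }
}
```

`It.IsAny<LeaveType>()` is Moq's standard way of saying "match any argument of this type", which is what the narration describes when it says the method can only be called with a leave type object.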
So for the lifetime of the test, whenever somebody calls the Add method and passes in that leave type object, we're just going to add it to the list of leave types and then return that particular leave type. So that's just how the setup works, and based on the scenario you're testing for, your setup may differ. You notice that the GetAll setup looks fairly different from the Add setup, and the delete one will look different, and the update one will look different. Right now we're just set up for Add and GetAll, and then we go ahead and return that mock repo. So after returning it, that error will go away, and that is what this method is for. So after you've replicated all of that — I'm sure you are pausing and writing it down — I hope you have a better understanding of how the mock helps us to simulate data without actually touching the database. Now let's jump over to our first unit test. Under leave types, we have Commands and Queries folders. So the handlers that are doing queries will be tested inside of Queries, and the commands in Commands. The first one that we have is the get leave type list request handler test. The name is a mouthful, but at least nobody can make a mistake as to what is being tested. Inside of this file, you can have multiple tests, of course, but in this scenario we only need one test, really, for the get list handler. Now, let me just walk you through what is happening within these first few lines. Well, we know that we need AutoMapper, and we know that we need a repository in order to interact with the handler, right? And I'm just going to jump down — I haven't finished writing the test; we're going to do that together — but I wanted to just demonstrate what happens when we try to instantiate our handler: we're getting an error. Why?
Because the handler is expecting a parameter of type ILeaveTypeRepository and another parameter of type IMapper. Now, this is a unit test; we can't just inject them in — that's one. Two, we wouldn't want to inject them in. I mean, we probably could, but we wouldn't want to, because we're dealing with mocks, right? We don't want to inject in the real leave type repository, because that's going to talk to the database, and we don't want our unit test to actually talk to the database — we just want to simulate it. So what I've done is declare two private fields, one for the mapper and one for our mock repo: private readonly Mock of type ILeaveTypeRepository. Then we have our constructor. So in the constructor, once this test is invoked, we're going to initialize our local mockRepo to be equal to a call to the MockLeaveTypeRepository class's GetLeaveTypeRepository method. Alright, so that will do all of that setup and return the mock repo for use in this test. Now for the mapper — once again, we're not injecting anything, so we need this mapper to know of all the actual mapping configurations that exist in our application. So what we're going to do is initialize a mapperConfig object to be a new MapperConfiguration, and then we have a lambda expression here with our object block, within which we say the lambda token dot AddProfile for the mapping profile. After we've set up the mapping profile and the configuration, we can then just pass that in. So we say mapper is equal to mapperConfig.CreateMapper, using that configuration. So in all, our mapper here in our test really is representative of the real mapper in the application. Our method below now has public async Task GetLeaveTypeListTest. So we're specific now about exactly what we are testing for: we're testing the method that gets the leave types list.
It is decorated with the Fact attribute, because this is what's telling the framework that this is a unit test. So this is saying: hey, whatever happens here, whatever assertions are in here, need to pass. If they don't pass, then this test has failed. Now let us actually start interacting in the test. So we say var handler is equal to new, and then I'm instantiating the handler that I wish to test in this method. Notice the red lines, because it requires our mock repo, or some instance of an ILeaveTypeRepository. So mockRepo gives us a mock object, but then I can get the actual, quote unquote, actual object through mockRepo.Object. Next up, we need our mapper. Alright, that looks good. Next up, we want to actually test the method call. So I'm going to say var result is equal to handler.Handle — because our handler has that method called Handle, and it requires the request object. So we need to pass in a new object of the request type, and I'm just going to take all of this off and use the insert using statement — there we go — so that we don't have a whole lot of text. And then after that, it also needs a cancellation token; I can just pass in CancellationToken.None. So this passes that in — it's still spilling red, so include any missing references. There we go. So we're just saying: call this, we're not passing any cancellation token in, and we're passing in that new request object. So CancellationToken.None is really just for that. Now, we can see a result, and if you look at the data type for result — oh, we need to await this, apologies. If you look at the data type for result, it is actually a list. Yeah, there we go. So the data type for the result is a list of leave type DTO.
So it's easy enough to do if statements and say, okay, well, if the count is greater than one, then do this or something — just to make sure that we got back more than one, because it's a list, so we should get back one or more. But at the end of the day, we can sit down and think of all sorts of scenarios until we get overwhelmed. That's why we have the framework Shouldly to help us. So I can say result dot — remember that this is really just a list of things; it's just a list — but I want ShouldBeOfType, because I want to make sure that I'm getting the correct data type. The data type that we expect from our handler for this particular call would be the list: a list of leave type DTO. So I'm going to say it, and never mind the red lines for now; let's just work through this part: LeaveTypeDto. Alright, now when I do that, there is a red line, because ShouldBeOfType — you've probably never seen that before. So if I Ctrl + period, we see using Shouldly. So that is where Shouldly comes into play: it helps us with our assertions, with our assumptions about what should happen. I'll go ahead and put in the using statement for the DTO also. But that is the point of Shouldly. So I invoked the handler, ran an operation, got a result, and then I'm asserting that the result should be of type list of LeaveTypeDto. So that's one assertion there. And let's see what else we have. If I say result dot and start typing Should, I will see all of the possibilities, right? All of these are possible things that we can assert: ShouldBe, ShouldBeInRange, ShouldBeInOrder — all of these things are things you can check for. So I can say ShouldBe and say 2 — sorry, result.Count.ShouldBe(2) — because I know, for my test data, that the count should be two. There we go. So this ShouldBe is just tacked on to any property or the object itself, and it allows us to get the assertion for whatever scenario we're expecting.
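Putting the whole test together, the query-handler test sketched above looks approximately like this. The handler, request, and mapping profile class names follow the course's conventions, but treat them as assumptions as far as your exact code goes:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using Moq;
using Shouldly;
using Xunit;

public class GetLeaveTypeListRequestHandlerTest
{
    private readonly IMapper _mapper;
    private readonly Mock<ILeaveTypeRepository> _mockRepo;

    public GetLeaveTypeListRequestHandlerTest()
    {
        // Mocked persistence — no database involved.
        _mockRepo = MockLeaveTypeRepository.GetLeaveTypeRepository();

        // Real mapping configuration, so the actual DTO mappings are exercised.
        var mapperConfig = new MapperConfiguration(c => c.AddProfile<MappingProfile>());
        _mapper = mapperConfig.CreateMapper();
    }

    [Fact]
    public async Task GetLeaveTypeListTest()
    {
        var handler = new GetLeaveTypeListRequestHandler(_mockRepo.Object, _mapper);

        var result = await handler.Handle(new GetLeaveTypeListRequest(), CancellationToken.None);

        result.ShouldBeOfType<List<LeaveTypeDto>>();
        result.Count.ShouldBe(2); // matches the two records seeded into the mock
    }
}
```

If the handler's behavior ever drifts — say a third record starts coming back — the `ShouldBe(2)` assertion fails and flags the change, which is exactly the scenario walked through next.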
So now that we've set that one up, let me right-click the project and say Run Tests. That will bring up the Test Explorer, which will run the test and show us what passed. Alright, so that one passed, for the assertion that it should be two. Now, I know I put two, but let's say I said five and rerun this test. Now we're getting failures, and if I expand that just a bit, we see the Shouldly assertion: should be 5, but was 2. So that means if something changed in my code and it no longer matches my assertion — I know my code is supposed to return two at all times, but something changed — I'll know.

So let's say I didn't change the test; I know it's two. But then I put in another item. Alright, so this is where modifying the code comes in, and yes, this is all static data, but once again, we're just talking about scenarios. So if I change the code that should have returned two — it should always return two — and introduce something so that now it's returning three, my unit test, just by running it, will tell me that something changed. Let me just rerun that unit test: my assertion is still two, but three came back. So right off the bat, I know that there's some bug in the handler that I'm dealing with, because something went wrong. That is what unit testing brings to the table.

Now let's create our CreateLeaveTypeCommandHandler tests. So inside the Commands folder, you can go ahead and set that file up, and it's going to look fairly similar to the GetLeaveType one. We have the same mock repository being used, we have the same initialization procedure being used, and then we have the test, which of course has the Fact attribute and the Task, where we say CreateLeaveType. So we invoke the handler, which is the CreateLeaveTypeCommandHandler, giving it the objects as necessary, and then we have the result. Our result is going to come from awaiting handler dot Handle with a CreateLeaveTypeCommand.
And remember that our CreateLeaveTypeCommand requires a LeaveType DTO, so that means we need some object here to pass in. Now, you have two options. You can create the new object right there in the test and use it. But if you have multiple tests, you can instead create one object of the DTO and use that for multiple tests. So I'm going to use the second approach, where I have a private readonly LeaveType DTO field and then initialize it in the constructor.

Some people, after a while, have so many objects being initialized that the constructor grows too large. What they do is maybe farm it all out to another method that they call something like InitialSetup or Initialize, and do all of those initializations in there. And sometimes objects are actually shared across tests, so what they do is farm it all out to an entirely different file — a class like LeaveTypeTestSetup — that will just go ahead and return all of the objects needed. So both ways exist, but these aren't big tests, so we just want to get the concept under our fingers before we think about abstracting it too much, right?

So we have all of that set up, and we have our LeaveType DTO that needs to be passed in. Oh, sorry — I'm getting this error because it's the CreateLeaveTypeDto, apologies. Let me just fix that data type and then we should be good to go. And there we go. And of course, no ID is in that one.

Alright, so let us see what we need to assert in this one. For one, when we add a new leave type to our mock repo, the count should increase, right? Just the same way that when we added to our database, if there were ten, now there are eleven. Remember that in our mock repo, we set up the Add method so that it takes any LeaveType and adds it to the list. Which means that if I were to call GetAll from
the repo right after an Add operation, then there should be one more record than there was at the time, right? So there are three now. Let me first update this assertion to say three. Then I will call the mock repo. So I'm going to say var leaveTypes — I want to query the repo to see how many records there are. Of course, we're doing mockRepo.Object, and then we can just call GetAll. When we invoke GetAll, we know exactly what we're getting: we're getting that list.

Then I can do my result dot ShouldBeOfType — and we know that we are returning the ID value, so after calling the create handler for the leave type, it should be of type int. That's our first assertion. And then I'm also going to say leaveTypes dot Count dot ShouldBe, and make sure that it says four, because we know we're starting off with three, and after the call it should be four.

Now, before I run the test, I'm just going to rename it to look a bit more useful — CreateLeaveType tests. You can have multiple tests around creating leave types: testing what happens when it's invalid, testing what happens when it's valid, testing what happens when values are a particular value and you have a business rule in there where it should behave this way or that way based on the nature of whatever is being passed in. There are so many scenarios — I can't sit down and think of every one — but I'm giving you the framework upon which you can formulate your tests. So this test is an assertion that whenever a valid leave type is added, this is the expected behavior.

Now I'm going to jump back over to our handler so we can understand, again, the purpose of unit testing. Testing that we get back what's required — that's easy. Okay, yes, we have the mock repo, we're only querying the mock repo, and the handler is supposed to do one thing anyway, which is return what's in the repo. That's fine.
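Put together, the valid-case test described above looks roughly like the following. This is a sketch only: `CreateLeaveTypeCommandHandler`, `CreateLeaveTypeDto`, `ILeaveTypeRepository`, and the `MockLeaveTypeRepository` factory mirror the course's naming, but the exact signatures and the mapping profile are assumptions.

```csharp
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using Moq;
using Shouldly;
using Xunit;

public class CreateLeaveTypeCommandHandlerTests
{
    private readonly Mock<ILeaveTypeRepository> _mockRepo;
    private readonly IMapper _mapper;
    private readonly CreateLeaveTypeDto _leaveTypeDto;

    public CreateLeaveTypeCommandHandlerTests()
    {
        // Assumed shared factory that seeds the mock repo with 3 records
        // and wires Add() to append to its in-memory list.
        _mockRepo = MockLeaveTypeRepository.GetLeaveTypeRepository();
        _mapper = new MapperConfiguration(c => c.AddProfile<MappingProfile>()).CreateMapper();
        _leaveTypeDto = new CreateLeaveTypeDto { Name = "Test Leave", DefaultDays = 15 };
    }

    [Fact]
    public async Task Valid_LeaveType_Added()
    {
        var handler = new CreateLeaveTypeCommandHandler(_mockRepo.Object, _mapper);

        var result = await handler.Handle(
            new CreateLeaveTypeCommand { LeaveTypeDto = _leaveTypeDto },
            CancellationToken.None);

        var leaveTypes = await _mockRepo.Object.GetAll();

        result.ShouldBeOfType<int>();  // the handler returns the new record's ID
        leaveTypes.Count.ShouldBe(4);  // the mock data started with 3 records
    }
}
```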
But in a situation where we have multiple outcomes in our handler, we want to test for each outcome. You can probably break it down into the good tests and the bad tests — that's at your discretion — but you want to make sure that you have coverage for the potential outcomes. Because if business rules change, or if a developer comes in and changes an if statement, even accidentally, then one of the outcomes will change, and you want to catch that as early as possible.

So in our CreateLeaveTypeCommandHandler, remember that we're doing some validation here, right? We validate the leave type, and we throw an exception when it is not valid. Then, if it is valid, we go through and return the ID. We've tested that already, and we saw that adding the record actually brings back more records when we query the repo. That's good, but we also need to test what happens when bad data goes in, to make sure that it is handled correctly according to our business rules. That's where we make the assertions: we assert that if the handler receives a request with an invalid CreateLeaveTypeDto, we should get an exception — or we should get certain things in the response, based on how you wrote your code. This is what should happen when this scenario is met.

Notice also that because we're adding unit tests, Visual Studio starts showing how many references and how many tests are passing for your code. So you can always know whether you're in the green, in the amber, or not in the green when it comes to the tests passing.

So I'm going to jump back over to our tests, and I have added another test, for an invalid leave type being added. Alright, so we have the same code where we set up our handler, and once again, that is repeating. What some people do, and what I tend to do at times, is move that repeating code up into a private field initialized in the constructor.
As the test class file grows, you may want to farm some parts of it out, because we don't want to have to repeat this initialization in every single test. It's the same command handler that we're going to be using, with the same mock repo and the same mapper. So after setting up the mock repo and the mapper, we can just initialize our handler with those objects. Then our test looks — well, we have one less line to worry about, right? We just say underscore handler dot Handle and move ahead.

So in this new test, I am going to take my LeaveType DTO and make it invalid. Remember that we initialized it to have 15 days and a name; we know from our validation rules what values would make a leave type invalid if it ever came over like this, so we're just going to make it invalid. And then I'm going to do something a little differently now, since we're testing for an exception. I'm going to say ValidationException ex equals await Should — there's a static class called Should — dot ThrowAsync, with ValidationException as the type argument. Then we open parentheses, and inside of that we put an async delegate with the lambda arrow, and in there we await the handler call, passing in the DTO. So it's literally just this one line that we're putting inside of that ThrowAsync method.

Then at the end of it, we can test getting all the leave types. Remember we did that up here, where it should be four after a valid add — but after an invalid operation, it should still be three, right? That's what we're asserting here: we're making sure that this record did not make it into our mock repo's dataset. And then I can also make sure that the exception should not be null. This is my way of making sure that an exception was indeed thrown. So now that I've done all of that — when you're testing, sometimes you'll get failures.
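The invalid-case test just described might look like this sketch, assuming the `_handler`, `_mockRepo`, and `_leaveTypeDto` fields were initialized in the constructor as discussed, and assuming (as an illustration) that a negative DefaultDays value fails the validator:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Shouldly;
using Xunit;

[Fact]
public async Task Invalid_LeaveType_Added()
{
    // Assumed rule for illustration: DefaultDays must be positive,
    // so this value fails validation.
    _leaveTypeDto.DefaultDays = -1;

    // Should.ThrowAsync runs the delegate and returns the thrown exception.
    var ex = await Should.ThrowAsync<ValidationException>(async () =>
        await _handler.Handle(
            new CreateLeaveTypeCommand { LeaveTypeDto = _leaveTypeDto },
            CancellationToken.None));

    var leaveTypes = await _mockRepo.Object.GetAll();

    leaveTypes.Count.ShouldBe(3); // the invalid request must not change the data set
    ex.ShouldNotBeNull();         // proves the exception was indeed thrown
}
```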
So when you get failures, it's either because you need to refine the test to make sure that you're testing the right thing, or you've written the test correctly and it's failing because your code is not passing it. Alright, so those are two things to look out for. Now that I've done all of that, let's go ahead and run a new set of tests. And I'm getting green lights everywhere. The invalid-leave-type test passed all of its assertions, which means it correctly did not add the record to the repo and it threw the exception, because the exception was not null; and the valid one was added successfully, and of course we retrieved the records from the repo properly. So we can close that Test Explorer — we have the green ticks here — and if we look back in our code, what was saying failing is now saying passing. You can see how many tests are making reference to this particular handler method and this command.

So that's a quick and dirty overview of unit testing and how it can help us with our code coverage. I encourage you, of course, to write the other tests for your other handlers and explore how the different scenarios can be tested.

26. Setup ASP.NET MVC Project:

Alright, so we're setting up our project in this new module, and what we're going to be using for our UI project is the ASP.NET Core Web App (Model-View-Controller) template. Our client application, or UI, could easily have been built with any technology that is capable of consuming a RESTful API. We have our API already set up, and when creating this client we could have used Angular, Vue, React, Laravel, or Blazor — any one of these. But I'm going to stick with MVC, because it's rare that you actually see an MVC app being the consumer of an API; it's usually the all-in-one. And the original project was already built in MVC, so we're going to keep that MVC feel to it. No problem.
But the underlying architecture has already been gutted and set up. So we'll go ahead — you can just search for MVC, and remember that it's the ASP.NET Core Web App (Model-View-Controller) template, not the ASP.NET Web Application. You hit Next, give it the name, which in our case is HR.LeaveManagement.MVC, and we are using a .NET 5 project. You can also enable Razor runtime compilation while you're here, and then you can just hit Create.

Once that's done, you're going to end up with this project, which is the standard boilerplate template for an MVC application in .NET Core. You'll see that it looks a lot like our API project, except that it has a few more folders, like Views and Models — well, MVC: Model-View-Controller. And then we have the same Startup file, the same Program.cs file, the appsettings — all of those things are common pieces. And we have the wwwroot folder, which holds all the static files. If you're already familiar with MVC, you'll have no problem, but I will try my best to be as detailed as possible with all the changes, the code that needs to be written, and how everything ties together. So when we come back, we'll look at how we can start integrating our API into our client.

27. Use NSwag for API Client Code:

Alright, so we're back, and our task today is to set up our client application to consume our API. We already created our MVC application, and we have our API documentation in the form of the Swagger doc. So we'll be using NSwag to help us generate — it literally generates the code — that will help us consume this API. Between Swagger and NSwag, you'll see that in a few clicks and a few minutes, we will have all the code, or at least most of the code, needed to establish communication between the client and the API.
So, step one: bring up your Swagger documentation. You can get to the JSON file that it generates by clicking this link, which will bring up the JSON document that is basically powering the display we're seeing here. The next thing I want you to do is go to Google or Bing and find the GitHub page for NSwagStudio. You can download and install that — it's well documented, but we'll be going through exactly what you need to do to get it up and running. Once you have done that, you'll be greeted with this beautiful interface.

There are a few things that you want to do to get started with this procedure. Number one, you want to make sure that the runtime is correct. By default, I think it may go to .NET Core 2.1; we're using .NET 5, so you can just go ahead and change that. Then the specification URL needs to be the URL to the JSON file coming from Swagger, so you can just copy that from the browser and paste it there. Click "Create local copy", and then you see that documentation appear below.

Now, with all of that done — and just to note that NSwagStudio also supports other types of client applications, meaning TypeScript: if you were using Vue or React or one of those JavaScript frameworks, then you could just as easily generate code with NSwag for those frameworks. However, we're sticking to the C# client, so you tick that and get a new tab. And then there are a few settings here — let me just scroll to the top — that we want to make sure are in place. Number one, set up the namespace. For the namespace I have HR.LeaveManagement.MVC.Services; that's the namespace I intend to have all the code generated into. There are other things that may be ticked already, but I'll just go through and make sure that we set the correct ones. So, "Use the base URL for the request."
You want to make sure that's ticked, as well as "Generate the base URL property" — the base URL must be defined on the base class. You also want to make sure that you have "Inject HttpClient via constructor" ticked, along with "Generate interfaces for client classes". Scrolling on, I think most of the rest are already set by default.
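The "Inject HttpClient via constructor" and "Generate interfaces for client classes" options pair naturally with ASP.NET Core's typed HttpClient registration. A rough sketch of what the Startup wiring for the generated client might look like — `ILeaveTypeClient` and `LeaveTypeClient` are illustrative names, since the actual names depend on what NSwag generates from your controllers, and the API address is an assumption:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    // Register the NSwag-generated client as a typed HttpClient so it can be
    // constructor-injected into controllers and services.
    services.AddHttpClient<ILeaveTypeClient, LeaveTypeClient>(client =>
    {
        client.BaseAddress = new Uri("https://localhost:5001"); // the API's address (assumed)
    });
}
```

With this in place, any MVC controller can simply take an `ILeaveTypeClient` in its constructor and call the API without building HttpClient instances by hand.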