Transcripts
1. 1 - Introduction Promotional: Ever worked on a brand new project where, in the beginning, everyone was so productive? New features were quickly integrated into production, managers were happy, and the team enjoyed working on that project. However, as time passed, the pace started to decrease. Bugs began to emerge, and the team found itself spending more time fixing bugs than creating new features. If not all, most of the tests were manual, slow and painful, and the overall happiness of the team hit an all-time low. Or maybe you worked on that legacy code that only a few engineers dare to touch. Understanding what was going on took hours of reading the code. You changed a method and inadvertently broke a completely unrelated feature. All of these are symptoms
of bad software design. Code like this is
frequently responsible for bankrupting companies
and failing businesses. It is also responsible
for high levels of stress and unhappiness
among the development team. The problem is not new. You and I have definitely experienced these situations before. We've seen teams rewriting entire applications
from scratch, trying to solve the issue, but instead ending
up in a similar, if not worse situation
than before. Most of the time, this happens because the
team that rewrote the application
was the same team that created the initial
mess in the first place. This happens because teams and software engineers might
know some design patterns, might know a little bit about
this or that architecture, but might not be able to clearly understand how to
put them all together. They might not know
how to separate business logic from
external dependencies. Some don't even know
what business logic is. I've seen plenty of cases. Some engineers don't know
how to invert dependencies, how to make their
code testable or to differentiate business logic
from application logic. Some cannot write good
unit tests or might not even know the difference between unit and
integration tests. My name is Gregory Pacheco, and I'm a software engineer passionate about software
craftsmanship. I've been a software engineer
for the past 12 years, and I've taught more
than 54,000 students in more than 100
different countries in different technologies. I can help you and your team achieve more in less time. I prepared this course especially for coders and
software development teams facing these exact
same problems. In this course, you are
going to learn how to design high quality software
applications that are easy and
pleasant to work with. Applications that can
grow with the business, accommodate changes
and new features, and are well tested with good-quality, meaningful unit and
integration tests. In this course, you are going to learn hexagonal architecture, along with domain driven design, TDD, and CQRS. At the
end of this course, you should be able to write good-quality code that is decoupled from libraries
and frameworks. You will know how to separate
core business logic from IO and how to test
them with unit tests. This course is 100% hands-on: I'll code a demo application using C# and .NET. However, the learnings
provided in this course can be applied to any programming
language or frameworks. The course is delivered
through videos featuring live coding sessions and is
enriched with charts, animations, and
additional PDF material to provide the best
learning experience. The code written during the course will be
available on GitHub, allowing you to review and
practice at your own pace. If you like what you hear, give it a try with a ten-day money-back guarantee.
2. Class 02 - Understanding Hexagonal Architecture: Hexagonal architecture, also known as ports and adapters architecture, is a software design pattern that aims to improve the testability, maintainability, and overall quality of a software system. It does so by decoupling the various components of the system and organizing them in a way that promotes modularity and flexibility. Hexagonal architecture was proposed by Alistair Cockburn in an attempt to avoid known structural pitfalls in object-oriented software design. One of the key principles of hexagonal architecture
is the separation of core business logic
of the system from the external interfaces
that interact with it. These interfaces,
known as ports, can be either input or output ports, and can include things like APIs, databases, and user interfaces. The core business logic of the system is surrounded by adapters, which are responsible for translating the requests from the ports into actions that the system can understand, and vice versa. One of the main advantages of hexagonal architecture
is that it helps reduce coupling between the various components
of the system. By decoupling the
core business logic from the external interfaces, the system becomes more modular
and easier to maintain. This makes it easier to change the system without affecting
the overall functionality and allows for more
flexibility in terms of how the system can be
extended or modified. Another benefit of hexagonal architecture is that it improves testability: because the
core business logic is separated from the
external interfaces. It is easier to write automated
tests for the system. This is especially important in an agile development
environment, where the ability to quickly
and easily write and run tests is crucial to maintain a high level
of productivity. In addition to improving testability and reducing coupling,
hexagonal architecture also has the potential to
increase productivity. By organizing the system in a more modular and flexible way, developers are able to work on different parts of the
system in parallel, which significantly reduces the time it takes to develop and
deploy new features. For our class, we're
going to develop an application for
customer base management, where we will have the domain side of the application exposing secondary ports like a customer repository, with methods for saving, updating, deleting, and retrieving a customer from the database, and the application side exposing primary ports like a customer manager, with methods like get user by ID, disable user, update user address, et cetera. Besides that, we will structure the project folders in a way that separates concerns like user
stories from domain model, ports from adapters, application from consumers, et cetera, making it easier for
other developers to rapidly find anything
within the application.
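To make the idea of ports concrete, here is a hypothetical C# sketch of the two kinds of ports described above. All of the names here (`ICustomerRepository`, `ICustomerManager`, and the minimal `Customer` shape) are illustrative assumptions, not the course's final code:

```csharp
using System;
using System.Threading.Tasks;

// A tiny stand-in domain type so the interfaces below compile.
public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; } = string.Empty;
}

// Secondary (output) port: implemented by an adapter, e.g. a database adapter.
public interface ICustomerRepository
{
    Task SaveAsync(Customer customer);
    Task DeleteAsync(Guid id);
    Task<Customer?> GetByIdAsync(Guid id);
}

// Primary (input) port: exposed by the application to its consumers.
public interface ICustomerManager
{
    Task<Customer?> GetUserByIdAsync(Guid id);
    Task DisableUserAsync(Guid id);
    Task UpdateUserAddressAsync(Guid id, string newAddress);
}
```

The point of the sketch is the direction of the contracts: the core defines both interfaces, and the outside world plugs into them.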
3. Class 03 DDD and CQRS: Domain-driven design is a software development approach that focuses on the complex, interconnected nature of the business domain. It aims to bridge the gap between technical
implementation and business strategy by aligning software design
with the language and concepts of the
business domain. Here are some concepts of DDD that we're going to leverage during this class. First, the bounded context. This concept involves defining the boundaries within which a specific domain model applies. It helps to ensure that the model is relevant and cohesive within a particular context, and it avoids misunderstandings and confusion when multiple models are used within the same system. The second concept
is domain model. The domain model represents
the core concepts and relationships within
the business domain. It is used to express the
business logic and its rules. It typically consists of entities (objects with identity), value objects (objects that do not have an identity, but have meaning), and aggregate roots (objects that represent a consistent whole and are responsible for maintaining invariants). An aggregate root is basically
a cluster of entities. Finally, we're going to
leverage ubiquitous language. Ubiquitous language is nothing more than a shared vocabulary that is crucial for
effective communication between domain experts
and software developers. Domain-driven design encourages the creation of a ubiquitous language that is used consistently throughout the development process, including in code and design decisions. One practical example of
DDD in action might involve a retailer company with an e commerce
platform, for example. The business domain
for this company might include concepts
such as products, orders, customers, and payments. The domain model for this domain might include entities for each of those concepts, as well as value objects for things like shipping addresses and payment methods. None of them has an identity, but they have meaning: an address or a payment method might not have an ID in the database, but they are still meaningful, so they are value objects. To implement this model for
the e commerce example, the software developer
might create a bounded context for the e-commerce platform and use the ubiquitous language of the domain to name the classes and methods in the code. They might also define aggregate roots for orders and customers to ensure that all related data is loaded and saved consistently, and they might create a service to handle processes like placing an order and processing a payment. This was just an
example of how we could apply domain-driven design to an e-commerce. For this class, we're going to use hexagonal architecture combined with DDD and, as well, CQRS. CQRS, or command query responsibility segregation, is basically an architectural pattern that separates the responsibility of reading and writing data within the system. In a traditional system, a single data model is used for both reading and writing data. This can lead to issues
with performance, scalability, and complexity
as the system grows. CQRS addresses this issue by separating the data model
into two distinct parts, a command model
for writing data, and a query model
for reading data. This separation
allows the two models to be optimized for their
respective purpose, resulting in a more effective
and scalable system. For example, a command model might be optimized for writes using techniques such as batching and asynchronous processing to handle large volumes of data. The query model
on the other hand might be optimized for reads, using techniques such as caching and denormalization
to improve performance. For this class, we're going to learn how to replace primary ports with commands and queries so that in the future, when your application needs to scale, you can separate read models from write models, having query handlers read from a specific database and your commands write to another, separate database, and you can even apply the techniques that I mentioned before for each of those.
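As a rough illustration of that read/write split, here is a minimal, self-contained sketch (not the course's code — the in-memory dictionaries merely stand in for separate write and read stores, and the event stands in for whatever mechanism keeps them in sync):

```csharp
using System;
using System.Collections.Generic;

// Write side: a command is a request to change data.
public record CreateCustomerCommand(Guid Id, string Name);

// Read side: a query is a request to retrieve data.
public record GetCustomerQuery(Guid Id);

public class WriteModel
{
    private readonly Dictionary<Guid, string> _writeStore = new();

    // Raised after a write so the read model can update its own store.
    public event Action<Guid, string>? OnWritten;

    public void Handle(CreateCustomerCommand cmd)
    {
        _writeStore[cmd.Id] = cmd.Name;       // the side effect: data is written
        OnWritten?.Invoke(cmd.Id, cmd.Name);  // e.g. propagate to the read store
    }
}

public class ReadModel
{
    // A denormalized, read-optimized copy of the data.
    private readonly Dictionary<Guid, string> _readStore = new();

    public void Apply(Guid id, string name) => _readStore[id] = name;

    public string? Handle(GetCustomerQuery query) =>
        _readStore.TryGetValue(query.Id, out var name) ? name : null;
}
```

Because each side has its own store, each can be tuned independently — exactly the property the narration describes.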
4. Class 04 - Preparing The Environment: Now you are going to prepare the development environment with the tools needed to proceed with the class. Here, I have a brand new Windows machine running with nothing installed on it. This way, I guarantee that everything needed for this class will be installed as we go. Let's open the browser and search for dotnet framework. At the time this
class was recorded, the most recent version of .NET is .NET 7. However, the knowledge provided in this class can be applied to any version of .NET, even older versions. Select the appropriate result, the dotnet.microsoft.com website. Here, you should find links for downloading .NET for Linux, Mac, and Windows. In my case, I'm using 64-bit Windows, and this is the link that I'm going to pick. After the download is completed, let's open the .exe file that we downloaded and proceed
with the installation. Next, we are going to open the browser again and now search for Visual Studio Community. This is the IDE that we are going to use for this class, but feel free to use any other tool that you're most comfortable with. The current version of Visual Studio today is the 2022. But again, any older or newer version should suffice. Select Visual Studio Community on the list and click Download. After the download is completed, let's execute the EXE to start the installation. On the workloads tab, only select the ASP.NET and web development workload and click Install. This process might
take several minutes. I will speed up the video and
come back when it's done. Great. Now the environment
is ready for coding.
5. Class 05 Creating the project edit: Now we're going to start the hands-on phase. The first step here is to
create the.net project. As mentioned before,
in this class, we're going to develop
a simple application for customer management, where we're going to
be able to create, update, delete, and
list customers. Imagine this customer management
application being one of the microservices of a big e-commerce platform that has many other services, like a payment service, a shipping service, a product management service or catalog service, et cetera. We picked the customer management service as the example for this class. However, everything that we will be applying on this service should be applicable as well to any other service that you might want to create. Let's go to Visual Studio, select Create Project, and
search for blank solution. Select this first
option and click next. Give it a name and select the appropriated folder
where you want to save it. I'll call my
solution E commerce, and I'll keep the
default folder. Now, click on Create. Now that we have
our blank solution, we are ready to start structuring the project to accommodate the hexagonal architecture application. But before that, we need to use DDD to define the bounded context of our microservice and define the ubiquitous language for that context. Let's do that in the next class.
6. Class 06 Defining the boundaries edit: Before we start coding, the first thing we need to do is define the boundaries where our customer management service will live. Defining boundaries is a very challenging task, and it's very difficult to get it right straight away. As mentioned before,
service is a part of a bigger e commerce platform with many other
components like payments, shipping, products,
and et cetera. In a real world application, this process should start by
speaking to business people, product owners and
stakeholders to understand how they name
things on their business. Calling stuff by the
right name helps to make sure your application speaks the same language as
its future users. In this example, some companies might prefer catalog rather than products for
an application that manages the list of
products it will sell, or might use the term shipping, rather than delivery,
and et cetera. This is the process of defining
the ubiquitous language. Whatever comes out of this process should be
used to name services, entities, API endpoints, classes, methods, et cetera. Let's suppose we've
done a little bit of talking and came up
with some terms. Let's define some
of the boundaries. Let's go to our solution
and highlight what we think an e commerce platform would look like in real life. Let's add folders to delineate the most common boundaries
for an e commerce, including our customer
management service. Firstly, let's create
the customer folder. Then the products
or catalog folder. Now, let's add the sales and
finally, one for shipping. In this class, we are only going to code the customer management service. However, this exercise is useful to show what boundary definition is and how important it is to separate concerns within the application. Having stuff that gets changed together live together helps to reduce coupling and increase cohesion. This way, if at some point you decide that a certain module
can be deployed separately, it will be easier to do so. You can even decide to publish that as a library, a NuGet package that can be consumed by an API or a web project, since it will be a standalone, independent component that has all the external dependencies, like databases, SMS services, email services, access to the file system, et cetera, injected through dependency injection, thanks to the hexagonal architecture that we're going to leverage.
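As a rough picture, the solution folders created in this class might look like the following. The names are the ones discussed above; a real team's ubiquitous language may differ:

```
Ecommerce.sln
├── Customers/   <- the service we will actually code in this class
├── Products/    <- or Catalog, depending on the ubiquitous language
├── Sales/
└── Shipping/
```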
7. Class 07 The domain objects edit: Now we are going to
start developing the domain objects
of the application. The domain objects hold the business logic of your application: logic that would not change whether the model is consumed by a public application, like the e-commerce store itself, or a private application, like the internal system that the company uses to manage the e-commerce. Some examples of common business objects are entities, objects with identity (or an ID), like customers, products, purchases, et cetera, and value objects, objects that have meaning but don't have an identification like an ID. Some examples of value objects are customer documents, currencies, date of purchase, et cetera. As well, we can have enums and exceptions. All of them compose the
common business objects. Now, let's go to Visual Studio and add a new subfolder called Core inside of the customers service. Now, let's add a class library into the Core folder. I'll select .NET 7; however, as I said before, you can use any past or future .NET version. Name it Domain and click Create. Let's rename the Class1 file to Customer. Let's add a couple more classes for enums, exceptions, ports, and value objects. Now, let's define the properties of our customer domain object. Initially, it's
going to have an ID, name, surname, email, and a customer document. As you can see, the document is of type CustomerDocument, which is our first value object. Value objects are immutable objects that represent simple entities whose equality is based on the values they hold rather than their identity. In this example,
customer document will hold the information of a customer identification document, which is composed of the document type, like a passport, driver's license, et cetera, and the number of the document itself. Let's create the CustomerDocument value object inside of the value objects class. Now, as you can see, it's complaining about DocumentType, which is another important domain object: in this case, an enum that will have the document types accepted by the application. Let's define the DocumentType enum inside of the Enums file. Perfect. Now, our
receiving logic code. Let's start by adding a method to determine
if an instance of customer is valid or not
based on some business logic. Let's add a method called ValidateState that will throw a domain exception when the customer document is not valid. Now, let's define the InvalidCustomerDocumentException inside of the exceptions file. Now, let's add another piece of validation that makes customer name and surname required. Otherwise, we'll throw a domain exception. Let's define the MissingRequiredInformationException inside of the exceptions file. Finally, let's add another validation to check if the email is provided or not. Otherwise, it throws another exception. Let's remove some unused usings. Finally, let's build the solution to make sure there are no compilation errors. To finalize, let's add a public method that can be called elsewhere, that will check if an instance of a customer is valid or not. If it is valid, it will return true. Otherwise, it will throw a nicely named domain exception that can be captured and translated into a message for the final users. Great, we just finished defining our first domain object that, besides being simple, uses important elements like value objects, domain enums, and domain exceptions, and can be easily unit tested.
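Putting the pieces of this class together, a minimal sketch of the domain objects could look like the following. Property names, exception messages, and the record-based value object are my assumptions from the narration; the actual course code lives on GitHub and may differ:

```csharp
using System;

// Domain enum: the document types accepted by the application.
public enum DocumentType { Passport, DriverLicense }

// Value object: a record, so equality is based on the values it holds.
public record CustomerDocument(DocumentType Type, string Number);

public class DomainException : Exception
{
    public DomainException(string message) : base(message) { }
}

public class InvalidCustomerDocumentException : DomainException
{
    public InvalidCustomerDocumentException() : base("Customer document is invalid") { }
}

public class MissingRequiredInformationException : DomainException
{
    public MissingRequiredInformationException(string field)
        : base($"Missing required information: {field}") { }
}

// Entity: has an identity (Id) plus the business rules that guard its state.
public class Customer
{
    public Guid Id { get; set; }
    public string? Name { get; set; }
    public string? Surname { get; set; }
    public string? Email { get; set; }
    public CustomerDocument? Document { get; set; }

    private void ValidateState()
    {
        if (Document is null || string.IsNullOrWhiteSpace(Document.Number))
            throw new InvalidCustomerDocumentException();
        if (string.IsNullOrWhiteSpace(Name))
            throw new MissingRequiredInformationException(nameof(Name));
        if (string.IsNullOrWhiteSpace(Surname))
            throw new MissingRequiredInformationException(nameof(Surname));
        if (string.IsNullOrWhiteSpace(Email))
            throw new MissingRequiredInformationException(nameof(Email));
    }

    // Returns true when the instance is valid; otherwise a domain exception
    // bubbles up and can be translated into a message for the final user.
    public bool IsValid()
    {
        ValidateState();
        return true;
    }
}
```

Note that nothing here references a framework or a database — that is what makes it trivially unit testable.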
8. Class 07.1: One of the great advantages of working with domain models, like we just did, is that it makes it easy to add unit tests to our code. Now, we are going to look at how we can easily cover our customer domain entity with meaningful unit tests. Before we start, let's make sure that all components within the domain layer have appropriate namespaces that tell exactly what they are. After that, we resolve the imports on all references. Let's open each class and set their namespaces, like value objects, enums, ports, exceptions, et cetera. Now let's create a test project. We start by adding a test folder, then we add a unit test project into that folder. I'll call it domain tests. I like to have all the imports in the file that I'm working on, so I will delete the Usings file and move the imports into the test file. Now I will rename the file and the class to
customer entity tests. Now we are ready to
start adding tests. I will start by adding a test that asserts that an instance of the customer entity, when having all the required properties, returns true when the method IsValid is called. The test will be called customer is valid. It starts by instantiating an instance of the customer and giving it all the required properties, like name, customer document, and email. After that, we call Assert.True passing customer.IsValid, and the test expects that it will return true. Let's fix the missing references by adding a reference to the domain project. Now we can add the using references for the customer document, value objects, and enums. It is important to mention
that what we just did was exactly what the hexagonal architecture tells us to do. The domain tests project is what depends on the domain, not the opposite. The flow of dependencies always points inwards. The domain layer should never have any dependency on UI or API technologies, frameworks, or databases. After that, we need to build the solution to make sure that there are no errors. We can already see that our test is being listed
on the test explorer. Let's run it to make
sure that it passed. Perfect. Now, let's
add another test. This test will be called should throw missing required information when name is not provided. This test basically instantiates an instance of customer; however, it does not give it any name, and when we call IsValid, the test expects a domain exception called MissingRequiredInformationException. This test helps to guarantee that the validation not only does what we expect, but also throws an exception that can be captured and handled accordingly by the application logic and its use cases. The next test will be called should throw missing
required information when surname is not provided. This test is very similar
to the previous one; however, it tests when the surname is not provided, and it expects the same domain exception to be thrown. The next test, should throw invalid email exception when email is invalid, does not pass an email when instantiating the customer, and it will assert for another type of exception. On top of that, it validates the message that the exception provides. The next test, throw invalid person
document exception when document is invalid, provides an invalid document ID, and the test expects a different type of domain exception. And finally, the test should throw invalid person document exception when document type is not provided does not provide the document type, and an invalid customer document exception is expected. Let's run them all now to
make sure they all pass. Great. All tests are
passing as expected, and we are covering the most important aspects of the domain entity and its business logic. As we could see in this class, by isolating the business logic within the customer entity, testing the business logic became so much easier and more straightforward. We added one happy-path test and many other negative tests, where bad data goes in and the right behavior from the application is asserted.
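A sketch of what two of these tests could look like, assuming the xUnit framework (the stand-in `Customer` entity below is deliberately abbreviated so the snippet is self-contained; in the real project it lives in the domain library):

```csharp
using System;
using Xunit;  // assumes the xUnit test framework; other .NET test frameworks work similarly

// Abbreviated stand-ins for the domain types under test (sketch only).
public class MissingRequiredInformationException : Exception { }

public class Customer
{
    public string? Name { get; set; }
    public string? Email { get; set; }

    public bool IsValid()
    {
        if (string.IsNullOrWhiteSpace(Name))
            throw new MissingRequiredInformationException();
        return true;
    }
}

public class CustomerEntityTests
{
    [Fact]
    public void Customer_is_valid()
    {
        // Happy path: all required properties are provided.
        var customer = new Customer { Name = "Ana", Email = "ana@example.com" };
        Assert.True(customer.IsValid());
    }

    [Fact]
    public void Should_throw_missing_required_information_when_name_is_not_provided()
    {
        // Negative path: bad data in, a well-named domain exception out.
        var customer = new Customer { Email = "ana@example.com" };
        Assert.Throws<MissingRequiredInformationException>(() => customer.IsValid());
    }
}
```

Notice that the test project references the domain, never the other way around — the flow of dependencies points inwards.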
9. Class 08 CreateCustomer command edit: Now that our application's domain model is complete, we can begin working on the use cases for
our application. Use cases represent the actions that our application
will be able to perform, such as registering a customer
or disabling a customer. These use cases also
embody business logic, but at the application level. This means that the
implementation of the use cases can vary between applications due to the different
business rules. For example, a public website
might require a customer to log in with Google and Facebook before completing their
customer information, and only allow a single
customer registration per user. In contrast, a
private website used internally by the company
might allow certain users with special permissions to bypass certain rules
such as creating a customer without
login credentials or registering multiple
customers at once, like importing a CSV file. These are examples of
application business rules. In hexagonal architecture,
there is a clear separation between application
business rules and the rest of the system. However, in this course, we are using some concepts of DDD to have an additional layer of isolation by distinguishing between application-level business rules and the domain business rules, which increases the code's reusability and reduces business logic duplication. This allows us to reuse code
from our domain objects, such as those found in
the customer class, enums, exceptions, and so on,
in multiple applications. For instance, consider
the example of a company that has both public
and private application. While the two
applications might have distinct implementations
of use case, they can still share the
same domain business logic, such as rules that
enforces the customer to be 18 and have a
valid document ID. Now that we understand
the difference between application business rules and
the domain business rules. Let's start by creating the class library for
the public website. And another one for
the private website, just so that it makes it
clear how in the real world, the example mentioned above would be structured if we were to keep both code bases on the same solution
and repository. However, in this class, we're going to implement the use cases on the
public website project. But I believe I made the concept clear by now. Note that the domain-level
business logic will live inside of the domain project, and it doesn't necessarily need to live in the same repository. You might want to have it in a separate repo, having a NuGet package published to a feed that can be consumed by both the public and private applications. In fact, this is the approach I prefer the most. The first use case
we're going to implement is
creating a customer. As mentioned before,
we're going to use CQRS to separate
commands from queries. Command query responsibility segregation, or CQRS, is an architectural pattern that separates the responsibility of writing data (a command) from reading data (a query). In a CQRS system, the write side and the read side are separated into different models, which allows for optimization of each side independently and can improve the scalability and performance of the overall system. The write model
handles commands, which are requests
to change data, an action that will cause a
side effect in the system, while a read model handles queries, which are requests to retrieve data. CQRS can be implemented using a variety of
techniques and technologies, such as event sourcing, eventual consistency, and the mediator pattern, which is the technique we're going to use in this class, using a library called MediatR. MediatR is an open-source
library that provides a simple and opinionated way of implementing the mediator
pattern in .NET applications. In the context of MediatR, requests are
represented as classes, and handlers are
classes that implement the specific interface to
handle those requests. When a request is made, it is sent to the mediator, which then routes the request to the appropriate handler. The handler processes the request
and returns a response, which is then returned
to the caller. Let's install the MediatR and MediatR.Contracts packages on
the public website project. Now, we are ready to start
coding the use case. The first part of it is
defining the command that will be sent when a
customer will be created. Let's create a file called Commands and define the CreateCustomerCommand class that implements IRequest, which is an interface from the MediatR library. IRequest expects a class for the response. Let's create another
file called Responses. Inside, we define an abstract class with the properties Success and Message. Now let's define another class called CustomerResponse that inherits from Response. This is the class that we're going to map to the mediator command. Now, let's add a property of type CustomerDto. Now we need to define
the class customer DTO. DTOs are data transfer objects. Simple objects that are used to transfer data between layers of the system in
the application. DTOs are useful to transport data between
layers of the application. Instead of transmitting
domain models, you should use DTOs instead. Since domain models might have properties
and data that you might not want to send to an external
layer of the system. When defining a DTO, you should only add
properties that you know that can be
passed across layers. Let's create a file called
DTOs and define the DTO class. For now, only add properties for name, surname, document, and email. Let's add as well a
simple static method that knows how to translate from
DTO to a domain object. To solve the reference issues, let's add a dependency on the domain project. Now, back to the response class, let's define an enum called ErrorCodes. On it, we will add
some common errors like NotFound, InvalidPersonId, InvalidEmail, et cetera. After that, we add a property called ErrorCode to the response class. This will serve us as
a way to communicate issues to the adapters on the external side
of the hexagon. This will serve
us as a contract, where every time a certain error occurs, we will be sending the same error code, so the programmers integrating with our application can intercept those error codes and handle them accordingly on their side. This will all make sense when we see it in action. Now, back to the create
customer command, pass the customer response
to the IRequest, and add the property called CustomerDto to the class. Perfect. Now our CQRS command is ready. The next step is to implement
the command handler. The command handler will be responsible for
executing the logic that needs to happen every time a create customer
command is triggered. It will grab the customer data from the CustomerDto property of the command and will save it to the database by invoking a database adapter that will be injected through dependency injection. For now, let's just define the customer command
handler class. Let's create a file called CommandHandlers, and inside of it, let's define the CreateCustomerCommandHandler class. This class should implement IRequestHandler, and it expects two type definitions: first, the command that this handler will expect, and second, the response that it sends back, which in this case are CreateCustomerCommand and CustomerResponse, respectively. Now, let's just implement the interface to create
the handle method, but for now, we will not add
any implementation for it.
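Here is a compact sketch of how the command, response, DTO, and handler skeleton described in this class could fit together with MediatR. The property names, the ErrorCodes values, and the placeholder handler body are assumptions based on the narration, not the course's final code:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;  // the MediatR NuGet package

public enum ErrorCodes { NotFound, InvalidPersonId, InvalidEmail }

// Base response: a contract shared by all use case responses.
public abstract class Response
{
    public bool Success { get; set; }
    public string? Message { get; set; }
    public ErrorCodes? ErrorCode { get; set; }
}

// Data transfer object: only the properties safe to cross layer boundaries.
public class CustomerDto
{
    public string? Name { get; set; }
    public string? Surname { get; set; }
    public string? Email { get; set; }
}

public class CustomerResponse : Response
{
    public CustomerDto? Customer { get; set; }
}

// The command carries the data needed to create a customer.
public class CreateCustomerCommand : IRequest<CustomerResponse>
{
    public CustomerDto? CustomerDto { get; set; }
}

// MediatR routes each CreateCustomerCommand to this handler.
public class CreateCustomerCommandHandler
    : IRequestHandler<CreateCustomerCommand, CustomerResponse>
{
    public Task<CustomerResponse> Handle(
        CreateCustomerCommand request, CancellationToken cancellationToken)
    {
        // Persisting through a database adapter comes in a later class;
        // for now the handler just echoes the DTO back.
        return Task.FromResult(new CustomerResponse
        {
            Success = true,
            Customer = request.CustomerDto
        });
    }
}
```

Because the handler is a plain class behind an interface, it can be constructed and exercised directly in a unit test without spinning up MediatR's pipeline.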
10. Class 09 GetCustomer query edit: Let's now implement a query to retrieve data
from a customer. As previously stated, CQRS separates the operations of writing and reading from the database, allowing us to optimize commands for writes and queries for reads. For example, by using a read database where heavy queries can be pre-processed and results stored, a query would only need to fetch data by ID, and the whole data would be retrieved in a matter of milliseconds. Although we don't have heavy
queries in our example, it's easy to see how this separation can benefit
a real-world application. So on our public
website project, let's create a file
called Queries.cs. Inside of it, similarly to the command, we will define a class that implements the IRequest interface, passing the CustomerResponse type. This class will have only one property: the ID of the customer being queried. Now, let's create another file called QueryHandlers.cs and define a class called GetCustomerQueryHandler. This class, like the command handler, implements the IRequestHandler interface, passing the input type GetCustomerQuery and the output type CustomerResponse. Finally, let's implement the Handle method
from the interface. With these steps, we now
have our command query along with the respective handlers
ready to receive code. Oh.
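The query and its handler described above can be sketched as follows, assuming MediatR and the CustomerResponse type from the command section:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Queries.cs — the only property is the ID of the customer being queried.
public class GetCustomerQuery : IRequest<CustomerResponse>
{
    public int Id { get; set; }
}

// QueryHandlers.cs — same IRequestHandler pattern as the command handler.
public class GetCustomerQueryHandler
    : IRequestHandler<GetCustomerQuery, CustomerResponse>
{
    public Task<CustomerResponse> Handle(
        GetCustomerQuery request, CancellationToken cancellationToken)
    {
        // Will read from the database adapter once it exists.
        throw new NotImplementedException();
    }
}
```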
11. Class 10 Preparing the database edit: Let's connect our
application with the database. One of the greatest advantages of hexagonal architecture is that the database is just a
detail in the architecture. The architecture itself is
not centered around the DB. That being said, we just need to determine the contracts
or ports that the DB adapter will
need to implement in order to have access to a
certain type of storage. When defining the ports, we need to make
sure not to couple with the technology, like
creating classes or methods with names
like "execute SQL for something",
"perform update SQL", or even "query customer
document", et cetera. By exposing words like
SQL or document in the names of our classes, you're leaking the details of what is behind the port. Rather, prefer names like GetCustomer
or UpdateCustomer, or even IProvideCustomerInformation, et cetera. This way, if we migrate from a SQL DB to a document
DB or a file storage, the naming still makes
sense for the developers. This plug-and-play
characteristic of hexagonal and clean
architecture is by far one of the greatest advantages of this type of architecture. However, it requires
discipline and attention to detail when it comes to naming and structuring
the solution. Let's create a file called Ports.cs and define an interface
called IProvideCustomerName, with a method called
GetCustomerNameAsync. Now let's create the
adapter for that port. We start by creating the
Adapters folder, and inside of it, we create a new class
library called Data. For this example, we'll be
using Entity Framework. However, feel free to use any other database technology
that you might prefer. Let's open the NuGet Package Manager and search for Microsoft
Entity Framework, and we'll be installing the Design, SqlServer,
and Tools packages. Now we are going to create the customer DB context
inside of the Data project, and then we define a
CustomerDbContext class that inherits from DbContext. The constructor accepts
a DbContextOptions object with a generic
type parameter of CustomerDbContext
as an argument. The constructor passes
the options to the base class. Finally, add the DbSet
property of type Customer. Now, to solve the
reference issues, add references to the Domain and PublicWebsite projects. If we had a connection
string wired up and were to try to migrate
the DB at this point, we would get an error saying that it couldn't set the value of the customer document property because it doesn't have an ID. This is expected. Entity
Framework doesn't know that this property actually
represents a value object. Value objects do not
represent entities, so it doesn't need an ID. It needs to be treated as a simple property of
the customer class. To fix it, we're going to create a customer configuration
file that will describe to Entity
Framework what to do. Inside of the Data project, let's create the CustomerConfiguration.cs
file and define the CustomerConfiguration
class that implements the IEntityTypeConfiguration
interface and passes the Customer
entity as the generic type. Now we define the
Configure method, which receives a builder
as a parameter. In here, we map the value object properties
to the customer entity. Finally, on our DB context, we override the OnModelCreating method and map this configuration to the Entity Framework builder. Perfect. Now you're ready to set up the connection string
and migrate the DB.
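The DbContext and value object configuration described above might be sketched like this. It assumes EF Core, a Customer domain entity with a Document value object, and uses OwnsOne as one common way to map a value object to columns of the owning table; the property names are illustrative:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class CustomerDbContext : DbContext
{
    public CustomerDbContext(DbContextOptions<CustomerDbContext> options)
        : base(options) { }

    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Map the configuration that describes the value object.
        modelBuilder.ApplyConfiguration(new CustomerConfiguration());
    }
}

public class CustomerConfiguration : IEntityTypeConfiguration<Customer>
{
    public void Configure(EntityTypeBuilder<Customer> builder)
    {
        // Treat the document value object as simple columns of the
        // Customer table instead of a separate entity with its own ID.
        builder.OwnsOne(c => c.Document);
    }
}
```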
12. Class 10.1: Before we can migrate the DB using a preconfigured
connection string, we need to set up the
dependency injection. Here is where
most developers make a common mistake that breaks hexagonal architecture and clean architecture
a little bit. Most developers put the configuration of the database and
dependency injection into the API project or the
ASP.NET web project. This is a mistake. In either hexagonal architecture
or clean architecture, your API or web
project adapter should not be aware of or dependent
on another adapter. In clean architecture, your API project is on the
infrastructure layer, just like your DB component, and they should not be
aware of each other. The direction of dependency
should always point inwards. I know that it might sound
a little bit purist, but the benefit
that it brings to the table is that you can swap your DB layer at any time without breaking the consumers
of your application. I know that swapping
databases after the system is in
production is rare. But think about
another example of an adapter, like an email sender, or an
external API that your system consumes to
check for currency rates. These are much more
common examples of adapters, and we might change
them once in a while. If a consumer adapter, like a web application, is
directly coupled to an adapter that is
being replaced, you're causing an impact, because they will need to update their
dependency injection, et cetera, especially when
it's another team that takes care of the web
API or web application. They might have different
ways to accommodate changes. They might be in a
different sprint, in a different context, and might not be able to
do it straight away, since it requires tests, a
production release, et cetera. Coupling can go way,
way beyond just code. It's much more
complex than that. This is why we want
to manage it well and wisely.
How do we fix it? Well, to fix it, we're going to use an
approach that Uncle Bob mentions in Chapter 26 of
his book, Clean Architecture. He calls it the
Main component, the dirtiest of all
the dirty components. This is where we will create all the factories, strategies, and other global
facilities, and then hand control over to the high-level
abstractions of the system. This is where we'll
connect the DB, set up credentials
for an email sender, set up S3 buckets, wire the dependency injection, and
inject it into the system. This way, we isolate adapter dependencies and
keep everything clean. If we migrate a DB adapter
or an email sender adapter, we don't need the web
API team to do anything. We just go to this
dirty component and we rewire the same interface to the new DB or
email sender adapter. It's as simple as that.
Isn't that awesome? Think of this component
as a plugin. Being a plugin, you can
have multiples of it, with different setups,
different configurations, and decide which plugin you want to use depending
on the scenario. Let's start by creating a console application
called Start. This is going to be the
new starting point of our solution, instead of the web API or the website that might
consume this application. Again, adapters in
hexagonal architecture are just small, temporary,
and replaceable details of the overall architecture. We don't want them to be the
orchestrators of the solution. On the Program.cs of
the Start project, we will be configuring the
DB from a connection string, assembling the
dependency injection, and injecting it into whatever
consumer wants to use it. Let's start by creating
the appsettings file, and let's add the
connection string to a local DB that
doesn't exist yet. After that, let's add the NuGet packages
Microsoft.Extensions.Configuration and
Microsoft.Extensions.Configuration.Json to the project. Now, on the Program.cs of
the Start application, we load the appsettings into a variable called builder of type
ConfigurationBuilder. Now, we read the connection
string from the settings, referencing it by name. The method Configuration.GetValue
is not available. To solve that, let's install
one more package called
Microsoft.Extensions.Configuration.Binder. After that's done, we can
create the Web API project. For this example,
I will just create an ASP.NET Web API project, and I'll call it WebApi, and I will check the option to not use top-level statements. Once the web API is created, I will define a class called ApiConfiguration inside
of the Program.cs file, and I'll define a constructor
that receives an array of strings as arguments and an
Action object called options. Here, options will be the dependency injection scope that will come from
the Start project. This will make it possible to reuse this web API
in different ways, where the list of
adapters, connection strings, third-party services, and everything else will be injected
here instead of hard-coded. Now, inside of the constructor, we instantiate the web
application builder, passing the application name. And after that, we
invoke the options passed in, using the builder
services as arguments. After that, we add
the controllers. Now we define a method
called RunAsync, and we move the rest
of the code into it. This way, we expose a
method that can be called from outside and that will start the
web application for us. Let's go back to the Program.cs of the
console application and create a new instance of this new ApiConfiguration
class, passing the args and an
options object that includes the DB context that we created previously, injected
with the connection string. As we can see, it is not able to read the
connection string. That is because we need
to set the appsettings file to be copied to
the output directory. Let's right-click on
the settings file, go to the properties, and set it to copy. Now let's call RunAsync from
the ApiConfiguration. That's done.
We have two ways of starting the web
API application: one from the web API itself, and another one from
the Start console app. We don't want that. We want
the web API to be consumed as a library, to force
it to be injected with all the configurations
that it needs to function. For that, let's edit
the WebApi .csproj file and add an OutputType tag
with Library as the value. Perfect. Now, let's run the application to see
if it works as expected. Test again and see what happens. If, when accessing
your application, you're seeing the
Microsoft ASP.NET "failed to determine
the HTTPS port for redirect" error, or getting a 404 when accessing
one of the endpoints, you might want to try this: add these two lines, setting the current directory
to the base directory, and passing the
application name the same as your web
application name, and it might just
solve the problem. Because the application loses
the reference to where the launch settings file is located,
it cannot find it, so it doesn't know how
to deal with the ports. Now you access HTTP on port 5000 and
your problem is resolved.
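The two lines mentioned above might look like this inside ApiConfiguration's constructor. This is a sketch under the assumption that the web project is named "WebApi"; adjust the name to match yours:

```csharp
using System;
using System.IO;
using Microsoft.AspNetCore.Builder;

// Point the process at the build output so ASP.NET can locate
// launchSettings and resolve the configured ports.
Directory.SetCurrentDirectory(AppContext.BaseDirectory);

// Pass the application name explicitly, matching the web project name.
var builder = WebApplication.CreateBuilder(new WebApplicationOptions
{
    ApplicationName = "WebApi"
});
```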
13. Class 11 Migrating the DB edit: Now we're ready
to migrate the DB and see the domain models
being created on the database. Let's start by editing
the appsettings.json and giving it
a proper DB name. Here, I'm going to
call it Customers. Now, we need to install
the package Microsoft.EntityFrameworkCore.Design
on the Start project, so it knows how to run the
migration by configuring the DB context that lives on the Data project using the
appsettings.json file. Here, it's important
to highlight that the dependency is installed
on the Start project, not on the web API, keeping it clean and decoupled from external frameworks
and libraries. Remember that the
Start project is the dirtiest component of the
architecture. It is okay. We want to protect the core of the application, but at the same time, we don't want one adapter
being aware of another, or having to install
dependencies that are necessary for
another adapter, like Entity Framework, which is
required for the data adapter. Now, on the Package
Manager Console tab, let's run a command to create
the initial migration. Here, the parameter
-Project Data designates the migration files to live on the Data project, which is where they belong. All the intelligence
related to the DB should live in this project,
including the schema. Remember, we might want to swap to a completely different
DB adapter that might have a different
schema, or even be schemaless, like a
document-oriented database. As we can see, the
Migrations folder was created inside
of the Data project, and the InitialCreate.cs
file was created describing the
initial schema of the DB, including the customer table.
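The migration command run above might look like the following, shown both as a Package Manager Console command and as the equivalent dotnet-ef CLI call; the migration name InitialCreate matches the generated file, and the project names are the ones used in this course:

```shell
# Package Manager Console (Visual Studio):
#   Add-Migration InitialCreate -Project Data -StartupProject Start

# Equivalent dotnet-ef CLI, run from the solution folder:
dotnet ef migrations add InitialCreate --project Data --startup-project Start
```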
14. Class 12 Summary overview edit: Now we are in good shape to proceed to implement
the use cases. So far, we've structured the project in a
way that the core of the application is completely decoupled from the
external components. We have our web API
completely decoupled from the other adapters and from libraries and frameworks that it doesn't really
need to depend on, making it much simpler
to swap adapters. The Start component here
is a crucial element, since it is the entry
point of the application, being responsible for instantiating the DB context with the configurations
from the appsettings, and for preparing the
service collection with all the adapters that the
application needs to function, then using it to instantiate the web API project
just before running it. Imagine a scenario with
multiple adapters, like an SMS sender that uses
the Twilio API to send SMSs, and an email sender that
just sends emails with Gmail. The Start application
will be responsible for configuring the
database, email sender, and the SMS adapter before
starting the web API, which would just be injected
with the service collection, then started as expected. Notice the arrows coming from
the adapters into the core, highlighting that
the dependencies point from the adapters into the core, so that the adapters know
what interfaces or contracts are there
to be implemented. All the rules are
dictated by the core app. For the same scenario, imagine that you want to run another instance of your
e-commerce application for a completely different
company with the same features and
same source code. However, the second company
wants to send emails using Hotmail and SMSs
using BulkSMS instead, and even to use Dapper as the DB framework,
connecting to an Oracle DB. Then, instead of implementing the same functionalities with the different libraries
on the same .NET project, we're going to
have a new adapter implemented
individually and then have the Start
application pick which adapters it wants
to use for that client. Then, from the appsettings, it reads the configuration, instantiates the adapters,
then has them injected into the web application just before it gets started. Notice how the plug-and-play aspect of it
is much clearer now, and how we can see all the advantages
that it brings to us. Here, I prepared a simple
implementation of the diagram shown. Inside Ports, we
have two extra ports defined, ISmsService
and IEmailService, both with methods that
would be called by the core in order to send
an SMS or email. In the Adapters folder, we have a .NET project
for each adapter. It doesn't need to
be an entire .NET project if you don't want it to be. It could just be a class
in a shared .NET project. However, in a real-world
application, the advantage of having them in separate projects
is that you can publish each as a library and have it
consumed individually, even with its own development
and release process, completely decoupled
from the bigger project. Another advantage
is that you only need to install NuGet
packages like Twilio, Entity Framework, or
Dapper on each .NET project separately, leaving fewer
dependencies on each project and facilitating the overall
management of the versions. In the Adapters folder, we can see the implementation
of the Gmail email sender. It has a class called EmailSenderService that implements the
IEmailService port, and a SendEmailAsync method with the logic that knows how to
send an email using Gmail. The appsettings.json has the
configuration object that is used by the Start project to instantiate the
email sender adapter. The same goes for the Twilio SMS sender adapter, which has a class called SmsSenderService that implements the
ISmsService interface. The Start project reads the Twilio configuration
from the appsettings and instantiates the adapter before injecting it
into the web API. This is just a fake implementation
of these two services. It's not meant for production. It is only for educational
purposes and serves as a practical demonstration of the example from the diagram. All the source code
for this example is on a separate branch called
twilio-gmail-adapters. On another branch, called
bulksms-hotmail, we can see the
other example, where we run the same source code in a separate instance
that wants to use other services to send
SMSs and emails. Notice that here we have a second Start project that is responsible
for organizing all of that on the dependency
injection side and has the appropriate
appsettings.json with the right config. Now, in the Adapters folder, we can see the
Hotmail email sender and the BulkSMS
sender adapters, which, just like in the
previous example, implement the
interfaces dictated by the core, with all the library dependencies installed on them directly. Let's run this second Start project to see it
creating the instances of each adapter and injecting them into the web API
before running it. This is how we can reuse all the business logic and
presentation layers to run multiple versions of our
solution without having to modify the most important
parts of our system, while preserving the
presentation layer, which here is the web API, from having to know or depend on any other adapters
or dependencies that have nothing to do with
presenting the information. Notice that the two Start projects only depend on the adapters that
they need to know about, which are the ones required for the application to function. This reduces build time and unnecessary dependencies, and keeps it cleaner and
easier to maintain.
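The two ports and one of the adapters described above can be sketched as follows. The interface and class names match the ones mentioned in the diagram discussion, but the method signatures are assumptions:

```csharp
using System.Threading.Tasks;

// Ports — defined in the core. Adapters depend on these interfaces,
// never the other way around.
public interface IEmailService
{
    Task SendEmailAsync(string to, string subject, string body);
}

public interface ISmsService
{
    Task SendSmsAsync(string phoneNumber, string message);
}

// Adapter — lives in its own .NET project with its own NuGet
// dependencies (e.g., a Gmail or Twilio client library).
public class EmailSenderService : IEmailService
{
    public Task SendEmailAsync(string to, string subject, string body)
    {
        // Fake implementation for demonstration only: a real adapter
        // would call the provider's API here.
        return Task.CompletedTask;
    }
}
```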
15. Class 13 Developing the first UseCase CreateCustomer using TDD edit: Okay. Now back to
the master branch, cleaned of the adapters
used in the past video, which we were using to explain the plug-and-play aspect
of this architecture. Let's start working on the first real use case
of the application, which is the create customer. Let's imagine the
following scenario, or the following user story. The user of the system wants
to create a new customer. They open the create
user screen, click New, and the form opens up with the following fields:
customer name, last name, email, document, date of birth, and so on. All fields are required. Name and last name have to be at least two characters long. The email should be
a valid email. The document should be
of type passport, driver's license, or
birth certificate. However, if the user
is older than 18, it cannot be a birth certificate. Once the customer is created
and saved to the DB, its ID is returned. Once the user story is defined, we can start coding. Here is a great
opportunity for us to use test-driven
development, or TDD. Test-driven development is a software development
process where developers write tests for their code before actually
writing the code itself. This approach to development has several benefits for
software design. First, it improves quality. By writing tests first, developers can verify
that their code meets the desired requirements
and functions as expected. This helps to catch bugs
and other issues early on, leading to a higher-quality
final product. Second, it increases confidence. TDD provides developers
with a safety net that they can use
to make changes to their code without breaking
the existing functionality. This gives them the
confidence to make changes and refactor
their code with ease. Third, better design. TDD encourages developers
to write modular, flexible, and reusable code. By writing the tests first, they can see the interactions
between different parts of the code and identify potential
design problems early on. Fourth, faster development. By catching issues early in
the development process, TDD saves time in the
long run by avoiding the need for
debugging and fixing problems later in the
development cycle. Fifth, collaboration. TDD can be beneficial for collaboration and communication
between team members. Tests serve as a clear
and concise specification of what the code should do, making it easy for
other developers to understand and to
contribute to it. All of those
benefits being said, let's start by creating a test
project for the public website, since these tests will assert the use case
functionalities. Inside of the test folder, let's create an xUnit project
called PublicWebsiteTests. I'll get rid of the default
global usings and rename my test file to
CreateCustomerCommandTests, since we're going
to be targeting the create customer command
created previously. The actual code
doesn't exist yet. TDD allows us to
write code against a theoretical, ideal version of what we expect
that interface to be. This is great because we don't
get attached to the implementation
details, because they just don't exist yet. This way, we focus on the
abstraction and on the contract that we want to
establish between the consumer and
the application. Let's create the first test
to cover the happy path. I will name it
ShouldCreateCustomer. Then let's create an instance of CreateCustomerCommandHandler
and CreateCustomerCommand, and add the reference to
the public website project. Now, let's prepare the customer DTO with all required data. That being done, let's call
Handle on the command handler, passing the command and assigning the result to a
variable called result. Now, for the assert part, we expect
success to be true, the message to be "customer
created successfully", and the returned customer
DTO to have an ID. Some people don't like having multiple asserts in
the same unit test. Here, for educational purposes, I'll have more than one at once, but feel free to split them into multiple tests if you like. Our command handler
still needs to be injected with an adapter
to access the DB. We created an interface before, IProvideCustomerName. Right beside it, I will create another one called
ICreateCustomer that will serve as a
port to be implemented by the adapters to provide
this functionality. Our unit test doesn't
need to know about the DB, or open a connection in order
to instantiate the adapter, so we're going to mock it. A mock
in unit testing is an object that simulates the behavior of a real object in
a controlled way. Mocks are used to
isolate the unit being tested and remove dependencies on other parts of the system, allowing the test to focus on a single, well-defined unit of functionality or
unit of behavior. For example, if a class
that you're testing makes a call to a database
or a web service, you can use a mock object
to simulate the database or web service instead
of relying on the real one. The mock object can return
a predetermined response, allowing us to test how
the code behaves in various scenarios without
having to actually connect to the database or web service. For that, we're going
to use a library called Moq that
helps us with this. Let's install the Moq library
into the test project. Now we're going to
create a mock version of our newly created
port, or interface. Let's define a customer
mock object using Moq. After that, we set it
up, saying that when we call CreateCustomerAsync
with the customer, it returns the number one. This basically abstracts
having to go to the DB, assuming that if you give
it a valid customer, it will save it to the DB and return a
valid integer ID. With this, we can validate that our command will grab that ID, assign it to
the customer DTO, and return it as
part of the response, which is a requirement
in this context, since the caller of
the API needs to know what ID was
assigned to the record. This way, we can validate the entire thing without
having to really go to the DB. Now, let's pass the created mock to the constructor of
the handler object. Perfect. Our test is ready. However, as is
expected in TDD, it will fail, because obviously
it is not implemented yet. The next step is
to make it work. On our create customer
command handler, let's write the code to
translate the DTO into a domain object and run the appropriate
validations against it. After that, we call CreateCustomerAsync from the
ICreateCustomer interface created previously and assign the ID returned from the database to
the customer object. If everything went okay, we assemble a new
customer response, setting success to true, giving it a nice message,
and assigning data with the DTO version of
the updated customer object. For this, let's create
a static method called MapToDto inside
of the customer DTO. This method will only convert from the domain
object into the DTO. It is one place that knows how
to perform this operation, it is easy to write
unit tests against it, and I usually like
this approach. However, feel free to use AutoMapper or any other library that you might be accustomed to. I will convert the domain
conversion method as well into a static method and do the appropriate
refactoring for it. This way, the customer class
will provide both ways of converting the models without needing an
instance of the object. Now we should be able to run the test and see it passing. Now, let's write tests to assert when things do not
go into the happy path. For example, when the domain
model validation throws an InvalidCustomerDocumentException
or a MissingRequiredInformationException,
and so on. Remember that now we are writing unit tests for the use cases
of the application, considering that they are consumed through an agreed
contract or interface, which in this case is the
create customer command. When writing the tests first, we can dictate how best we would like to
interact with it. It gives us the
opportunity to model things like the
inputs and outputs, including what should happen
when things go wrong, like when the user tries
to create a record with an invalid document number
or an invalid email. Let's start writing the first negative
test, called
ShouldReturnInvalidPersonIdWhenDocumentIsNotValid. Just like in the previous test, we arrange it by creating a mock version of the
ICreateCustomer interface. This time, we don't
need to set up anything, since the
validation of the customer happens before any method of
this interface is called. Then we prepare a customer DTO with all the data in
the correct form, except for the
customer document. In the act part of the test, we send the command and save
the result into a variable. After that, we only need to
assert that success is false, the correct message
was returned, and the error code is
what we expect, which in this case is InvalidPersonId. As expected in TDD, this will now fail, because the implementation
is not ready yet. Now, let's make it work. For this, in the
command handler, I like to wrap the call to the DB in a
try-catch block to capture possible IO or DB
exceptions, as well as to catch the domain
exceptions that will be thrown by the customer
domain model. This way, we can catch them
specifically, without having a single, generic, big catch block where we don't really
know what's going on. First, I will capture
InvalidCustomerDocumentException, and I will return a
nice customer response appropriately mapped to the correct error
code and message. After that, a generic
exception block in case something unexpected happens. Now, let's run the test and
make sure it works correctly. After that, let's repeat the same process for the
other two exceptions, MissingRequiredInformationException
and InvalidEmailException. Perfect. Now we've covered
not only the happy path, but also some of
the non-happy paths, guaranteeing that the
contract established between the caller and our application
will be kept and respected. Feel free to add as
many other tests as you like, covering
the remaining scenarios. But at this stage, our command should be ready to be
consumed by the API layer.
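The happy-path test walked through above might be sketched like this, assuming xUnit, Moq, and the types built in this class; the DTO property names and the exact success message are assumptions:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Moq;
using Xunit;

public class CreateCustomerCommandTests
{
    [Fact]
    public async Task ShouldCreateCustomer()
    {
        // Arrange: mock the ICreateCustomer port so the test never
        // touches a real database; a valid customer returns ID 1.
        var customerMock = new Mock<ICreateCustomer>();
        customerMock
            .Setup(m => m.CreateCustomerAsync(It.IsAny<Customer>()))
            .ReturnsAsync(1);

        var handler = new CreateCustomerCommandHandler(customerMock.Object);
        var command = new CreateCustomerCommand
        {
            CustomerDto = new CustomerDto { /* all required data here */ }
        };

        // Act
        var result = await handler.Handle(command, CancellationToken.None);

        // Assert: success flag, message, and the assigned ID.
        Assert.True(result.Success);
        Assert.Equal("Customer created successfully", result.Message);
        Assert.True(result.Data.Id > 0);
    }
}
```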
16. Class 13.1: Now that our command and command the handler are
covered by unit tests, let's use it for the most
important adapter that we have so far on our
application, the web API. But first, let's take
a look at something. In pure hexagonal architecture, we would not have a command and command the handler
per se, but instead, a simple interface
that would be injected into the web API and
would be implemented by a primary adapter
leaving inside of the public website project on the core of the application. However, in this course,
as mentioned previously, we are leveraging
a little bit of CQRS to separate
commands from queries. Instead of having a primary
adapt conversion request into something that the
domain can understand. We are translating
the request into a command and passing that
command to a command handler. Both approaches are valid and it's up to you
what you want to do. Remember, the goal here with CQRS is to give you
the ability to have commands writing
to one specific DB that's configuring
optimize for rights, and you have your
queries reading from another DB like an CCO DB with the projections of
your aggregates or even at a de normalized
relational database. We are not covering
here in this course, but it will give you all
the tools and techniques to achieve that on a
real world project in case you're ever needed. The command on the
command handler interface also services as
a contract on how any adapter consuming our
application can create a customer and what it
can expect as return. Let's start connecting
the dots and having our API to send create customer commands in order
to create new customers. The first step is to create an API controller for customers. For this, let's
create a file called customer Controller on the
web API Controller folder. Let's delete the
weather forecast file that cams by the foe when
we created the web API. Next, we inherit from
controller base and decorated with API controller
and Route type controller. Then I will delete
the def controller, and I will define a controller
type HTP posted customer that will receive a customer
DTO as a parameter. At this point, remember to add the reference from the web
BPI to the public website. After that, I will install
the mediator library into the web project since it would be needed to
send the commands. L et's define a simple response. We can do a quick test to see if our end point is working. Let's open postmen or any other API clients
that you might prefer, and let's do a simple post call against the customer end point. Passing a JC object representing a customer to see if it
hits the break point. Perfect. Everything seems
to be working as expected. Now, let's convert the
request into a command. First, create a constructor on the controller and
inject i mediator. Inside of the controller, create an instance
of the command. Call mediator d send
and pass the command. Here, we can also capture the different specific
types of responses and decide what type of GGP code and message
we want to return. See how I individually capture invalid person ID and missing required
information errors. Change the message that he is
the user for the first one. Here, I could call a
translation mechanism for exam that would return the message for the user on
his own language. Now, on the data project, create a secondary
adapter called customer creator that implements the interface I create customer. The interface that my
domain imposes upon any adapter that intends to save or read data from the DB. This way, we isolate our domain from whatever
change happens DBs, add the reference to
the public website. Now, let's implement
the interface. Here, we define a
constructor that is injected with the
customer DV context. Finally, we create a method call create customer A sync that adds the object to the customer
context and then gs. Perfect. Now, we've implemented all the necessary primary
and secondary adapters. Let's now wire up the dependency injection. On the startup module, open Program.cs and add the following lines. Install MediatR, if necessary. Finally, add a reference to the public website. Let's make the same modification on our test, since we are now returning the entire updated object instead of the ID. Now let's go back to our command handler and make sure that we capture the updated object being returned. Now, let's test it with Postman. Let's make an API call and check if the data is successfully saved to the DB. Perfect. Here we see that the object was successfully
saved. It has the ID 3. If we go back to Visual Studio, open the customer table, and click View Data, we see that the new customer was successfully saved to the database. We can also test some edge cases, like when we don't give it a name or some of the other required information. We see that the message is successfully parsed: it has an error code and a message, and it says success is false. We can do the same for the document ID: "Please verify that your document is correct." This way, we guarantee that we not only validate the business logic within the domain, but also parse it correctly and decide which error message and which HTTP code we want to return. As we can see here, it returns a Bad Request, status 400, which is
exactly what we want. Perfect. Now we've successfully converted a request into a domain command and passed it to the command handler. The command handler pretty much didn't change. The only change we made, which was intentional, was that instead of returning an ID, we return the entire updated object. We could just go with what we had before, but I think this approach is simpler and makes more sense. We made the change here, updated the test, and everything is still working fine.
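The flow just described, where the controller receives a DTO, builds a command, sends it through MediatR, and maps domain error codes to HTTP responses, could be sketched roughly like this. All names here (CreateCustomerCommand, CustomerDto, ErrorCode, and the specific error values) are assumptions based on the transcript, not the exact course code:

```csharp
// Hypothetical sketch of the primary adapter (Web API controller) described above.
using MediatR;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/customer")]
public class CustomerController : ControllerBase
{
    private readonly IMediator _mediator;

    public CustomerController(IMediator mediator) => _mediator = mediator;

    [HttpPost]
    public async Task<IActionResult> Post([FromBody] CustomerDto customer)
    {
        // Convert the incoming DTO into a domain command and send it via MediatR.
        var command = new CreateCustomerCommand { Customer = customer };
        var response = await _mediator.Send(command);

        // Map specific domain error codes to HTTP responses. A translation
        // mechanism could be plugged in here to localize the messages.
        if (!response.Success)
        {
            return response.ErrorCode switch
            {
                ErrorCode.InvalidPersonId =>
                    BadRequest("Please verify that your document is correct."),
                ErrorCode.MissingRequiredInformation =>
                    BadRequest("Required information is missing."),
                _ => StatusCode(500, response.Message)
            };
        }

        // Returns the entire updated object, including the generated ID.
        return Ok(response);
    }
}
```

Note how the controller never touches the domain directly; it only translates between HTTP and the command/response types the application exposes.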
17. Class 14 Queries: So now we are going
to develop our query. In CQRS, queries are used only to consult the database. Queries should not have an effect on the DB or on the entities that they read. Usually, CQRS query handlers are injected with a concrete implementation of the DB access layer that is connected to the DB and optimized for reads. That could be projections of your aggregates, a caching DB, or even a denormalized relational database. The whole idea here is to disconnect reads from writes, and, if needed, you can optimize where each side fetches data. Let's start by defining an HTTP GET endpoint on
our customer controller. Here, I will create a new method and decorate it with HttpGet. Also, I should now decorate the previous method with HttpPost. Inside our method, I instantiate a GetCustomerQuery and pass the ID that was given in the request. After that, I call the mediator to send the query to a handler. Now, let's implement the Handle method of our query handler. First, we define a constructor that is injected with the interface called IProvideCustomerData. Then, on our Handle method, let's define a try-catch block, like we did on our command handler. Within it, we're going to call the get customer data method, passing the ID coming from the query. Here, we store the customer returned from the DB and assemble a customer response message, passing the DTO back to
the primary adapter. Finally, on our exception block, we can capture the different types of exceptions, if that's the case. But here, I'll only have a simple catch returning a regular customer response with a simple message. If you want, you can map all of the other exception types, just like we did on the command handler. Since this is a generic exception, I will define an Unknown error code, which we didn't have before, inside of our error code object. Now, let's open the ports file and define the IProvideCustomerData interface with the GetCustomerData method. Finally, on our data project, let's create a customer
data provider that will be responsible for
implementing this interface. Just like on our other adapter, we define a constructor, inject the DB context, and implement the interface. Here is where you would inject a different DB context, connected to a different DB, if you want your queries fetching from a different database. Notice how it is all very simple now, and your domain doesn't need to know anything about it. Great. The final step is to wire up which query handler is responsible for handling this type of query. For that, let's go to the Program.cs file on the startup application and add an extra dependency. Perfect. Now we are ready to test. Let's run our API and make
a GET call from Postman. As we can see, the response is coming back, however, without the nested customer data. To fix that, we need to add a small configuration to our Program.cs so it knows how to properly serialize objects with nested objects. Perfect. Now let's test it again. As you can see, now our response is a fully comprehensive object that not only returns the customer data, but also a response with a proper message, status, and everything else that a front end can rely on.
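The read side just described could be sketched as follows. The type names (GetCustomerQuery, CustomerResponse, IProvideCustomerData, ErrorCode.Unknown) follow the transcript's wording, but the exact signatures are assumptions:

```csharp
// Hypothetical sketch of the CQRS read side described above.
using MediatR;

public record GetCustomerQuery(int CustomerId) : IRequest<CustomerResponse>;

public class GetCustomerQueryHandler
    : IRequestHandler<GetCustomerQuery, CustomerResponse>
{
    private readonly IProvideCustomerData _customerDataProvider;

    public GetCustomerQueryHandler(IProvideCustomerData customerDataProvider)
        => _customerDataProvider = customerDataProvider;

    public async Task<CustomerResponse> Handle(GetCustomerQuery query,
                                               CancellationToken ct)
    {
        try
        {
            // Fetch the customer from the read-optimized data source and wrap
            // it in a response message for the primary adapter.
            var customer =
                await _customerDataProvider.GetCustomerData(query.CustomerId);
            return new CustomerResponse { Customer = customer, Success = true };
        }
        catch (Exception)
        {
            // A single generic catch, mapped to the new "Unknown" error code.
            return new CustomerResponse
            {
                Success = false,
                ErrorCode = ErrorCode.Unknown
            };
        }
    }
}
```

Because the handler only depends on the IProvideCustomerData port, the concrete provider can be swapped for one backed by a projection, a cache, or a denormalized read database without touching the domain.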
18. Class 15 Authentication and Authorization: Now I want to demonstrate how hexagonal architecture deals with authentication, authorization, and permissions in general. If you think about it, the inside part of your hexagon does not care how you authenticate a user or how you get his permissions. All it cares about is that you tell it what permissions the authenticated user has, and the rest will be handled by the application rules that live inside the core of the application. I prepared a simple demo on our .NET application
that provides a way for the user to authenticate using a user name and password. The application will return a bearer token that should be used to call the protected endpoints. The bearer token, when generated, will have all the information about roles and permissions that the user has. This information will be passed to the hexagon. Here, once more, we need to pay attention to how we separate responsibilities in our application so that we do not couple the hexagon with external dependencies. Before we take a look at the
code, a quick disclaimer: this is only for demonstration purposes. The authentication and authorization mechanisms that I implemented here are not meant for production. In fact, I don't recommend that anyone implement their own authorization mechanisms, since there are plenty of ready-to-use solutions out there that specialize in this. The one here has only the bare minimum we need to understand how to separate the responsibilities of the hexagonal architecture. The concepts that I will show here will be exactly the same, whatever solution you choose for authentication
and authorization. Everything starts on this auth controller. It provides a login endpoint that expects a login DTO to be passed, containing the user name and password. The controller obviously lives on the web application. It will be injected with an auth provider adapter that is responsible for checking whether the user name and password are correct or not, and then issuing the token. Here we are not breaking the rule about adapters depending
on another adapter, since the web application only knows about the IAuthProvider interface that is defined on the public website. In other words, the public website is imposing a contract on whoever wants to consume it, saying: if you want to consume the public website, this is the contract you should follow in order to handle authentication. Also, this contract tells the adapter that we implement what it should obey. Here, we can see the
interface is being implemented by an adapter called SimpleAuthProvider. This provider orchestrates validating the credentials and generating the token. It first uses a user provider that gets injected via dependency injection to check whether the credentials are correct or not. Here, I'm only hardcoding the user name and password, but you could easily fetch them from the DB, or use an OAuth server that provides an API, for example. The implementation details are not important. What is important is that we check the credentials, and if they are correct, we return a response object containing the user object. After that, we read
the appsettings to get the JWT key to generate the bearer token. During this process, we add permission and role claims to the token that will be generated. The user provider, when returning the user object, will include all the permissions related to the customer, like create customer, read customer, delete customer, and update customer. I'm also including the roles Manager and Market Supervisor. Again, you can fetch them from the DB or from an external user permission system that you have. Finally, we return the final token with a nice user response message back to the API caller. On the startup Program.cs, I've added the necessary configuration for JWT token generation. This also reads from the appsettings to get the key, issuer, and audience. Perfect. This is the mechanism
for generating the token with the user roles and permissions. Here, if we generate a token from Postman and decode it on a website like jwt.io, we can see what the token is made of. Note that all permissions and roles were added. Notice how this process of authenticating a user or generating a token has nothing to do with our business logic yet. It is all on the adapter side of the story, specifically the Web API and the auth adapters. The only things that we have from the internals of the hexagon are the contracts that are imposed by our application, and the user DTO. This is what we want.
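The contract arrangement just described, where the application defines the port and the adapter implements it, might look roughly like this. Every name here (IAuthProvider, SimpleAuthProvider, IProvideUsers, ITokenGenerator, UserResponseMessage) is an assumption based on the transcript, not the course's exact code:

```csharp
// Sketch: the application (public website) owns the port...
public interface IAuthProvider
{
    Task<UserResponseMessage> Authenticate(string userName, string password);
}

// ...and the adapter, living outside the hexagon, implements it.
public class SimpleAuthProvider : IAuthProvider
{
    private readonly IProvideUsers _userProvider;      // validates credentials
    private readonly ITokenGenerator _tokenGenerator;  // issues the JWT bearer token

    public SimpleAuthProvider(IProvideUsers userProvider,
                              ITokenGenerator tokenGenerator)
    {
        _userProvider = userProvider;
        _tokenGenerator = tokenGenerator;
    }

    public async Task<UserResponseMessage> Authenticate(string userName,
                                                        string password)
    {
        // Could be hardcoded, a DB lookup, or a call to an OAuth server;
        // the hexagon never knows which.
        var user = await _userProvider.ValidateCredentials(userName, password);
        if (user is null)
            return UserResponseMessage.InvalidCredentials();

        // Roles and permissions are embedded as claims in the generated token.
        var token = _tokenGenerator.Generate(user);
        return UserResponseMessage.Authenticated(user, token);
    }
}
```

The point of the sketch is the direction of the dependency: the adapter implements an interface the application defines, never the other way around.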
We want the adapters to adapt to our contracts, not the opposite. We don't want the hexagon coupled to a certain way of implementing authentication and authorization, because this can change tomorrow. Great. This is everything we need to jump to the hexagon part. With the configuration for the JWT token that we added to the startup project's Program.cs, we can now decorate the controllers with the Authorize attribute, making them closed to unauthenticated requests. This means that every time
a request goes through, it will have the user's token. With that in mind, I've created this method called populate user permissions, which basically parses the roles and permissions from the authenticated user into a user DTO object. Here, as an example implementation, I've enhanced the get customer data controller to pass a user DTO to the get customer query. On the get customer query handler, I've added a method called validate user permissions, which checks if the user passed is a manager, in order to allow the request to go through. Otherwise, it will throw an application-level exception. Now, let's see it working. First, I will try to call the
customer endpoint without a token, and see that we get a 401 Unauthorized response back. Now, let's use the new login endpoint to generate a new token. Here, I'm doing a POST request with my user name and password in the body, and I get a token back. Now, if I pass this token, we get the same response as before, with all the customer details rendered. Now, let's edit our user provider and remove the manager role. Let's run the application again, generate a new token, and with that new token, call the customer endpoint once more. See that now I get a 401 again, with a message saying that the user does not have the permissions to query that information. If we go back to the query handler, we are throwing an exception, "User does not have permissions to see the record", which is mapped to a brand new error code. This code is then expected on the controller, which maps it to the proper HTTP status response. Beautiful. All the components are working together, each doing its part within their
boundaries with no coupling. Here we have some very important points to pay attention to. First, notice how the validate permissions method is at the application level. That is because, obviously, this type of permission might change depending on the application that is consuming the domain. Take the private website, for example: you might allow any user with the read customer permission to access customer data, since they will all probably be employees. On that application, you would implement the same method in a different way. Also, notice that the exception we are throwing is specific to the application layer and not to the domain, and that is for the exact same reason I just mentioned. When writing code, we should always be cautious and pay attention to where we place it. It might seem harsh or too pedantic, but it's the price we have to pay for good quality software that can be maintained and evolved over time.
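The application-level permission check just discussed could be sketched like this. The method name follows the transcript; the UserDto shape, exception type, and error code are assumptions for illustration:

```csharp
// Hypothetical sketch: an application-layer permission check inside the
// get customer query handler. It deliberately lives outside the domain,
// because another application consuming the same domain (e.g., the private
// website) may apply a different policy.
private void ValidateUserPermissions(UserDto user)
{
    if (!user.Roles.Contains("Manager"))
        throw new ApplicationLevelException(
            ErrorCode.UserDoesNotHavePermission,
            "User does not have the permissions to query that information.");
}
```

The controller then catches this application-level error code and maps it to the appropriate HTTP status, keeping the domain unaware of HTTP entirely.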
19. Course Conclusion: So that's it. You've come to the end of the course. I hope you have learned a lot along the way. I hope you have grasped the main concept, which is separating the core of your application from the rest of the world. I hope that you learned how to use domain-driven design to model your core business logic, and the application logic that consumes your business logic. I also hope that you've learned how to leverage CQRS, using commands to modify and transform data while orchestrating your application and domain logic. And I hope you also got a grasp of how separating dependencies can help your application be testable, especially unit testable. Anyway, it was great to have you here. I've enjoyed making this course, and I hope you've enjoyed consuming it as well. Now, go ahead and practice in your own time. Try to apply these concepts at your company or with your team. And if you have any questions, don't hesitate to reach out to me on my social media.